00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-v22.11" build number 1705 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 2971 00:00:00.002 originally caused by: 00:00:00.002 Started by timer 00:00:00.081 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.082 The recommended git tool is: git 00:00:00.082 using credential 00000000-0000-0000-0000-000000000002 00:00:00.083 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.125 Fetching changes from the remote Git repository 00:00:00.127 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.174 Using shallow fetch with depth 1 00:00:00.174 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.174 > git --version # timeout=10 00:00:00.190 > git --version # 'git version 2.39.2' 00:00:00.190 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.191 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.191 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.222 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.233 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.245 Checking out Revision d55dd09e9e6d4661df5d1073790609767cbcb60c (FETCH_HEAD) 00:00:06.245 > git config core.sparsecheckout # timeout=10 00:00:06.255 > git read-tree -mu HEAD # timeout=10 00:00:06.270 > git checkout -f d55dd09e9e6d4661df5d1073790609767cbcb60c # timeout=5 00:00:06.287 Commit message: "ansible/roles/custom_facts: Add subsystem info to VMDs' nvmes" 00:00:06.287 > git rev-list --no-walk d55dd09e9e6d4661df5d1073790609767cbcb60c # timeout=10 00:00:06.367 [Pipeline] Start of Pipeline 00:00:06.380 [Pipeline] library 00:00:06.382 Loading library shm_lib@master 00:00:06.382 Library shm_lib@master is cached. Copying from home. 00:00:06.399 [Pipeline] node 00:00:06.416 Running on GP8 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:06.419 [Pipeline] { 00:00:06.432 [Pipeline] catchError 00:00:06.434 [Pipeline] { 00:00:06.454 [Pipeline] wrap 00:00:06.464 [Pipeline] { 00:00:06.471 [Pipeline] stage 00:00:06.473 [Pipeline] { (Prologue) 00:00:06.653 [Pipeline] sh 00:00:06.938 + logger -p user.info -t JENKINS-CI 00:00:06.958 [Pipeline] echo 00:00:06.959 Node: GP8 00:00:06.967 [Pipeline] sh 00:00:07.259 [Pipeline] setCustomBuildProperty 00:00:07.270 [Pipeline] echo 00:00:07.271 Cleanup processes 00:00:07.275 [Pipeline] sh 00:00:07.557 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.816 3082875 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.829 [Pipeline] sh 00:00:08.112 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.113 ++ grep -v 'sudo pgrep' 00:00:08.113 ++ awk '{print $1}' 00:00:08.113 + sudo kill -9 00:00:08.113 + true 00:00:08.128 [Pipeline] cleanWs 00:00:08.138 [WS-CLEANUP] Deleting project workspace... 00:00:08.138 [WS-CLEANUP] Deferred wipeout is used... 
00:00:08.145 [WS-CLEANUP] done 00:00:08.150 [Pipeline] setCustomBuildProperty 00:00:08.171 [Pipeline] sh 00:00:08.456 + sudo git config --global --replace-all safe.directory '*' 00:00:08.541 [Pipeline] nodesByLabel 00:00:08.543 Found a total of 1 nodes with the 'sorcerer' label 00:00:08.554 [Pipeline] httpRequest 00:00:08.561 HttpMethod: GET 00:00:08.561 URL: http://10.211.164.101/packages/jbp_d55dd09e9e6d4661df5d1073790609767cbcb60c.tar.gz 00:00:08.568 Sending request to url: http://10.211.164.101/packages/jbp_d55dd09e9e6d4661df5d1073790609767cbcb60c.tar.gz 00:00:08.572 Response Code: HTTP/1.1 200 OK 00:00:08.573 Success: Status code 200 is in the accepted range: 200,404 00:00:08.573 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_d55dd09e9e6d4661df5d1073790609767cbcb60c.tar.gz 00:00:09.311 [Pipeline] sh 00:00:09.595 + tar --no-same-owner -xf jbp_d55dd09e9e6d4661df5d1073790609767cbcb60c.tar.gz 00:00:09.871 [Pipeline] httpRequest 00:00:09.876 HttpMethod: GET 00:00:09.876 URL: http://10.211.164.101/packages/spdk_26d44a121d9e45b13d090cd95fff369d55d0fe0d.tar.gz 00:00:09.877 Sending request to url: http://10.211.164.101/packages/spdk_26d44a121d9e45b13d090cd95fff369d55d0fe0d.tar.gz 00:00:09.892 Response Code: HTTP/1.1 200 OK 00:00:09.893 Success: Status code 200 is in the accepted range: 200,404 00:00:09.893 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_26d44a121d9e45b13d090cd95fff369d55d0fe0d.tar.gz 00:00:35.001 [Pipeline] sh 00:00:35.283 + tar --no-same-owner -xf spdk_26d44a121d9e45b13d090cd95fff369d55d0fe0d.tar.gz 00:00:39.489 [Pipeline] sh 00:00:39.771 + git -C spdk log --oneline -n5 00:00:39.771 26d44a121 trace: rename owner to owner_type 00:00:39.771 00918d5c0 trace: change trace_flags_init() to return int 00:00:39.771 dc38e848f trace: make spdk_trace_flags_init() a private function 00:00:39.771 679c3183e lvol: set default timeout to 90.0 in bdev_lvol_create_lvstore 00:00:39.771 93731ac74 rpc: unset default timeout value in arg parse 00:00:39.790 [Pipeline] withCredentials 00:00:39.801 > git --version # timeout=10 00:00:39.814 > git --version # 'git version 2.39.2' 00:00:39.833 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:00:39.835 [Pipeline] { 00:00:39.845 [Pipeline] retry 00:00:39.847 [Pipeline] { 00:00:39.866 [Pipeline] sh 00:00:40.151 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4 00:00:40.165 [Pipeline] } 00:00:40.187 [Pipeline] // retry 00:00:40.191 [Pipeline] } 00:00:40.211 [Pipeline] // withCredentials 00:00:40.221 [Pipeline] httpRequest 00:00:40.226 HttpMethod: GET 00:00:40.226 URL: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:00:40.227 Sending request to url: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:00:40.230 Response Code: HTTP/1.1 200 OK 00:00:40.231 Success: Status code 200 is in the accepted range: 200,404 00:00:40.231 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:00:41.620 [Pipeline] sh 00:00:41.902 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:00:44.449 [Pipeline] sh 00:00:44.730 + git -C dpdk log --oneline -n5 00:00:44.730 caf0f5d395 version: 22.11.4 00:00:44.730 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:00:44.730 dc9c799c7d vhost: fix missing spinlock unlock 00:00:44.730 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:00:44.730 6ef77f2a5e net/gve: fix 
RX buffer size alignment 00:00:44.741 [Pipeline] } 00:00:44.758 [Pipeline] // stage 00:00:44.766 [Pipeline] stage 00:00:44.768 [Pipeline] { (Prepare) 00:00:44.788 [Pipeline] writeFile 00:00:44.804 [Pipeline] sh 00:00:45.082 + logger -p user.info -t JENKINS-CI 00:00:45.094 [Pipeline] sh 00:00:45.374 + logger -p user.info -t JENKINS-CI 00:00:45.386 [Pipeline] sh 00:00:45.701 + cat autorun-spdk.conf 00:00:45.701 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:45.701 SPDK_TEST_NVMF=1 00:00:45.701 SPDK_TEST_NVME_CLI=1 00:00:45.701 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:45.701 SPDK_TEST_NVMF_NICS=e810 00:00:45.701 SPDK_TEST_VFIOUSER=1 00:00:45.701 SPDK_RUN_UBSAN=1 00:00:45.701 NET_TYPE=phy 00:00:45.701 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:00:45.702 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:00:45.709 RUN_NIGHTLY=1 00:00:45.713 [Pipeline] readFile 00:00:45.737 [Pipeline] withEnv 00:00:45.739 [Pipeline] { 00:00:45.753 [Pipeline] sh 00:00:46.039 + set -ex 00:00:46.039 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:00:46.039 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:46.039 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:46.039 ++ SPDK_TEST_NVMF=1 00:00:46.039 ++ SPDK_TEST_NVME_CLI=1 00:00:46.039 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:46.039 ++ SPDK_TEST_NVMF_NICS=e810 00:00:46.039 ++ SPDK_TEST_VFIOUSER=1 00:00:46.039 ++ SPDK_RUN_UBSAN=1 00:00:46.039 ++ NET_TYPE=phy 00:00:46.039 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:00:46.039 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:00:46.039 ++ RUN_NIGHTLY=1 00:00:46.039 + case $SPDK_TEST_NVMF_NICS in 00:00:46.039 + DRIVERS=ice 00:00:46.039 + [[ tcp == \r\d\m\a ]] 00:00:46.039 + [[ -n ice ]] 00:00:46.039 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:00:46.607 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:00:46.607 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:00:46.607 rmmod: ERROR: Module irdma is not currently loaded 00:00:46.607 rmmod: ERROR: Module i40iw is not currently loaded 00:00:46.607 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:00:46.607 + true 00:00:46.607 + for D in $DRIVERS 00:00:46.607 + sudo modprobe ice 00:00:46.607 + exit 0 00:00:46.616 [Pipeline] } 00:00:46.634 [Pipeline] // withEnv 00:00:46.639 [Pipeline] } 00:00:46.658 [Pipeline] // stage 00:00:46.667 [Pipeline] catchError 00:00:46.669 [Pipeline] { 00:00:46.684 [Pipeline] timeout 00:00:46.684 Timeout set to expire in 40 min 00:00:46.686 [Pipeline] { 00:00:46.700 [Pipeline] stage 00:00:46.702 [Pipeline] { (Tests) 00:00:46.717 [Pipeline] sh 00:00:47.000 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:47.001 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:47.001 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:47.001 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:00:47.001 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:47.001 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:47.001 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:00:47.001 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:47.001 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:47.001 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:47.001 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:47.001 + source /etc/os-release 00:00:47.001 ++ NAME='Fedora Linux' 00:00:47.001 ++ VERSION='38 (Cloud Edition)' 00:00:47.001 ++ ID=fedora 00:00:47.001 ++ VERSION_ID=38 00:00:47.001 ++ VERSION_CODENAME= 00:00:47.001 ++ PLATFORM_ID=platform:f38 00:00:47.001 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:00:47.001 ++ ANSI_COLOR='0;38;2;60;110;180' 00:00:47.001 ++ LOGO=fedora-logo-icon 00:00:47.001 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:00:47.001 ++ HOME_URL=https://fedoraproject.org/ 00:00:47.001 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:00:47.001 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:00:47.001 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:00:47.001 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:00:47.001 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:00:47.001 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:00:47.001 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:00:47.001 ++ SUPPORT_END=2024-05-14 00:00:47.001 ++ VARIANT='Cloud Edition' 00:00:47.001 ++ VARIANT_ID=cloud 00:00:47.001 + uname -a 00:00:47.001 Linux spdk-gp-08 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:00:47.001 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:00:48.380 Hugepages 00:00:48.380 node hugesize free / total 00:00:48.380 node0 1048576kB 0 / 0 00:00:48.380 node0 2048kB 0 / 0 00:00:48.380 node1 1048576kB 0 / 0 00:00:48.380 node1 2048kB 0 / 0 00:00:48.380 00:00:48.380 Type BDF Vendor Device NUMA Driver Device Block devices 00:00:48.380 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:00:48.380 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:00:48.380 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:00:48.380 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:00:48.380 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:00:48.380 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:00:48.380 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:00:48.380 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:00:48.380 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:00:48.380 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:00:48.380 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:00:48.380 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:00:48.380 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:00:48.380 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:00:48.380 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:00:48.380 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:00:48.380 NVMe 0000:82:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:00:48.380 + rm -f /tmp/spdk-ld-path 00:00:48.380 + source autorun-spdk.conf 00:00:48.380 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:48.380 ++ SPDK_TEST_NVMF=1 00:00:48.380 ++ SPDK_TEST_NVME_CLI=1 00:00:48.380 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:48.380 ++ SPDK_TEST_NVMF_NICS=e810 00:00:48.380 ++ SPDK_TEST_VFIOUSER=1 00:00:48.380 ++ SPDK_RUN_UBSAN=1 00:00:48.380 ++ NET_TYPE=phy 00:00:48.380 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:00:48.380 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:00:48.380 ++ RUN_NIGHTLY=1 00:00:48.380 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:00:48.380 + [[ -n '' ]] 00:00:48.380 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 
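The autorun-spdk.conf dumped and re-sourced above is a plain KEY=VALUE shell fragment, so the harness consumes it with `source` and branches on the flags (the `case $SPDK_TEST_NVMF_NICS` trace earlier). A minimal sketch of that pattern — only the variable names and the e810/ice pairing come from the log, the surrounding guard logic is illustrative:

    # Sketch: consuming a KEY=VALUE conf like autorun-spdk.conf.
    CONF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
    [[ -f $CONF ]] && source "$CONF"
    case "$SPDK_TEST_NVMF_NICS" in
        e810) DRIVERS=ice ;;   # Intel E810 NICs are driven by the ice module
    esac
    for D in $DRIVERS; do
        sudo modprobe "$D"     # matches the '+ sudo modprobe ice' trace above
    done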
00:00:48.380 + for M in /var/spdk/build-*-manifest.txt 00:00:48.380 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:00:48.380 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:48.380 + for M in /var/spdk/build-*-manifest.txt 00:00:48.380 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:00:48.380 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:48.380 ++ uname 00:00:48.380 + [[ Linux == \L\i\n\u\x ]] 00:00:48.380 + sudo dmesg -T 00:00:48.380 + sudo dmesg --clear 00:00:48.380 + dmesg_pid=3083615 00:00:48.380 + [[ Fedora Linux == FreeBSD ]] 00:00:48.380 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:48.380 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:48.380 + sudo dmesg -Tw 00:00:48.380 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:00:48.380 + [[ -x /usr/src/fio-static/fio ]] 00:00:48.380 + export FIO_BIN=/usr/src/fio-static/fio 00:00:48.380 + FIO_BIN=/usr/src/fio-static/fio 00:00:48.380 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:00:48.380 + [[ ! -v VFIO_QEMU_BIN ]] 00:00:48.380 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:00:48.380 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:48.380 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:48.380 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:00:48.380 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:48.380 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:48.380 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:48.380 Test configuration: 00:00:48.380 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:48.380 SPDK_TEST_NVMF=1 00:00:48.380 SPDK_TEST_NVME_CLI=1 00:00:48.380 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:48.380 SPDK_TEST_NVMF_NICS=e810 00:00:48.380 SPDK_TEST_VFIOUSER=1 00:00:48.380 SPDK_RUN_UBSAN=1 00:00:48.380 NET_TYPE=phy 00:00:48.380 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:00:48.380 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:00:48.380 RUN_NIGHTLY=1 17:47:37 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:00:48.380 17:47:37 -- scripts/common.sh@502 -- $ [[ -e /bin/wpdk_common.sh ]] 00:00:48.380 17:47:37 -- scripts/common.sh@510 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:00:48.380 17:47:37 -- scripts/common.sh@511 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:00:48.380 17:47:37 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:48.380 17:47:37 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:48.380 17:47:37 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:48.380 17:47:37 -- paths/export.sh@5 -- $ export PATH 00:00:48.380 17:47:37 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:48.380 17:47:37 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:00:48.380 17:47:37 -- common/autobuild_common.sh@435 -- $ date +%s 00:00:48.380 17:47:37 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1713196057.XXXXXX 00:00:48.380 17:47:37 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1713196057.Zqz8V2 00:00:48.380 17:47:37 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:00:48.380 17:47:37 -- common/autobuild_common.sh@441 -- $ '[' -n v22.11.4 ']' 00:00:48.380 17:47:37 -- common/autobuild_common.sh@442 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:00:48.380 17:47:37 -- common/autobuild_common.sh@442 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:00:48.380 17:47:37 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:00:48.380 17:47:37 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:00:48.380 17:47:37 -- common/autobuild_common.sh@451 -- $ get_config_params 00:00:48.380 17:47:37 -- common/autotest_common.sh@385 -- $ xtrace_disable 00:00:48.380 17:47:37 -- common/autotest_common.sh@10 -- $ set +x 00:00:48.380 17:47:37 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:00:48.380 17:47:37 -- common/autobuild_common.sh@453 -- $ start_monitor_resources 00:00:48.380 17:47:37 -- pm/common@17 -- $ local monitor 00:00:48.380 17:47:37 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:48.380 17:47:37 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=3083651 00:00:48.381 17:47:37 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:48.381 17:47:37 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=3083653 00:00:48.381 17:47:37 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:48.381 17:47:37 -- pm/common@21 -- $ date +%s 00:00:48.381 17:47:37 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=3083655 00:00:48.381 17:47:37 -- pm/common@21 -- $ date +%s 00:00:48.381 17:47:37 -- pm/common@19 -- $ for monitor in 
"${MONITOR_RESOURCES[@]}" 00:00:48.639 17:47:37 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=3083659 00:00:48.639 17:47:37 -- pm/common@21 -- $ date +%s 00:00:48.639 17:47:37 -- pm/common@26 -- $ sleep 1 00:00:48.639 17:47:37 -- pm/common@21 -- $ date +%s 00:00:48.639 17:47:37 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1713196057 00:00:48.639 17:47:37 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1713196057 00:00:48.639 17:47:37 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1713196057 00:00:48.639 17:47:37 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1713196057 00:00:48.639 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1713196057_collect-cpu-load.pm.log 00:00:48.639 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1713196057_collect-vmstat.pm.log 00:00:48.639 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1713196057_collect-cpu-temp.pm.log 00:00:48.639 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1713196057_collect-bmc-pm.bmc.pm.log 00:00:49.573 17:47:38 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT 00:00:49.573 17:47:38 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:00:49.573 17:47:38 -- spdk/autobuild.sh@12 -- $ umask 022 00:00:49.573 17:47:38 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:49.573 17:47:38 -- spdk/autobuild.sh@16 -- $ date -u 00:00:49.573 Mon Apr 15 03:47:38 PM UTC 2024 00:00:49.573 17:47:38 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:00:49.573 v24.05-pre-385-g26d44a121 00:00:49.573 17:47:38 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:00:49.573 17:47:38 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:00:49.573 17:47:38 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:00:49.573 17:47:38 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:00:49.573 17:47:38 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:00:49.573 17:47:38 -- common/autotest_common.sh@10 -- $ set +x 00:00:49.831 ************************************ 00:00:49.831 START TEST ubsan 00:00:49.831 ************************************ 00:00:49.831 17:47:38 -- common/autotest_common.sh@1111 -- $ echo 'using ubsan' 00:00:49.831 using ubsan 00:00:49.831 00:00:49.831 real 0m0.000s 00:00:49.831 user 0m0.000s 00:00:49.831 sys 0m0.000s 00:00:49.831 17:47:38 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:00:49.831 17:47:38 -- common/autotest_common.sh@10 -- $ set +x 00:00:49.831 ************************************ 00:00:49.831 END TEST ubsan 00:00:49.831 ************************************ 00:00:49.831 17:47:38 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:00:49.831 17:47:38 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 
00:00:49.831 17:47:38 -- common/autobuild_common.sh@427 -- $ run_test build_native_dpdk _build_native_dpdk 00:00:49.831 17:47:38 -- common/autotest_common.sh@1087 -- $ '[' 2 -le 1 ']' 00:00:49.831 17:47:38 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:00:49.831 17:47:38 -- common/autotest_common.sh@10 -- $ set +x 00:00:49.831 ************************************ 00:00:49.831 START TEST build_native_dpdk 00:00:49.831 ************************************ 00:00:49.831 17:47:38 -- common/autotest_common.sh@1111 -- $ _build_native_dpdk 00:00:49.831 17:47:38 -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:00:49.831 17:47:38 -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:00:49.831 17:47:38 -- common/autobuild_common.sh@50 -- $ local compiler_version 00:00:49.831 17:47:38 -- common/autobuild_common.sh@51 -- $ local compiler 00:00:49.831 17:47:38 -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:00:49.831 17:47:38 -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:00:49.831 17:47:38 -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:00:49.831 17:47:38 -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:00:49.831 17:47:38 -- common/autobuild_common.sh@61 -- $ CC=gcc 00:00:49.831 17:47:38 -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:00:49.831 17:47:38 -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:00:49.831 17:47:38 -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:00:49.831 17:47:38 -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:00:50.090 17:47:38 -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:00:50.090 17:47:38 -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:00:50.090 17:47:38 -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:00:50.090 17:47:38 -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:00:50.090 17:47:38 -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]] 00:00:50.090 17:47:38 -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:50.090 17:47:38 -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5 00:00:50.090 caf0f5d395 version: 22.11.4 00:00:50.090 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:00:50.090 dc9c799c7d vhost: fix missing spinlock unlock 00:00:50.090 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:00:50.090 6ef77f2a5e net/gve: fix RX buffer size alignment 00:00:50.090 17:47:38 -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:00:50.090 17:47:38 -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:00:50.090 17:47:38 -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:00:50.090 17:47:38 -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:00:50.090 17:47:38 -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:00:50.090 17:47:38 -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:00:50.090 17:47:38 -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:00:50.090 17:47:38 -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:00:50.090 17:47:38 -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:00:50.090 17:47:38 -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:00:50.090 17:47:38 -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:00:50.090 17:47:38 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:00:50.090 17:47:38 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:00:50.090 17:47:38 -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:00:50.090 17:47:38 -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:00:50.090 17:47:38 -- common/autobuild_common.sh@168 -- $ uname -s 00:00:50.090 17:47:38 -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:00:50.090 17:47:38 -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0 00:00:50.090 17:47:38 -- scripts/common.sh@370 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:00:50.090 17:47:38 -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:00:50.090 17:47:38 -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:00:50.090 17:47:38 -- scripts/common.sh@333 -- $ IFS=.-: 00:00:50.090 17:47:38 -- scripts/common.sh@333 -- $ read -ra ver1 00:00:50.090 17:47:38 -- scripts/common.sh@334 -- $ IFS=.-: 00:00:50.090 17:47:38 -- scripts/common.sh@334 -- $ read -ra ver2 00:00:50.090 17:47:38 -- scripts/common.sh@335 -- $ local 'op=<' 00:00:50.090 17:47:38 -- scripts/common.sh@337 -- $ ver1_l=3 00:00:50.090 17:47:38 -- scripts/common.sh@338 -- $ ver2_l=3 00:00:50.090 17:47:38 -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:00:50.090 17:47:38 -- scripts/common.sh@341 -- $ case "$op" in 00:00:50.090 17:47:38 -- scripts/common.sh@342 -- $ : 1 00:00:50.090 17:47:38 -- scripts/common.sh@361 -- $ (( v = 0 )) 00:00:50.090 17:47:38 -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:00:50.090 17:47:38 -- scripts/common.sh@362 -- $ decimal 22 00:00:50.090 17:47:38 -- scripts/common.sh@350 -- $ local d=22 00:00:50.090 17:47:38 -- scripts/common.sh@351 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:00:50.090 17:47:38 -- scripts/common.sh@352 -- $ echo 22 00:00:50.090 17:47:38 -- scripts/common.sh@362 -- $ ver1[v]=22 00:00:50.090 17:47:38 -- scripts/common.sh@363 -- $ decimal 21 00:00:50.090 17:47:38 -- scripts/common.sh@350 -- $ local d=21 00:00:50.090 17:47:38 -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:00:50.090 17:47:38 -- scripts/common.sh@352 -- $ echo 21 00:00:50.090 17:47:38 -- scripts/common.sh@363 -- $ ver2[v]=21 00:00:50.090 17:47:38 -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:00:50.090 17:47:38 -- scripts/common.sh@364 -- $ return 1 00:00:50.090 17:47:38 -- common/autobuild_common.sh@173 -- $ patch -p1 00:00:50.090 patching file config/rte_config.h 00:00:50.090 Hunk #1 succeeded at 60 (offset 1 line). 00:00:50.090 17:47:38 -- common/autobuild_common.sh@177 -- $ dpdk_kmods=false 00:00:50.090 17:47:38 -- common/autobuild_common.sh@178 -- $ uname -s 00:00:50.090 17:47:38 -- common/autobuild_common.sh@178 -- $ '[' Linux = FreeBSD ']' 00:00:50.090 17:47:38 -- common/autobuild_common.sh@182 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:00:50.090 17:47:38 -- common/autobuild_common.sh@182 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:00:56.651 The Meson build system 00:00:56.651 Version: 1.3.1 00:00:56.651 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:00:56.651 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp 00:00:56.651 Build type: native build 00:00:56.651 Program cat found: YES (/usr/bin/cat) 00:00:56.651 Project name: DPDK 00:00:56.651 Project version: 22.11.4 00:00:56.651 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:00:56.651 C linker for the host machine: gcc ld.bfd 2.39-16 00:00:56.651 Host machine cpu family: x86_64 00:00:56.651 Host machine cpu: x86_64 00:00:56.651 Message: ## Building in Developer Mode ## 00:00:56.651 Program pkg-config found: YES (/usr/bin/pkg-config) 00:00:56.651 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:00:56.651 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:00:56.651 Program objdump found: YES (/usr/bin/objdump) 00:00:56.651 Program python3 found: YES (/usr/bin/python3) 00:00:56.651 Program cat found: YES (/usr/bin/cat) 00:00:56.651 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
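The `cmp_versions 22.11.4 '<' 21.11.0` trace above splits both versions on `.-:` and compares them element-wise; it returns 1 (false) here because 22 > 21, which is why the rte_config.h patch branch runs next. The same comparison condensed into a standalone helper — a simplified sketch, not the scripts/common.sh source:

    version_lt() {   # returns 0 (true) if $1 < $2, 1 otherwise
        local -a v1 v2
        local i max
        IFS='.-:' read -ra v1 <<< "$1"
        IFS='.-:' read -ra v2 <<< "$2"
        max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < max; i++ )); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1   # equal is not less-than
    }
    version_lt 22.11.4 21.11.0 || echo "not older"   # matches the 'return 1' above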
00:00:56.651 Checking for size of "void *" : 8 00:00:56.651 Checking for size of "void *" : 8 (cached) 00:00:56.651 Library m found: YES 00:00:56.651 Library numa found: YES 00:00:56.651 Has header "numaif.h" : YES 00:00:56.651 Library fdt found: NO 00:00:56.651 Library execinfo found: NO 00:00:56.651 Has header "execinfo.h" : YES 00:00:56.651 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:00:56.651 Run-time dependency libarchive found: NO (tried pkgconfig) 00:00:56.651 Run-time dependency libbsd found: NO (tried pkgconfig) 00:00:56.651 Run-time dependency jansson found: NO (tried pkgconfig) 00:00:56.651 Run-time dependency openssl found: YES 3.0.9 00:00:56.651 Run-time dependency libpcap found: YES 1.10.4 00:00:56.651 Has header "pcap.h" with dependency libpcap: YES 00:00:56.651 Compiler for C supports arguments -Wcast-qual: YES 00:00:56.651 Compiler for C supports arguments -Wdeprecated: YES 00:00:56.651 Compiler for C supports arguments -Wformat: YES 00:00:56.651 Compiler for C supports arguments -Wformat-nonliteral: NO 00:00:56.651 Compiler for C supports arguments -Wformat-security: NO 00:00:56.651 Compiler for C supports arguments -Wmissing-declarations: YES 00:00:56.651 Compiler for C supports arguments -Wmissing-prototypes: YES 00:00:56.651 Compiler for C supports arguments -Wnested-externs: YES 00:00:56.651 Compiler for C supports arguments -Wold-style-definition: YES 00:00:56.651 Compiler for C supports arguments -Wpointer-arith: YES 00:00:56.651 Compiler for C supports arguments -Wsign-compare: YES 00:00:56.651 Compiler for C supports arguments -Wstrict-prototypes: YES 00:00:56.651 Compiler for C supports arguments -Wundef: YES 00:00:56.651 Compiler for C supports arguments -Wwrite-strings: YES 00:00:56.651 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:00:56.651 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:00:56.652 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:00:56.652 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:00:56.652 Compiler for C supports arguments -mavx512f: YES 00:00:56.652 Checking if "AVX512 checking" compiles: YES 00:00:56.652 Fetching value of define "__SSE4_2__" : 1 00:00:56.652 Fetching value of define "__AES__" : 1 00:00:56.652 Fetching value of define "__AVX__" : 1 00:00:56.652 Fetching value of define "__AVX2__" : (undefined) 00:00:56.652 Fetching value of define "__AVX512BW__" : (undefined) 00:00:56.652 Fetching value of define "__AVX512CD__" : (undefined) 00:00:56.652 Fetching value of define "__AVX512DQ__" : (undefined) 00:00:56.652 Fetching value of define "__AVX512F__" : (undefined) 00:00:56.652 Fetching value of define "__AVX512VL__" : (undefined) 00:00:56.652 Fetching value of define "__PCLMUL__" : 1 00:00:56.652 Fetching value of define "__RDRND__" : 1 00:00:56.652 Fetching value of define "__RDSEED__" : (undefined) 00:00:56.652 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:00:56.652 Compiler for C supports arguments -Wno-format-truncation: YES 00:00:56.652 Message: lib/kvargs: Defining dependency "kvargs" 00:00:56.652 Message: lib/telemetry: Defining dependency "telemetry" 00:00:56.652 Checking for function "getentropy" : YES 00:00:56.652 Message: lib/eal: Defining dependency "eal" 00:00:56.652 Message: lib/ring: Defining dependency "ring" 00:00:56.652 Message: lib/rcu: Defining dependency "rcu" 00:00:56.652 Message: lib/mempool: Defining dependency "mempool" 00:00:56.652 Message: lib/mbuf: Defining dependency "mbuf" 00:00:56.652 
Fetching value of define "__PCLMUL__" : 1 (cached) 00:00:56.652 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:00:56.652 Compiler for C supports arguments -mpclmul: YES 00:00:56.652 Compiler for C supports arguments -maes: YES 00:00:56.652 Compiler for C supports arguments -mavx512f: YES (cached) 00:00:56.652 Compiler for C supports arguments -mavx512bw: YES 00:00:56.652 Compiler for C supports arguments -mavx512dq: YES 00:00:56.652 Compiler for C supports arguments -mavx512vl: YES 00:00:56.652 Compiler for C supports arguments -mvpclmulqdq: YES 00:00:56.652 Compiler for C supports arguments -mavx2: YES 00:00:56.652 Compiler for C supports arguments -mavx: YES 00:00:56.652 Message: lib/net: Defining dependency "net" 00:00:56.652 Message: lib/meter: Defining dependency "meter" 00:00:56.652 Message: lib/ethdev: Defining dependency "ethdev" 00:00:56.652 Message: lib/pci: Defining dependency "pci" 00:00:56.652 Message: lib/cmdline: Defining dependency "cmdline" 00:00:56.652 Message: lib/metrics: Defining dependency "metrics" 00:00:56.652 Message: lib/hash: Defining dependency "hash" 00:00:56.652 Message: lib/timer: Defining dependency "timer" 00:00:56.652 Fetching value of define "__AVX2__" : (undefined) (cached) 00:00:56.652 Compiler for C supports arguments -mavx2: YES (cached) 00:00:56.652 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:00:56.652 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:00:56.652 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:00:56.652 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:00:56.652 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:00:56.652 Message: lib/acl: Defining dependency "acl" 00:00:56.652 Message: lib/bbdev: Defining dependency "bbdev" 00:00:56.652 Message: lib/bitratestats: Defining dependency "bitratestats" 00:00:56.652 Run-time dependency libelf found: YES 0.190 00:00:56.652 Message: lib/bpf: Defining dependency "bpf" 00:00:56.652 Message: lib/cfgfile: Defining dependency "cfgfile" 00:00:56.652 Message: lib/compressdev: Defining dependency "compressdev" 00:00:56.652 Message: lib/cryptodev: Defining dependency "cryptodev" 00:00:56.652 Message: lib/distributor: Defining dependency "distributor" 00:00:56.652 Message: lib/efd: Defining dependency "efd" 00:00:56.652 Message: lib/eventdev: Defining dependency "eventdev" 00:00:56.652 Message: lib/gpudev: Defining dependency "gpudev" 00:00:56.652 Message: lib/gro: Defining dependency "gro" 00:00:56.652 Message: lib/gso: Defining dependency "gso" 00:00:56.652 Message: lib/ip_frag: Defining dependency "ip_frag" 00:00:56.652 Message: lib/jobstats: Defining dependency "jobstats" 00:00:56.652 Message: lib/latencystats: Defining dependency "latencystats" 00:00:56.652 Message: lib/lpm: Defining dependency "lpm" 00:00:56.652 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:00:56.652 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:00:56.652 Fetching value of define "__AVX512IFMA__" : (undefined) 00:00:56.652 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:00:56.652 Message: lib/member: Defining dependency "member" 00:00:56.652 Message: lib/pcapng: Defining dependency "pcapng" 00:00:56.652 Compiler for C supports arguments -Wno-cast-qual: YES 00:00:56.652 Message: lib/power: Defining dependency "power" 00:00:56.652 Message: lib/rawdev: Defining dependency "rawdev" 00:00:56.652 Message: lib/regexdev: Defining dependency "regexdev" 
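The long run of "Fetching value of define" and "Compiler for C supports arguments" lines is meson probing the host compiler; the same checks can be reproduced by hand when debugging why a SIMD path was disabled. These commands are illustrative equivalents, not meson internals:

    # Which SIMD macros does gcc predefine at the default -march?
    echo | gcc -dM -E - | grep __AVX512F__ \
        || echo '__AVX512F__ : (undefined)'   # matches this builder's log
    # Does the compiler accept -mavx512f at all (meson's has_argument check)?
    echo 'int main(void){return 0;}' | gcc -Werror -mavx512f -x c -o /dev/null - \
        && echo 'Compiler for C supports arguments -mavx512f: YES'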
00:00:56.652 Message: lib/dmadev: Defining dependency "dmadev" 00:00:56.652 Message: lib/rib: Defining dependency "rib" 00:00:56.652 Message: lib/reorder: Defining dependency "reorder" 00:00:56.652 Message: lib/sched: Defining dependency "sched" 00:00:56.652 Message: lib/security: Defining dependency "security" 00:00:56.652 Message: lib/stack: Defining dependency "stack" 00:00:56.652 Has header "linux/userfaultfd.h" : YES 00:00:56.652 Message: lib/vhost: Defining dependency "vhost" 00:00:56.652 Message: lib/ipsec: Defining dependency "ipsec" 00:00:56.652 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:00:56.652 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:00:56.652 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:00:56.652 Compiler for C supports arguments -mavx512bw: YES (cached) 00:00:56.652 Message: lib/fib: Defining dependency "fib" 00:00:56.652 Message: lib/port: Defining dependency "port" 00:00:56.652 Message: lib/pdump: Defining dependency "pdump" 00:00:56.652 Message: lib/table: Defining dependency "table" 00:00:56.652 Message: lib/pipeline: Defining dependency "pipeline" 00:00:56.652 Message: lib/graph: Defining dependency "graph" 00:00:56.652 Message: lib/node: Defining dependency "node" 00:00:56.652 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:00:56.652 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:00:56.652 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:00:56.652 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:00:56.652 Compiler for C supports arguments -Wno-sign-compare: YES 00:00:56.652 Compiler for C supports arguments -Wno-unused-value: YES 00:00:58.553 Compiler for C supports arguments -Wno-format: YES 00:00:58.553 Compiler for C supports arguments -Wno-format-security: YES 00:00:58.553 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:00:58.553 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:00:58.553 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:00:58.553 Compiler for C supports arguments -Wno-unused-parameter: YES 00:00:58.553 Fetching value of define "__AVX2__" : (undefined) (cached) 00:00:58.553 Compiler for C supports arguments -mavx2: YES (cached) 00:00:58.553 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:00:58.553 Compiler for C supports arguments -mavx512f: YES (cached) 00:00:58.553 Compiler for C supports arguments -mavx512bw: YES (cached) 00:00:58.553 Compiler for C supports arguments -march=skylake-avx512: YES 00:00:58.553 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:00:58.553 Program doxygen found: YES (/usr/bin/doxygen) 00:00:58.553 Configuring doxy-api.conf using configuration 00:00:58.553 Program sphinx-build found: NO 00:00:58.553 Configuring rte_build_config.h using configuration 00:00:58.553 Message: 00:00:58.553 ================= 00:00:58.553 Applications Enabled 00:00:58.553 ================= 00:00:58.553 00:00:58.553 apps: 00:00:58.553 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:00:58.553 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:00:58.553 test-security-perf, 00:00:58.553 00:00:58.553 Message: 00:00:58.553 ================= 00:00:58.553 Libraries Enabled 00:00:58.553 ================= 00:00:58.553 00:00:58.553 libs: 00:00:58.553 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:00:58.553 meter, ethdev, pci, 
cmdline, metrics, hash, timer, acl, 00:00:58.553 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:00:58.553 eventdev, gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm, 00:00:58.553 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:00:58.553 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:00:58.553 table, pipeline, graph, node, 00:00:58.553 00:00:58.553 Message: 00:00:58.553 =============== 00:00:58.553 Drivers Enabled 00:00:58.553 =============== 00:00:58.553 00:00:58.553 common: 00:00:58.553 00:00:58.553 bus: 00:00:58.553 pci, vdev, 00:00:58.553 mempool: 00:00:58.553 ring, 00:00:58.553 dma: 00:00:58.553 00:00:58.553 net: 00:00:58.553 i40e, 00:00:58.553 raw: 00:00:58.553 00:00:58.553 crypto: 00:00:58.553 00:00:58.553 compress: 00:00:58.553 00:00:58.553 regex: 00:00:58.553 00:00:58.553 vdpa: 00:00:58.553 00:00:58.553 event: 00:00:58.553 00:00:58.553 baseband: 00:00:58.553 00:00:58.553 gpu: 00:00:58.553 00:00:58.553 00:00:58.553 Message: 00:00:58.553 ================= 00:00:58.553 Content Skipped 00:00:58.553 ================= 00:00:58.553 00:00:58.553 apps: 00:00:58.553 00:00:58.553 libs: 00:00:58.553 kni: explicitly disabled via build config (deprecated lib) 00:00:58.553 flow_classify: explicitly disabled via build config (deprecated lib) 00:00:58.553 00:00:58.553 drivers: 00:00:58.553 common/cpt: not in enabled drivers build config 00:00:58.553 common/dpaax: not in enabled drivers build config 00:00:58.553 common/iavf: not in enabled drivers build config 00:00:58.553 common/idpf: not in enabled drivers build config 00:00:58.553 common/mvep: not in enabled drivers build config 00:00:58.553 common/octeontx: not in enabled drivers build config 00:00:58.553 bus/auxiliary: not in enabled drivers build config 00:00:58.553 bus/dpaa: not in enabled drivers build config 00:00:58.553 bus/fslmc: not in enabled drivers build config 00:00:58.553 bus/ifpga: not in enabled drivers build config 00:00:58.553 bus/vmbus: not in enabled drivers build config 00:00:58.553 common/cnxk: not in enabled drivers build config 00:00:58.553 common/mlx5: not in enabled drivers build config 00:00:58.553 common/qat: not in enabled drivers build config 00:00:58.553 common/sfc_efx: not in enabled drivers build config 00:00:58.553 mempool/bucket: not in enabled drivers build config 00:00:58.553 mempool/cnxk: not in enabled drivers build config 00:00:58.553 mempool/dpaa: not in enabled drivers build config 00:00:58.553 mempool/dpaa2: not in enabled drivers build config 00:00:58.553 mempool/octeontx: not in enabled drivers build config 00:00:58.553 mempool/stack: not in enabled drivers build config 00:00:58.553 dma/cnxk: not in enabled drivers build config 00:00:58.553 dma/dpaa: not in enabled drivers build config 00:00:58.553 dma/dpaa2: not in enabled drivers build config 00:00:58.553 dma/hisilicon: not in enabled drivers build config 00:00:58.553 dma/idxd: not in enabled drivers build config 00:00:58.553 dma/ioat: not in enabled drivers build config 00:00:58.553 dma/skeleton: not in enabled drivers build config 00:00:58.553 net/af_packet: not in enabled drivers build config 00:00:58.553 net/af_xdp: not in enabled drivers build config 00:00:58.554 net/ark: not in enabled drivers build config 00:00:58.554 net/atlantic: not in enabled drivers build config 00:00:58.554 net/avp: not in enabled drivers build config 00:00:58.554 net/axgbe: not in enabled drivers build config 00:00:58.554 net/bnx2x: not in enabled drivers build config 00:00:58.554 net/bnxt: not in 
enabled drivers build config 00:00:58.554 net/bonding: not in enabled drivers build config 00:00:58.554 net/cnxk: not in enabled drivers build config 00:00:58.554 net/cxgbe: not in enabled drivers build config 00:00:58.554 net/dpaa: not in enabled drivers build config 00:00:58.554 net/dpaa2: not in enabled drivers build config 00:00:58.554 net/e1000: not in enabled drivers build config 00:00:58.554 net/ena: not in enabled drivers build config 00:00:58.554 net/enetc: not in enabled drivers build config 00:00:58.554 net/enetfec: not in enabled drivers build config 00:00:58.554 net/enic: not in enabled drivers build config 00:00:58.554 net/failsafe: not in enabled drivers build config 00:00:58.554 net/fm10k: not in enabled drivers build config 00:00:58.554 net/gve: not in enabled drivers build config 00:00:58.554 net/hinic: not in enabled drivers build config 00:00:58.554 net/hns3: not in enabled drivers build config 00:00:58.554 net/iavf: not in enabled drivers build config 00:00:58.554 net/ice: not in enabled drivers build config 00:00:58.554 net/idpf: not in enabled drivers build config 00:00:58.554 net/igc: not in enabled drivers build config 00:00:58.554 net/ionic: not in enabled drivers build config 00:00:58.554 net/ipn3ke: not in enabled drivers build config 00:00:58.554 net/ixgbe: not in enabled drivers build config 00:00:58.554 net/kni: not in enabled drivers build config 00:00:58.554 net/liquidio: not in enabled drivers build config 00:00:58.554 net/mana: not in enabled drivers build config 00:00:58.554 net/memif: not in enabled drivers build config 00:00:58.554 net/mlx4: not in enabled drivers build config 00:00:58.554 net/mlx5: not in enabled drivers build config 00:00:58.554 net/mvneta: not in enabled drivers build config 00:00:58.554 net/mvpp2: not in enabled drivers build config 00:00:58.554 net/netvsc: not in enabled drivers build config 00:00:58.554 net/nfb: not in enabled drivers build config 00:00:58.554 net/nfp: not in enabled drivers build config 00:00:58.554 net/ngbe: not in enabled drivers build config 00:00:58.554 net/null: not in enabled drivers build config 00:00:58.554 net/octeontx: not in enabled drivers build config 00:00:58.554 net/octeon_ep: not in enabled drivers build config 00:00:58.554 net/pcap: not in enabled drivers build config 00:00:58.554 net/pfe: not in enabled drivers build config 00:00:58.554 net/qede: not in enabled drivers build config 00:00:58.554 net/ring: not in enabled drivers build config 00:00:58.554 net/sfc: not in enabled drivers build config 00:00:58.554 net/softnic: not in enabled drivers build config 00:00:58.554 net/tap: not in enabled drivers build config 00:00:58.554 net/thunderx: not in enabled drivers build config 00:00:58.554 net/txgbe: not in enabled drivers build config 00:00:58.554 net/vdev_netvsc: not in enabled drivers build config 00:00:58.554 net/vhost: not in enabled drivers build config 00:00:58.554 net/virtio: not in enabled drivers build config 00:00:58.554 net/vmxnet3: not in enabled drivers build config 00:00:58.554 raw/cnxk_bphy: not in enabled drivers build config 00:00:58.554 raw/cnxk_gpio: not in enabled drivers build config 00:00:58.554 raw/dpaa2_cmdif: not in enabled drivers build config 00:00:58.554 raw/ifpga: not in enabled drivers build config 00:00:58.554 raw/ntb: not in enabled drivers build config 00:00:58.554 raw/skeleton: not in enabled drivers build config 00:00:58.554 crypto/armv8: not in enabled drivers build config 00:00:58.554 crypto/bcmfs: not in enabled drivers build config 00:00:58.554 
crypto/caam_jr: not in enabled drivers build config 00:00:58.554 crypto/ccp: not in enabled drivers build config 00:00:58.554 crypto/cnxk: not in enabled drivers build config 00:00:58.554 crypto/dpaa_sec: not in enabled drivers build config 00:00:58.554 crypto/dpaa2_sec: not in enabled drivers build config 00:00:58.554 crypto/ipsec_mb: not in enabled drivers build config 00:00:58.554 crypto/mlx5: not in enabled drivers build config 00:00:58.554 crypto/mvsam: not in enabled drivers build config 00:00:58.554 crypto/nitrox: not in enabled drivers build config 00:00:58.554 crypto/null: not in enabled drivers build config 00:00:58.554 crypto/octeontx: not in enabled drivers build config 00:00:58.554 crypto/openssl: not in enabled drivers build config 00:00:58.554 crypto/scheduler: not in enabled drivers build config 00:00:58.554 crypto/uadk: not in enabled drivers build config 00:00:58.554 crypto/virtio: not in enabled drivers build config 00:00:58.554 compress/isal: not in enabled drivers build config 00:00:58.554 compress/mlx5: not in enabled drivers build config 00:00:58.554 compress/octeontx: not in enabled drivers build config 00:00:58.554 compress/zlib: not in enabled drivers build config 00:00:58.554 regex/mlx5: not in enabled drivers build config 00:00:58.554 regex/cn9k: not in enabled drivers build config 00:00:58.554 vdpa/ifc: not in enabled drivers build config 00:00:58.554 vdpa/mlx5: not in enabled drivers build config 00:00:58.554 vdpa/sfc: not in enabled drivers build config 00:00:58.554 event/cnxk: not in enabled drivers build config 00:00:58.554 event/dlb2: not in enabled drivers build config 00:00:58.554 event/dpaa: not in enabled drivers build config 00:00:58.554 event/dpaa2: not in enabled drivers build config 00:00:58.554 event/dsw: not in enabled drivers build config 00:00:58.554 event/opdl: not in enabled drivers build config 00:00:58.554 event/skeleton: not in enabled drivers build config 00:00:58.554 event/sw: not in enabled drivers build config 00:00:58.554 event/octeontx: not in enabled drivers build config 00:00:58.554 baseband/acc: not in enabled drivers build config 00:00:58.554 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:00:58.554 baseband/fpga_lte_fec: not in enabled drivers build config 00:00:58.554 baseband/la12xx: not in enabled drivers build config 00:00:58.554 baseband/null: not in enabled drivers build config 00:00:58.554 baseband/turbo_sw: not in enabled drivers build config 00:00:58.554 gpu/cuda: not in enabled drivers build config 00:00:58.554 00:00:58.554 00:00:58.554 Build targets in project: 316 00:00:58.554 00:00:58.554 DPDK 22.11.4 00:00:58.554 00:00:58.554 User defined options 00:00:58.554 libdir : lib 00:00:58.554 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:00:58.554 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:00:58.554 c_link_args : 00:00:58.554 enable_docs : false 00:00:58.554 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:00:58.554 enable_kmods : false 00:00:58.554 machine : native 00:00:58.554 tests : false 00:00:58.554 00:00:58.554 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:00:58.554 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
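Meson's closing WARNING notes that the bare `meson [options]` spelling used by autobuild_common.sh@182 is deprecated in favor of `meson setup [options]`. The unambiguous form of the same configure-and-build sequence would be the following, with arguments taken from the log and DPDK_PREFIX standing in for the long workspace path (-j48 in the log; $(nproc) used here):

    DPDK_PREFIX=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
    meson setup build-tmp --prefix="$DPDK_PREFIX" --libdir lib \
        -Denable_docs=false -Denable_kmods=false -Dtests=false \
        -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
        -Dmachine=native \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
    ninja -C build-tmp -j"$(nproc)"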
00:00:58.820 17:47:47 -- common/autobuild_common.sh@186 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 00:00:58.820 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:00:59.081 [1/745] Generating lib/rte_telemetry_mingw with a custom command 00:00:59.081 [2/745] Generating lib/rte_kvargs_def with a custom command 00:00:59.081 [3/745] Generating lib/rte_kvargs_mingw with a custom command 00:00:59.081 [4/745] Generating lib/rte_telemetry_def with a custom command 00:00:59.081 [5/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:00:59.081 [6/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:00:59.081 [7/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:00:59.081 [8/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:00:59.081 [9/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:00:59.082 [10/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:00:59.082 [11/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:00:59.082 [12/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:00:59.082 [13/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:00:59.082 [14/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:00:59.082 [15/745] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:00:59.082 [16/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:00:59.082 [17/745] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:00:59.082 [18/745] Linking static target lib/librte_kvargs.a 00:00:59.082 [19/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:00:59.082 [20/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:00:59.082 [21/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:00:59.082 [22/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:00:59.082 [23/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:00:59.082 [24/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:00:59.082 [25/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:00:59.082 [26/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:00:59.082 [27/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:00:59.082 [28/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:00:59.082 [29/745] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:00:59.344 [30/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:00:59.344 [31/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:00:59.344 [32/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:00:59.344 [33/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:00:59.344 [34/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:00:59.344 [35/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:00:59.344 [36/745] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:00:59.344 [37/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:00:59.344 [38/745] Compiling C object 
lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:00:59.344 [39/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:00:59.344 [40/745] Generating lib/rte_eal_def with a custom command 00:00:59.344 [41/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:00:59.344 [42/745] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:00:59.344 [43/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:00:59.344 [44/745] Generating lib/rte_eal_mingw with a custom command 00:00:59.344 [45/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:00:59.344 [46/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:00:59.344 [47/745] Generating lib/rte_ring_def with a custom command 00:00:59.344 [48/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:00:59.344 [49/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:00:59.344 [50/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:00:59.344 [51/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:00:59.344 [52/745] Generating lib/rte_rcu_mingw with a custom command 00:00:59.344 [53/745] Generating lib/rte_ring_mingw with a custom command 00:00:59.344 [54/745] Generating lib/rte_rcu_def with a custom command 00:00:59.344 [55/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:00:59.344 [56/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:00:59.344 [57/745] Generating lib/rte_mempool_mingw with a custom command 00:00:59.344 [58/745] Generating lib/rte_mempool_def with a custom command 00:00:59.344 [59/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:00:59.344 [60/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:00:59.344 [61/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:00:59.344 [62/745] Generating lib/rte_mbuf_mingw with a custom command 00:00:59.344 [63/745] Generating lib/rte_mbuf_def with a custom command 00:00:59.344 [64/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:00:59.344 [65/745] Generating lib/rte_net_mingw with a custom command 00:00:59.344 [66/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:00:59.344 [67/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:00:59.344 [68/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:00:59.344 [69/745] Generating lib/rte_net_def with a custom command 00:00:59.344 [70/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:00:59.344 [71/745] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:00:59.344 [72/745] Generating lib/rte_meter_mingw with a custom command 00:00:59.344 [73/745] Generating lib/rte_meter_def with a custom command 00:00:59.344 [74/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:00:59.344 [75/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:00:59.344 [76/745] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:00:59.344 [77/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:00:59.608 [78/745] Generating lib/rte_ethdev_def with a custom command 00:00:59.608 [79/745] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:00:59.608 [80/745] Compiling C object 
lib/librte_ring.a.p/ring_rte_ring.c.o 00:00:59.608 [81/745] Linking static target lib/librte_ring.a 00:00:59.608 [82/745] Generating lib/rte_ethdev_mingw with a custom command 00:00:59.608 [83/745] Linking target lib/librte_kvargs.so.23.0 00:00:59.608 [84/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:00:59.608 [85/745] Generating lib/rte_pci_def with a custom command 00:00:59.608 [86/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:00:59.608 [87/745] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:00:59.608 [88/745] Linking static target lib/librte_meter.a 00:00:59.608 [89/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:00:59.608 [90/745] Generating lib/rte_pci_mingw with a custom command 00:00:59.866 [91/745] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:00:59.866 [92/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:00:59.866 [93/745] Linking static target lib/librte_pci.a 00:00:59.866 [94/745] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:00:59.866 [95/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:00:59.866 [96/745] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:00:59.866 [97/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:00:59.866 [98/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:00:59.866 [99/745] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:00:59.866 [100/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:00:59.866 [101/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:00.126 [102/745] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:00.126 [103/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:00.126 [104/745] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:00.126 [105/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:00.126 [106/745] Generating lib/rte_cmdline_def with a custom command 00:01:00.126 [107/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:00.126 [108/745] Generating lib/rte_cmdline_mingw with a custom command 00:01:00.126 [109/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:00.126 [110/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:00.126 [111/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:00.126 [112/745] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:00.126 [113/745] Linking static target lib/librte_telemetry.a 00:01:00.126 [114/745] Generating lib/rte_metrics_def with a custom command 00:01:00.126 [115/745] Generating lib/rte_metrics_mingw with a custom command 00:01:00.126 [116/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:00.126 [117/745] Generating lib/rte_hash_def with a custom command 00:01:00.126 [118/745] Generating lib/rte_hash_mingw with a custom command 00:01:00.126 [119/745] Generating lib/rte_timer_mingw with a custom command 00:01:00.126 [120/745] Generating lib/rte_timer_def with a custom command 00:01:00.126 [121/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:00.126 [122/745] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:00.401 [123/745] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:00.401 [124/745] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:01:00.401 [125/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:00.401 [126/745] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:00.401 [127/745] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:00.401 [128/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:00.401 [129/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:00.676 [130/745] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:00.676 [131/745] Generating lib/rte_acl_def with a custom command 00:01:00.676 [132/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:00.676 [133/745] Generating lib/rte_acl_mingw with a custom command 00:01:00.677 [134/745] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:00.677 [135/745] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:00.677 [136/745] Generating lib/rte_bbdev_def with a custom command 00:01:00.677 [137/745] Generating lib/rte_bbdev_mingw with a custom command 00:01:00.677 [138/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:00.677 [139/745] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:00.677 [140/745] Generating lib/rte_bitratestats_def with a custom command 00:01:00.677 [141/745] Generating lib/rte_bitratestats_mingw with a custom command 00:01:00.677 [142/745] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:00.677 [143/745] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:00.677 [144/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:00.677 [145/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:00.677 [146/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:00.677 [147/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:00.677 [148/745] Linking target lib/librte_telemetry.so.23.0 00:01:00.677 [149/745] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:00.677 [150/745] Generating lib/rte_bpf_def with a custom command 00:01:00.943 [151/745] Generating lib/rte_bpf_mingw with a custom command 00:01:00.943 [152/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:00.943 [153/745] Generating lib/rte_cfgfile_mingw with a custom command 00:01:00.943 [154/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:00.943 [155/745] Generating lib/rte_cfgfile_def with a custom command 00:01:00.943 [156/745] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:00.943 [157/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:00.943 [158/745] Generating lib/rte_compressdev_mingw with a custom command 00:01:00.943 [159/745] Generating lib/rte_compressdev_def with a custom command 00:01:00.943 [160/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:00.943 [161/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:00.943 [162/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:00.943 [163/745] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 
00:01:00.943 [164/745] Generating lib/rte_cryptodev_def with a custom command 00:01:00.943 [165/745] Generating lib/rte_cryptodev_mingw with a custom command 00:01:00.943 [166/745] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:00.943 [167/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:00.943 [168/745] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:00.943 [169/745] Linking static target lib/librte_rcu.a 00:01:00.943 [170/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:00.943 [171/745] Linking static target lib/librte_timer.a 00:01:00.943 [172/745] Linking static target lib/librte_cmdline.a 00:01:00.943 [173/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:00.943 [174/745] Generating lib/rte_distributor_def with a custom command 00:01:01.201 [175/745] Generating lib/rte_distributor_mingw with a custom command 00:01:01.201 [176/745] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:01.201 [177/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:01.201 [178/745] Linking static target lib/librte_net.a 00:01:01.201 [179/745] Generating lib/rte_efd_def with a custom command 00:01:01.201 [180/745] Generating lib/rte_efd_mingw with a custom command 00:01:01.201 [181/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:01.201 [182/745] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:01:01.201 [183/745] Linking static target lib/librte_metrics.a 00:01:01.465 [184/745] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:01:01.465 [185/745] Linking static target lib/librte_cfgfile.a 00:01:01.465 [186/745] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:01.465 [187/745] Linking static target lib/librte_mempool.a 00:01:01.465 [188/745] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:01.465 [189/745] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:01:01.724 [190/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:01.724 [191/745] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:01.724 [192/745] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:01.724 [193/745] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:01:01.724 [194/745] Generating lib/rte_eventdev_def with a custom command 00:01:01.724 [195/745] Generating lib/rte_eventdev_mingw with a custom command 00:01:01.724 [196/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:01.724 [197/745] Generating lib/rte_gpudev_def with a custom command 00:01:01.724 [198/745] Linking static target lib/librte_eal.a 00:01:01.724 [199/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:01:01.724 [200/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:01:01.724 [201/745] Generating lib/rte_gpudev_mingw with a custom command 00:01:01.986 [202/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:01:01.986 [203/745] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:01:01.986 [204/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:01:01.986 [205/745] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:01:01.986 [206/745] Linking static target lib/librte_bitratestats.a 00:01:01.986 [207/745] Generating lib/rte_gro_def with 
a custom command 00:01:01.986 [208/745] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:01:01.986 [209/745] Generating lib/rte_gro_mingw with a custom command 00:01:01.986 [210/745] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:01:01.986 [211/745] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:01:01.986 [212/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:01:02.249 [213/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:02.249 [214/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:02.249 [215/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:01:02.249 [216/745] Generating lib/rte_gso_def with a custom command 00:01:02.249 [217/745] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:02.249 [218/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:01:02.249 [219/745] Generating lib/rte_gso_mingw with a custom command 00:01:02.249 [220/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:01:02.249 [221/745] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:01:02.508 [222/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:02.508 [223/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:01:02.508 [224/745] Generating lib/rte_ip_frag_def with a custom command 00:01:02.508 [225/745] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:02.508 [226/745] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:01:02.508 [227/745] Linking static target lib/librte_bbdev.a 00:01:02.508 [228/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:02.508 [229/745] Generating lib/rte_ip_frag_mingw with a custom command 00:01:02.508 [230/745] Generating lib/rte_jobstats_def with a custom command 00:01:02.508 [231/745] Generating lib/rte_jobstats_mingw with a custom command 00:01:02.508 [232/745] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:02.508 [233/745] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:01:02.508 [234/745] Generating lib/rte_latencystats_def with a custom command 00:01:02.508 [235/745] Generating lib/rte_latencystats_mingw with a custom command 00:01:02.769 [236/745] Generating lib/rte_lpm_mingw with a custom command 00:01:02.769 [237/745] Generating lib/rte_lpm_def with a custom command 00:01:02.769 [238/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:02.769 [239/745] Linking static target lib/librte_compressdev.a 00:01:02.769 [240/745] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:01:02.769 [241/745] Linking static target lib/librte_jobstats.a 00:01:02.769 [242/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:01:02.769 [243/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:01:03.039 [244/745] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:03.039 [245/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:01:03.039 [246/745] Linking static target lib/librte_distributor.a 00:01:03.039 [247/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:01:03.039 [248/745] Generating 
lib/rte_member_def with a custom command 00:01:03.301 [249/745] Generating lib/rte_member_mingw with a custom command 00:01:03.301 [250/745] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:03.301 [251/745] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:01:03.301 [252/745] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:01:03.301 [253/745] Generating lib/rte_pcapng_def with a custom command 00:01:03.301 [254/745] Generating lib/rte_pcapng_mingw with a custom command 00:01:03.301 [255/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:01:03.301 [256/745] Linking static target lib/librte_bpf.a 00:01:03.301 [257/745] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:01:03.301 [258/745] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:01:03.301 [259/745] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:01:03.301 [260/745] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:01:03.301 [261/745] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:03.301 [262/745] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:03.562 [263/745] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:01:03.562 [264/745] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:03.562 [265/745] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:01:03.562 [266/745] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:01:03.562 [267/745] Linking static target lib/librte_gpudev.a 00:01:03.562 [268/745] Generating lib/rte_power_def with a custom command 00:01:03.562 [269/745] Generating lib/rte_power_mingw with a custom command 00:01:03.562 [270/745] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:01:03.562 [271/745] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:01:03.562 [272/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:01:03.562 [273/745] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:03.562 [274/745] Generating lib/rte_rawdev_def with a custom command 00:01:03.562 [275/745] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:01:03.562 [276/745] Linking static target lib/librte_gro.a 00:01:03.562 [277/745] Generating lib/rte_rawdev_mingw with a custom command 00:01:03.562 [278/745] Generating lib/rte_regexdev_def with a custom command 00:01:03.562 [279/745] Generating lib/rte_regexdev_mingw with a custom command 00:01:03.562 [280/745] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:03.562 [281/745] Generating lib/rte_dmadev_def with a custom command 00:01:03.562 [282/745] Generating lib/rte_dmadev_mingw with a custom command 00:01:03.824 [283/745] Generating lib/rte_rib_mingw with a custom command 00:01:03.824 [284/745] Generating lib/rte_rib_def with a custom command 00:01:03.824 [285/745] Generating lib/rte_reorder_def with a custom command 00:01:03.824 [286/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:01:03.824 [287/745] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:03.824 [288/745] Generating lib/rte_reorder_mingw with a custom command 00:01:03.824 [289/745] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:01:04.081 [290/745] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:01:04.081 [291/745] Generating lib/gro.sym_chk 
with a custom command (wrapped by meson to capture output) 00:01:04.081 [292/745] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:01:04.081 [293/745] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:01:04.081 [294/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:01:04.081 [295/745] Generating lib/rte_sched_def with a custom command 00:01:04.081 [296/745] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:01:04.081 [297/745] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:01:04.081 [298/745] Generating lib/rte_sched_mingw with a custom command 00:01:04.081 [299/745] Linking static target lib/member/libsketch_avx512_tmp.a 00:01:04.081 [300/745] Generating lib/rte_security_def with a custom command 00:01:04.081 [301/745] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:01:04.081 [302/745] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:04.081 [303/745] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:01:04.081 [304/745] Linking static target lib/librte_latencystats.a 00:01:04.081 [305/745] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:01:04.081 [306/745] Generating lib/rte_security_mingw with a custom command 00:01:04.081 [307/745] Generating lib/rte_stack_def with a custom command 00:01:04.081 [308/745] Generating lib/rte_stack_mingw with a custom command 00:01:04.344 [309/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:01:04.344 [310/745] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:01:04.344 [311/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:01:04.344 [312/745] Linking static target lib/librte_rawdev.a 00:01:04.344 [313/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:01:04.344 [314/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:01:04.344 [315/745] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:01:04.344 [316/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:01:04.344 [317/745] Linking static target lib/librte_stack.a 00:01:04.344 [318/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:01:04.344 [319/745] Generating lib/rte_vhost_def with a custom command 00:01:04.344 [320/745] Generating lib/rte_vhost_mingw with a custom command 00:01:04.345 [321/745] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:04.345 [322/745] Linking static target lib/librte_dmadev.a 00:01:04.345 [323/745] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:04.345 [324/745] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:04.345 [325/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:01:04.345 [326/745] Linking static target lib/librte_ip_frag.a 00:01:04.605 [327/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:01:04.605 [328/745] Generating lib/rte_ipsec_def with a custom command 00:01:04.605 [329/745] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:01:04.605 [330/745] Generating lib/rte_ipsec_mingw with a custom command 00:01:04.605 [331/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:01:04.869 [332/745] Compiling C object 
lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:01:04.869 [333/745] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:01:04.869 [334/745] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:04.869 [335/745] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:01:04.869 [336/745] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:04.869 [337/745] Generating lib/rte_fib_def with a custom command 00:01:04.869 [338/745] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:01:05.132 [339/745] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:05.132 [340/745] Generating lib/rte_fib_mingw with a custom command 00:01:05.132 [341/745] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:01:05.132 [342/745] Linking static target lib/librte_gso.a 00:01:05.132 [343/745] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:01:05.132 [344/745] Linking static target lib/librte_regexdev.a 00:01:05.132 [345/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:05.132 [346/745] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:05.393 [347/745] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:01:05.393 [348/745] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:01:05.393 [349/745] Linking static target lib/librte_efd.a 00:01:05.393 [350/745] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:01:05.393 [351/745] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:01:05.393 [352/745] Linking static target lib/librte_pcapng.a 00:01:05.654 [353/745] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:05.654 [354/745] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:01:05.654 [355/745] Linking static target lib/librte_lpm.a 00:01:05.654 [356/745] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:01:05.654 [357/745] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:05.654 [358/745] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:01:05.654 [359/745] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:05.654 [360/745] Linking static target lib/librte_reorder.a 00:01:05.654 [361/745] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:01:05.654 [362/745] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:01:05.654 [363/745] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:05.917 [364/745] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:05.917 [365/745] Generating lib/rte_port_def with a custom command 00:01:05.917 [366/745] Generating lib/rte_port_mingw with a custom command 00:01:05.917 [367/745] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:01:05.917 [368/745] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:01:05.917 [369/745] Generating lib/rte_pdump_mingw with a custom command 00:01:05.917 [370/745] Generating lib/rte_pdump_def with a custom command 00:01:05.917 [371/745] Linking static target lib/fib/libtrie_avx512_tmp.a 00:01:05.917 [372/745] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:01:05.917 [373/745] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:01:05.917 [374/745] Compiling C object 
lib/librte_ipsec.a.p/ipsec_sa.c.o 00:01:05.917 [375/745] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:01:05.917 [376/745] Linking static target lib/acl/libavx2_tmp.a 00:01:05.917 [377/745] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:05.917 [378/745] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:05.917 [379/745] Linking static target lib/librte_security.a 00:01:06.182 [380/745] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:01:06.182 [381/745] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:01:06.182 [382/745] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:06.182 [383/745] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:01:06.182 [384/745] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:01:06.182 [385/745] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:06.182 [386/745] Linking static target lib/librte_power.a 00:01:06.441 [387/745] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:06.441 [388/745] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:01:06.441 [389/745] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:06.441 [390/745] Linking static target lib/librte_rib.a 00:01:06.441 [391/745] Linking static target lib/librte_hash.a 00:01:06.441 [392/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:01:06.441 [393/745] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:01:06.441 [394/745] Linking static target lib/acl/libavx512_tmp.a 00:01:06.441 [395/745] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:01:06.441 [396/745] Linking static target lib/librte_acl.a 00:01:06.441 [397/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:01:06.704 [398/745] Generating lib/rte_table_def with a custom command 00:01:06.704 [399/745] Generating lib/rte_table_mingw with a custom command 00:01:06.704 [400/745] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:06.969 [401/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:01:06.969 [402/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:06.969 [403/745] Linking static target lib/librte_ethdev.a 00:01:06.970 [404/745] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:01:06.970 [405/745] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:07.229 [406/745] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:01:07.229 [407/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:07.229 [408/745] Linking static target lib/librte_mbuf.a 00:01:07.229 [409/745] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:01:07.229 [410/745] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:01:07.229 [411/745] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:01:07.229 [412/745] Generating lib/rte_pipeline_mingw with a custom command 00:01:07.229 [413/745] Generating lib/rte_pipeline_def with a custom command 00:01:07.229 [414/745] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:01:07.229 [415/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:01:07.229 [416/745] Compiling C object 
lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:07.229 [417/745] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:01:07.229 [418/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:01:07.229 [419/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:01:07.229 [420/745] Generating lib/rte_graph_def with a custom command 00:01:07.492 [421/745] Generating lib/rte_graph_mingw with a custom command 00:01:07.492 [422/745] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:07.492 [423/745] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:01:07.492 [424/745] Linking static target lib/librte_fib.a 00:01:07.492 [425/745] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:01:07.492 [426/745] Linking static target lib/librte_member.a 00:01:07.492 [427/745] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:01:07.759 [428/745] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:01:07.759 [429/745] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:01:07.759 [430/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:01:07.759 [431/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:01:07.759 [432/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:01:07.759 [433/745] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:01:07.759 [434/745] Linking static target lib/librte_eventdev.a 00:01:07.759 [435/745] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:07.759 [436/745] Compiling C object lib/librte_node.a.p/node_null.c.o 00:01:07.759 [437/745] Generating lib/rte_node_def with a custom command 00:01:07.759 [438/745] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:01:07.759 [439/745] Generating lib/rte_node_mingw with a custom command 00:01:08.018 [440/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:08.018 [441/745] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:01:08.018 [442/745] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.018 [443/745] Linking static target lib/librte_sched.a 00:01:08.018 [444/745] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:08.018 [445/745] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.018 [446/745] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:01:08.018 [447/745] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.018 [448/745] Generating drivers/rte_bus_pci_def with a custom command 00:01:08.018 [449/745] Generating drivers/rte_bus_pci_mingw with a custom command 00:01:08.018 [450/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:08.280 [451/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:01:08.280 [452/745] Generating drivers/rte_bus_vdev_mingw with a custom command 00:01:08.280 [453/745] Generating drivers/rte_bus_vdev_def with a custom command 00:01:08.280 [454/745] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:08.280 [455/745] Generating drivers/rte_mempool_ring_def with a custom command 00:01:08.280 [456/745] Generating drivers/rte_mempool_ring_mingw with a custom command 00:01:08.280 [457/745] 
Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:01:08.280 [458/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:01:08.280 [459/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:01:08.280 [460/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:08.546 [461/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:08.546 [462/745] Linking static target lib/librte_cryptodev.a 00:01:08.546 [463/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:08.546 [464/745] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:01:08.546 [465/745] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:01:08.546 [466/745] Linking static target lib/librte_pdump.a 00:01:08.546 [467/745] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:01:08.546 [468/745] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:01:08.546 [469/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:01:08.546 [470/745] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:08.546 [471/745] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:08.546 [472/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:01:08.546 [473/745] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:01:08.546 [474/745] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:01:08.810 [475/745] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.810 [476/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:08.810 [477/745] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:01:08.810 [478/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:01:08.810 [479/745] Compiling C object lib/librte_node.a.p/node_log.c.o 00:01:08.810 [480/745] Generating drivers/rte_net_i40e_def with a custom command 00:01:08.810 [481/745] Generating drivers/rte_net_i40e_mingw with a custom command 00:01:08.810 [482/745] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:01:09.070 [483/745] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:01:09.070 [484/745] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:09.070 [485/745] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:09.070 [486/745] Linking static target drivers/librte_bus_vdev.a 00:01:09.070 [487/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:01:09.070 [488/745] Linking static target lib/librte_table.a 00:01:09.070 [489/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:01:09.070 [490/745] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:09.070 [491/745] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:01:09.070 [492/745] Linking static target lib/librte_ipsec.a 00:01:09.335 [493/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:09.335 [494/745] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:09.597 [495/745] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:09.597 [496/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:01:09.597 [497/745] Compiling C object 
drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:01:09.597 [498/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:01:09.597 [499/745] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:01:09.597 [500/745] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:01:09.597 [501/745] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:09.597 [502/745] Linking static target lib/librte_graph.a 00:01:09.597 [503/745] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:09.597 [504/745] Linking static target drivers/librte_bus_pci.a 00:01:09.859 [505/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:01:09.859 [506/745] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:01:09.859 [507/745] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:01:09.859 [508/745] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:01:09.859 [509/745] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:09.859 [510/745] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:01:09.859 [511/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:01:10.126 [512/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:01:10.126 [513/745] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:01:10.126 [514/745] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:01:10.388 [515/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:01:10.388 [516/745] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:10.388 [517/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:01:10.654 [518/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:01:10.654 [519/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:01:10.654 [520/745] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:01:10.654 [521/745] Linking static target lib/librte_port.a 00:01:10.654 [522/745] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:10.654 [523/745] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:10.654 [524/745] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:01:10.918 [525/745] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:10.918 [526/745] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:01:11.181 [527/745] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:01:11.181 [528/745] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:11.181 [529/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:01:11.181 [530/745] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:11.181 [531/745] Linking static target drivers/librte_mempool_ring.a 00:01:11.181 [532/745] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:11.181 [533/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:01:11.446 [534/745] Compiling C object 
app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:01:11.446 [535/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:01:11.446 [536/745] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:01:11.446 [537/745] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:01:11.446 [538/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:01:11.709 [539/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:01:11.709 [540/745] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:01:11.709 [541/745] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:11.970 [542/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:01:11.970 [543/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:01:11.970 [544/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:01:12.236 [545/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:01:12.236 [546/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:01:12.236 [547/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:01:12.236 [548/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:01:12.236 [549/745] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:01:12.501 [550/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:01:12.501 [551/745] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:01:12.501 [552/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:01:12.761 [553/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:01:13.023 [554/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:01:13.023 [555/745] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:01:13.023 [556/745] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:01:13.023 [557/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:01:13.023 [558/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:01:13.282 [559/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:01:13.545 [560/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:01:13.545 [561/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:01:13.545 [562/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:01:13.545 [563/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:01:13.820 [564/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:01:13.820 [565/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:01:13.820 [566/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:01:13.820 [567/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:01:13.820 [568/745] Linking static target drivers/net/i40e/base/libi40e_base.a 00:01:13.820 [569/745] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:01:13.820 [570/745] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:01:13.820 
[571/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:01:13.820 [572/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:01:14.111 [573/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:01:14.111 [574/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:01:14.390 [575/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:01:14.390 [576/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:01:14.390 [577/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:01:14.390 [578/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:01:14.390 [579/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:01:14.390 [580/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:01:14.390 [581/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:01:14.649 [582/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:01:14.649 [583/745] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:01:14.649 [584/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:01:14.916 [585/745] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:15.175 [586/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:01:15.175 [587/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:01:15.175 [588/745] Linking target lib/librte_eal.so.23.0 00:01:15.440 [589/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:01:15.440 [590/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:01:15.440 [591/745] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:15.440 [592/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:01:15.440 [593/745] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:01:15.440 [594/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:01:15.440 [595/745] Linking target lib/librte_pci.so.23.0 00:01:15.440 [596/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:01:15.440 [597/745] Linking target lib/librte_ring.so.23.0 00:01:15.440 [598/745] Linking target lib/librte_meter.so.23.0 00:01:15.440 [599/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:01:15.440 [600/745] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:01:15.440 [601/745] Linking target lib/librte_acl.so.23.0 00:01:15.440 [602/745] Linking target lib/librte_timer.so.23.0 00:01:15.705 [603/745] Linking target lib/librte_cfgfile.so.23.0 00:01:15.705 [604/745] Linking target lib/librte_jobstats.so.23.0 00:01:15.705 [605/745] Linking target lib/librte_dmadev.so.23.0 00:01:15.705 [606/745] Linking target lib/librte_rawdev.so.23.0 00:01:15.705 [607/745] Linking target lib/librte_stack.so.23.0 00:01:15.705 [608/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:01:15.705 [609/745] Linking target lib/librte_graph.so.23.0 00:01:15.705 [610/745] Linking target drivers/librte_bus_vdev.so.23.0 00:01:15.705 [611/745] Generating symbol file 
lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:01:15.705 [612/745] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:01:15.705 [613/745] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:01:15.705 [614/745] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:01:15.705 [615/745] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:01:15.705 [616/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:01:15.964 [617/745] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:01:15.964 [618/745] Linking target lib/librte_rcu.so.23.0 00:01:15.964 [619/745] Linking target lib/librte_mempool.so.23.0 00:01:15.964 [620/745] Linking target drivers/librte_bus_pci.so.23.0 00:01:15.964 [621/745] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:01:15.964 [622/745] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:01:15.964 [623/745] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:01:15.964 [624/745] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:01:15.964 [625/745] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:01:15.964 [626/745] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:01:15.964 [627/745] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:01:15.964 [628/745] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:01:15.964 [629/745] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:01:15.964 [630/745] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:01:15.964 [631/745] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:01:15.964 [632/745] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:01:15.964 [633/745] Linking target drivers/librte_mempool_ring.so.23.0 00:01:15.964 [634/745] Linking target lib/librte_rib.so.23.0 00:01:15.964 [635/745] Linking target lib/librte_mbuf.so.23.0 00:01:16.223 [636/745] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:01:16.223 [637/745] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:01:16.223 [638/745] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:01:16.223 [639/745] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:01:16.223 [640/745] Linking target lib/librte_compressdev.so.23.0 00:01:16.223 [641/745] Linking target lib/librte_gpudev.so.23.0 00:01:16.223 [642/745] Linking target lib/librte_bbdev.so.23.0 00:01:16.223 [643/745] Linking target lib/librte_sched.so.23.0 00:01:16.223 [644/745] Linking target lib/librte_fib.so.23.0 00:01:16.223 [645/745] Linking target lib/librte_reorder.so.23.0 00:01:16.223 [646/745] Linking target lib/librte_net.so.23.0 00:01:16.223 [647/745] Linking target lib/librte_regexdev.so.23.0 00:01:16.223 [648/745] Linking target lib/librte_distributor.so.23.0 00:01:16.223 [649/745] Linking target lib/librte_cryptodev.so.23.0 00:01:16.223 [650/745] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:01:16.483 [651/745] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:01:16.483 [652/745] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:01:16.483 [653/745] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 
00:01:16.483 [654/745] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:01:16.483 [655/745] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:01:16.483 [656/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:01:16.483 [657/745] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:01:16.483 [658/745] Linking target lib/librte_cmdline.so.23.0 00:01:16.483 [659/745] Linking target lib/librte_hash.so.23.0 00:01:16.483 [660/745] Linking target lib/librte_security.so.23.0 00:01:16.483 [661/745] Linking target lib/librte_ethdev.so.23.0 00:01:16.483 [662/745] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:01:16.483 [663/745] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:01:16.742 [664/745] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:01:16.742 [665/745] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:01:16.742 [666/745] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:01:16.742 [667/745] Linking target lib/librte_pcapng.so.23.0 00:01:16.742 [668/745] Linking target lib/librte_metrics.so.23.0 00:01:16.742 [669/745] Linking target lib/librte_gso.so.23.0 00:01:16.742 [670/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:01:16.742 [671/745] Linking target lib/librte_efd.so.23.0 00:01:16.742 [672/745] Linking target lib/librte_lpm.so.23.0 00:01:16.742 [673/745] Linking target lib/librte_member.so.23.0 00:01:16.742 [674/745] Linking target lib/librte_ip_frag.so.23.0 00:01:16.742 [675/745] Linking target lib/librte_ipsec.so.23.0 00:01:16.742 [676/745] Linking target lib/librte_bpf.so.23.0 00:01:16.742 [677/745] Linking target lib/librte_power.so.23.0 00:01:16.742 [678/745] Linking target lib/librte_gro.so.23.0 00:01:16.742 [679/745] Linking target lib/librte_eventdev.so.23.0 00:01:16.742 [680/745] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:01:16.742 [681/745] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:01:16.742 [682/745] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:01:16.742 [683/745] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:01:17.012 [684/745] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:01:17.012 [685/745] Linking target lib/librte_bitratestats.so.23.0 00:01:17.012 [686/745] Linking target lib/librte_latencystats.so.23.0 00:01:17.012 [687/745] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:01:17.012 [688/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:01:17.012 [689/745] Linking target lib/librte_pdump.so.23.0 00:01:17.012 [690/745] Linking target lib/librte_port.so.23.0 00:01:17.012 [691/745] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:01:17.012 [692/745] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:01:17.274 [693/745] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:01:17.274 [694/745] Linking target lib/librte_table.so.23.0 00:01:17.274 [695/745] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:01:17.274 [696/745] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 
00:01:17.533 [697/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o
00:01:17.533 [698/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o
00:01:17.791 [699/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o
00:01:17.791 [700/745] Linking static target drivers/libtmp_rte_net_i40e.a
00:01:17.791 [701/745] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o
00:01:17.791 [702/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o
00:01:18.050 [703/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o
00:01:18.308 [704/745] Generating drivers/rte_net_i40e.pmd.c with a custom command
00:01:18.308 [705/745] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o
00:01:18.308 [706/745] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o
00:01:18.308 [707/745] Linking static target drivers/librte_net_i40e.a
00:01:18.308 [708/745] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o
00:01:18.566 [709/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o
00:01:18.566 [710/745] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o
00:01:18.825 [711/745] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output)
00:01:18.825 [712/745] Linking target drivers/librte_net_i40e.so.23.0
00:01:20.201 [713/745] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o
00:01:20.460 [714/745] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o
00:01:20.460 [715/745] Linking static target lib/librte_node.a
00:01:20.718 [716/745] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output)
00:01:20.718 [717/745] Linking target lib/librte_node.so.23.0
00:01:21.283 [718/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o
00:01:21.540 [719/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o
00:01:33.729 [720/745] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o
00:02:12.459 [721/745] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o
00:02:12.459 [722/745] Linking static target lib/librte_vhost.a
00:02:12.719 [723/745] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output)
00:02:12.978 [724/745] Linking target lib/librte_vhost.so.23.0
00:02:27.860 [725/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o
00:02:27.860 [726/745] Linking static target lib/librte_pipeline.a
00:02:27.860 [727/745] Linking target app/dpdk-dumpcap
00:02:27.860 [728/745] Linking target app/dpdk-test-pipeline
00:02:27.860 [729/745] Linking target app/dpdk-test-flow-perf
00:02:27.860 [730/745] Linking target app/dpdk-test-regex
00:02:27.860 [731/745] Linking target app/dpdk-test-security-perf
00:02:27.860 [732/745] Linking target app/dpdk-test-eventdev
00:02:27.860 [733/745] Linking target app/dpdk-test-bbdev
00:02:27.860 [734/745] Linking target app/dpdk-test-compress-perf
00:02:27.860 [735/745] Linking target app/dpdk-test-sad
00:02:27.860 [736/745] Linking target app/dpdk-test-fib
00:02:27.860 [737/745] Linking target app/dpdk-proc-info
00:02:27.860 [738/745] Linking target app/dpdk-test-acl
00:02:27.860 [739/745] Linking target app/dpdk-test-gpudev
00:02:27.860 [740/745] Linking target app/dpdk-test-cmdline
00:02:27.860 [741/745] Linking target app/dpdk-pdump
00:02:27.860 [742/745] Linking target app/dpdk-test-crypto-perf
00:02:27.860 [743/745] Linking target app/dpdk-testpmd
00:02:28.796 [744/745] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output)
00:02:28.796 [745/745] Linking target lib/librte_pipeline.so.23.0
00:02:28.796 17:49:17 -- common/autobuild_common.sh@187 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 install
00:02:29.054 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp'
00:02:29.054 [0/1] Installing files.
00:02:29.316 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples
00:02:29.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:29.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:29.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:29.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:29.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:29.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:29.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:29.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:29.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:29.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:29.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:29.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:29.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:29.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:29.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:29.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:29.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:29.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:29.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:29.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:29.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:29.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:29.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:29.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:29.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:29.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:29.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:29.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:29.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:29.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:29.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:29.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:29.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:29.317 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:29.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:29.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:29.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:29.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:29.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:02:29.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/flow_classify.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:02:29.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/ipv4_rules_file.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:02:29.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:29.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:29.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:29.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:29.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:29.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:29.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:29.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:29.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:29.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:29.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:29.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:29.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:29.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:29.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:29.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:29.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:02:29.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:02:29.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:02:29.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:02:29.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:29.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:29.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:29.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:29.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:29.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:29.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:29.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 
00:02:29.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:29.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:29.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:29.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:29.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:29.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:29.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:29.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:29.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:29.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:29.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:29.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:29.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:29.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:29.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:29.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:29.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:29.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:29.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:29.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:29.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:29.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:29.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:29.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:29.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:29.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:29.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:29.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:29.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:29.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:29.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:29.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:29.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:29.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:29.318 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:29.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:29.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:29.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:29.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:29.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:29.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:29.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:29.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:29.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:29.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:29.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:29.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:29.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:29.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:29.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:29.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:29.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:29.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:29.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:29.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:29.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:29.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:29.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:29.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:29.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:29.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:29.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:29.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:29.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:29.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:29.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:29.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:29.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:29.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:29.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:29.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:29.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:29.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:29.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:29.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:29.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:29.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:29.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:29.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:02:29.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:29.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:29.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:29.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:29.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:29.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:29.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:29.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:29.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:29.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:29.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:02:29.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:29.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:29.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:29.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:29.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:29.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:29.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:29.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:29.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:02:29.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:29.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:29.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:29.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:29.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:29.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:29.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:29.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:29.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:29.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:29.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:29.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:29.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:29.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:29.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:29.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:29.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:29.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:29.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:29.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:29.319 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:29.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:29.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:29.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:29.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:29.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:29.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:29.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:29.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:29.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:29.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:29.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:29.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:29.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:29.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:29.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:29.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:29.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:29.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:29.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:29.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:29.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:29.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:29.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:29.320 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:29.320 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:29.320 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:29.320 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:29.320 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:29.320 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:29.320 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:29.320 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:29.320 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:29.320 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:29.320 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:29.320 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:29.320 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:29.320 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:29.320 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:29.320 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:29.320 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:29.320 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:29.320 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:29.320 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:29.320 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:29.320 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:29.320 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:29.320 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:29.320 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:29.320 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:29.320 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:29.320 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:29.320 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:29.320 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:29.320 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:29.320 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:29.320 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:29.320 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:29.320 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:29.320 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:29.320 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:29.320 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:29.320 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:29.320 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:02:29.320 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:29.320 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:29.320 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:29.320 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:29.320 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:29.320 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:29.320 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node 00:02:29.320 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node 00:02:29.320 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:02:29.320 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:29.320 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:29.320 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:29.320 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:29.320 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:29.320 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:29.320 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:29.320 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:29.320 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/kni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:29.320 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:29.320 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:29.320 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:29.320 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:29.320 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:29.320 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:29.320 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:29.320 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:29.320 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:29.320 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:29.320 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:29.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:29.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:29.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:29.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:29.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:29.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:29.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:29.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:29.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/kni.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:29.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:29.321 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:29.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:29.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/kni.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:29.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:29.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:29.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:29.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:29.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:29.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:29.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:29.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:29.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:29.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:29.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:29.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:29.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:29.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:29.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 
00:02:29.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:29.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:29.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:29.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:29.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:29.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:29.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:29.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:29.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:29.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:29.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:29.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:29.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:29.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.321 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.322 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.322 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.322 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.581 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.581 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.581 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.581 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.581 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.581 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.581 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.581 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.581 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.581 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.581 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.581 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.581 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.581 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.581 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.581 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.581 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.581 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.581 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:29.581 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:29.581 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:29.581 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:29.581 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:29.581 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:02:29.581 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:29.581 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:29.581 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:29.581 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:29.581 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:29.581 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:29.581 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:29.581 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:29.581 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:29.581 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:29.581 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:29.581 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:29.581 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:29.581 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:29.581 Installing lib/librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:29.581 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:29.581 
Installing lib/librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:29.581 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:29.581 Installing lib/librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:29.581 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:29.581 Installing lib/librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:29.581 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:29.581 Installing lib/librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:29.581 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:29.581 Installing lib/librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:29.581 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:29.581 Installing lib/librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:29.581 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:29.581 Installing lib/librte_net.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:29.581 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:29.581 Installing lib/librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:29.581 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:29.581 Installing lib/librte_ethdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:29.581 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:29.581 Installing lib/librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:29.581 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:29.581 Installing lib/librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:29.581 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:29.581 Installing lib/librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:29.581 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:29.581 Installing lib/librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:29.581 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:29.581 Installing lib/librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:29.581 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:29.581 Installing lib/librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:29.581 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:29.581 Installing lib/librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:29.581 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:29.581 Installing lib/librte_bitratestats.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:29.581 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:29.581 Installing lib/librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:29.581 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:29.581 Installing lib/librte_cfgfile.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:29.581 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:29.581 Installing lib/librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:29.581 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:29.581 Installing lib/librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:29.581 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:29.581 Installing lib/librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:29.581 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:29.581 Installing lib/librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:29.581 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:29.581 Installing lib/librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:29.581 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:29.581 Installing lib/librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:29.581 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:29.581 Installing lib/librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:29.581 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:29.581 Installing lib/librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:29.581 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:29.581 Installing lib/librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:29.581 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:29.581 Installing lib/librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:29.581 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:29.581 Installing lib/librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:29.581 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:29.581 Installing lib/librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:29.581 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:29.581 Installing lib/librte_member.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:29.581 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:29.581 Installing lib/librte_pcapng.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:29.581 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:29.582 Installing lib/librte_power.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:29.582 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:29.582 Installing lib/librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:29.582 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:29.582 Installing lib/librte_regexdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:29.582 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:29.582 Installing lib/librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:29.582 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:29.582 Installing lib/librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:29.582 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:29.582 Installing lib/librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:29.582 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:29.582 Installing lib/librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:29.582 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:29.582 Installing lib/librte_security.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:29.582 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:29.582 Installing lib/librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:29.582 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:29.582 Installing lib/librte_vhost.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:29.582 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:29.582 Installing lib/librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:29.582 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:29.582 Installing lib/librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:29.582 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:29.582 Installing lib/librte_port.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:29.582 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:29.582 Installing lib/librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:30.151 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:30.151 Installing lib/librte_table.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:30.151 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:30.151 Installing lib/librte_pipeline.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:30.151 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:30.151 Installing lib/librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:30.151 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:30.151 Installing lib/librte_node.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:30.151 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:30.151 Installing drivers/librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:30.151 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:30.151 Installing drivers/librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:30.151 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:30.151 Installing drivers/librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:30.151 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:30.151 Installing drivers/librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:30.151 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:30.151 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:30.151 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:30.151 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:30.151 Installing app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:30.151 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:30.151 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:30.151 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:30.151 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:30.151 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:30.151 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:30.151 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:30.151 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:30.151 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:30.151 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:30.151 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:30.151 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:30.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.151 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:30.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:30.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:30.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:30.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:30.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:30.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:30.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:30.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:30.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:30.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:30.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:30.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.152 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.152 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.152 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.152 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.153 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.153 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_empty_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_intel_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:02:30.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:02:30.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:30.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:30.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:30.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:30.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:30.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:30.154 Installing symlink pointing to librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.23 00:02:30.154 Installing symlink pointing to librte_kvargs.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:02:30.154 Installing symlink pointing to librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.23 00:02:30.154 Installing symlink pointing to librte_telemetry.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:02:30.154 Installing symlink pointing to librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.23 00:02:30.154 Installing symlink pointing to librte_eal.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:02:30.154 Installing symlink pointing to librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.23 00:02:30.154 Installing symlink pointing to librte_ring.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:02:30.154 Installing symlink pointing to librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.23 00:02:30.154 Installing symlink pointing to librte_rcu.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:02:30.154 Installing symlink pointing to librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.23 00:02:30.154 Installing symlink pointing to librte_mempool.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:02:30.154 Installing symlink pointing to librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.23 00:02:30.154 Installing symlink pointing to librte_mbuf.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:02:30.154 Installing symlink pointing to librte_net.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.23 00:02:30.154 Installing symlink pointing to librte_net.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:02:30.154 Installing symlink pointing to librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.23 00:02:30.154 Installing symlink pointing to librte_meter.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:02:30.154 Installing symlink pointing to librte_ethdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.23 00:02:30.154 Installing symlink pointing to librte_ethdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:02:30.154 Installing symlink pointing to librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.23 00:02:30.154 Installing symlink pointing to librte_pci.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:02:30.154 Installing symlink pointing to librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.23 00:02:30.154 Installing symlink pointing to librte_cmdline.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:02:30.154 Installing symlink pointing to librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.23 00:02:30.154 Installing symlink pointing to librte_metrics.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:02:30.154 Installing symlink pointing to librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.23 00:02:30.154 Installing symlink pointing to librte_hash.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:02:30.154 Installing symlink pointing to librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.23 00:02:30.154 Installing symlink pointing to librte_timer.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:02:30.154 Installing symlink pointing to librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.23 00:02:30.154 Installing symlink pointing to librte_acl.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:02:30.155 Installing symlink pointing to librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.23 00:02:30.155 Installing symlink pointing to librte_bbdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:02:30.155 Installing symlink pointing to librte_bitratestats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.23 00:02:30.155 Installing symlink pointing to librte_bitratestats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:02:30.155 Installing symlink pointing to librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.23 00:02:30.155 Installing symlink pointing to librte_bpf.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:02:30.155 Installing symlink pointing to librte_cfgfile.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.23 00:02:30.155 Installing symlink pointing to librte_cfgfile.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:02:30.155 Installing symlink pointing to librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.23 00:02:30.155 Installing symlink pointing to librte_compressdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:02:30.155 Installing symlink pointing to librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.23 00:02:30.155 Installing symlink pointing to librte_cryptodev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:02:30.155 Installing symlink pointing to librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.23 00:02:30.155 Installing symlink pointing to librte_distributor.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:02:30.155 Installing symlink pointing to librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.23 00:02:30.155 Installing symlink pointing to librte_efd.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:02:30.155 Installing symlink pointing to librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.23 00:02:30.155 Installing symlink pointing to librte_eventdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:02:30.155 Installing symlink pointing to librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.23 00:02:30.155 Installing symlink pointing to librte_gpudev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:02:30.155 Installing symlink pointing to librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.23 00:02:30.155 Installing symlink pointing to librte_gro.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:02:30.155 Installing symlink pointing to librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.23 00:02:30.155 Installing symlink pointing to librte_gso.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:02:30.155 Installing symlink pointing to librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.23 00:02:30.155 Installing symlink pointing to librte_ip_frag.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:02:30.155 Installing symlink pointing to librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.23 00:02:30.155 Installing symlink pointing to librte_jobstats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:02:30.155 Installing symlink pointing to librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.23 00:02:30.155 Installing symlink pointing to librte_latencystats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:02:30.155 Installing symlink pointing to 
librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.23 00:02:30.155 Installing symlink pointing to librte_lpm.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:02:30.155 Installing symlink pointing to librte_member.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.23 00:02:30.155 Installing symlink pointing to librte_member.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:02:30.155 Installing symlink pointing to librte_pcapng.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.23 00:02:30.155 Installing symlink pointing to librte_pcapng.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:02:30.155 Installing symlink pointing to librte_power.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.23 00:02:30.155 Installing symlink pointing to librte_power.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:02:30.155 Installing symlink pointing to librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.23 00:02:30.155 Installing symlink pointing to librte_rawdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:02:30.155 Installing symlink pointing to librte_regexdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.23 00:02:30.155 Installing symlink pointing to librte_regexdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:02:30.155 Installing symlink pointing to librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.23 00:02:30.155 Installing symlink pointing to librte_dmadev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:02:30.155 Installing symlink pointing to librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.23 00:02:30.155 Installing symlink pointing to librte_rib.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:02:30.155 Installing symlink pointing to librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.23 00:02:30.155 Installing symlink pointing to librte_reorder.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:02:30.155 Installing symlink pointing to librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.23 00:02:30.155 Installing symlink pointing to librte_sched.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:02:30.155 Installing symlink pointing to librte_security.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.23 00:02:30.155 Installing symlink pointing to librte_security.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:02:30.155 Installing symlink pointing to librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.23 00:02:30.155 Installing symlink pointing to librte_stack.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:02:30.155 Installing symlink pointing to librte_vhost.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.23 00:02:30.155 Installing symlink pointing to librte_vhost.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:02:30.155 Installing symlink pointing to librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.23 00:02:30.155 Installing symlink pointing to librte_ipsec.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:02:30.155 Installing symlink pointing to librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.23 00:02:30.155 Installing symlink pointing to librte_fib.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:02:30.155 Installing symlink pointing to librte_port.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.23 00:02:30.155 Installing symlink pointing to librte_port.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:02:30.155 Installing symlink pointing to librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.23 00:02:30.155 Installing symlink pointing to librte_pdump.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:02:30.155 Installing symlink pointing to librte_table.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.23 00:02:30.155 Installing symlink pointing to librte_table.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:02:30.155 Installing symlink pointing to librte_pipeline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.23 00:02:30.155 Installing symlink pointing to librte_pipeline.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:02:30.155 Installing symlink pointing to librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.23 00:02:30.155 Installing symlink pointing to librte_graph.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:02:30.155 Installing symlink pointing to librte_node.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.23 00:02:30.155 Installing symlink pointing to librte_node.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:02:30.155 Installing symlink pointing to librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:02:30.155 Installing symlink pointing to librte_bus_pci.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:02:30.155 Installing symlink pointing to librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:02:30.155 Installing symlink pointing to librte_bus_vdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:02:30.155 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:02:30.155 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:02:30.155 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:02:30.155 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:02:30.155 './librte_bus_vdev.so.23' -> 
'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:02:30.155 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:02:30.155 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:02:30.155 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:02:30.155 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:02:30.155 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:02:30.155 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:02:30.155 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:02:30.155 Installing symlink pointing to librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:02:30.155 Installing symlink pointing to librte_mempool_ring.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:02:30.155 Installing symlink pointing to librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:02:30.155 Installing symlink pointing to librte_net_i40e.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:02:30.155 Running custom install script '/bin/sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:02:30.155 17:49:19 -- common/autobuild_common.sh@189 -- $ uname -s 00:02:30.155 17:49:19 -- common/autobuild_common.sh@189 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:02:30.155 17:49:19 -- common/autobuild_common.sh@200 -- $ cat 00:02:30.155 17:49:19 -- common/autobuild_common.sh@205 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:30.155 00:02:30.155 real 1m40.299s 00:02:30.155 user 15m8.738s 00:02:30.155 sys 1m49.255s 00:02:30.155 17:49:19 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:02:30.155 17:49:19 -- common/autotest_common.sh@10 -- $ set +x 00:02:30.155 ************************************ 00:02:30.155 END TEST build_native_dpdk 00:02:30.155 ************************************ 00:02:30.155 17:49:19 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:30.155 17:49:19 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:30.155 17:49:19 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:30.155 17:49:19 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:30.155 17:49:19 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:30.155 17:49:19 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:30.414 17:49:19 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:30.414 17:49:19 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared 00:02:30.414 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 
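For context: the libdpdk.pc and libdpdk-libs.pc files installed above are what the configure step is referring to when it reports using that pkgconfig directory. A minimal sketch of the same lookup from a shell, assuming the workspace paths shown in the log:

    export PKG_CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig
    pkg-config --modversion libdpdk        # should report 22.11.x for this checkout
    pkg-config --cflags --libs libdpdk     # the flags a consumer such as SPDK's configure picks up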
00:02:30.414 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:30.414 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:30.414 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:02:30.981 Using 'verbs' RDMA provider 00:02:43.776 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:02:55.976 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:02:55.976 Creating mk/config.mk...done. 00:02:55.976 Creating mk/cc.flags.mk...done. 00:02:55.976 Type 'make' to build. 00:02:55.976 17:49:43 -- spdk/autobuild.sh@69 -- $ run_test make make -j48 00:02:55.976 17:49:43 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:02:55.976 17:49:43 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:02:55.976 17:49:43 -- common/autotest_common.sh@10 -- $ set +x 00:02:55.976 ************************************ 00:02:55.976 START TEST make 00:02:55.976 ************************************ 00:02:55.976 17:49:43 -- common/autotest_common.sh@1111 -- $ make -j48 00:02:55.976 make[1]: Nothing to be done for 'all'. 00:02:56.236 The Meson build system 00:02:56.236 Version: 1.3.1 00:02:56.236 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:02:56.236 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:56.236 Build type: native build 00:02:56.236 Project name: libvfio-user 00:02:56.236 Project version: 0.0.1 00:02:56.236 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:56.236 C linker for the host machine: gcc ld.bfd 2.39-16 00:02:56.236 Host machine cpu family: x86_64 00:02:56.236 Host machine cpu: x86_64 00:02:56.236 Run-time dependency threads found: YES 00:02:56.236 Library dl found: YES 00:02:56.236 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:56.236 Run-time dependency json-c found: YES 0.17 00:02:56.236 Run-time dependency cmocka found: YES 1.1.7 00:02:56.236 Program pytest-3 found: NO 00:02:56.236 Program flake8 found: NO 00:02:56.236 Program misspell-fixer found: NO 00:02:56.236 Program restructuredtext-lint found: NO 00:02:56.236 Program valgrind found: YES (/usr/bin/valgrind) 00:02:56.236 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:56.236 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:56.236 Compiler for C supports arguments -Wwrite-strings: YES 00:02:56.236 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:56.236 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:02:56.236 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:02:56.236 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:02:56.236 Build targets in project: 8 00:02:56.236 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:56.236 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:56.236 00:02:56.236 libvfio-user 0.0.1 00:02:56.236 00:02:56.236 User defined options 00:02:56.236 buildtype : debug 00:02:56.236 default_library: shared 00:02:56.236 libdir : /usr/local/lib 00:02:56.236 00:02:56.236 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:57.185 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:57.185 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:02:57.185 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:02:57.185 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:02:57.185 [4/37] Compiling C object samples/null.p/null.c.o 00:02:57.185 [5/37] Compiling C object samples/lspci.p/lspci.c.o 00:02:57.185 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:02:57.185 [7/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:57.185 [8/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:57.185 [9/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:57.185 [10/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:02:57.185 [11/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:02:57.185 [12/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:57.185 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:02:57.185 [14/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:57.446 [15/37] Compiling C object test/unit_tests.p/mocks.c.o 00:02:57.446 [16/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:02:57.446 [17/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:02:57.446 [18/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:02:57.446 [19/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:02:57.446 [20/37] Compiling C object samples/client.p/client.c.o 00:02:57.446 [21/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:02:57.446 [22/37] Compiling C object samples/server.p/server.c.o 00:02:57.446 [23/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:02:57.446 [24/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:02:57.446 [25/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:02:57.446 [26/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:02:57.446 [27/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:02:57.446 [28/37] Linking target samples/client 00:02:57.446 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:02:57.446 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:02:57.707 [31/37] Linking target test/unit_tests 00:02:57.707 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:02:57.707 [33/37] Linking target samples/gpio-pci-idio-16 00:02:57.707 [34/37] Linking target samples/server 00:02:57.707 [35/37] Linking target samples/shadow_ioeventfd_server 00:02:57.707 [36/37] Linking target samples/null 00:02:57.707 [37/37] Linking target samples/lspci 00:02:57.707 INFO: autodetecting backend as ninja 00:02:57.707 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
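The libvfio-user configuration summarized above (buildtype debug, default_library shared, libdir /usr/local/lib) corresponds roughly to a standalone Meson invocation like the following sketch; the build directory name matches the log, while the staging DESTDIR is purely illustrative:

    meson setup build-debug --buildtype debug --default-library shared --libdir /usr/local/lib
    ninja -C build-debug
    DESTDIR=/tmp/libvfio-user-stage meson install --quiet -C build-debug   # illustrative staging path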
00:02:57.966 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:58.542 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:58.542 ninja: no work to do. 00:03:13.417 CC lib/ut/ut.o 00:03:13.417 CC lib/log/log.o 00:03:13.417 CC lib/log/log_flags.o 00:03:13.417 CC lib/log/log_deprecated.o 00:03:13.417 CC lib/ut_mock/mock.o 00:03:13.417 LIB libspdk_ut_mock.a 00:03:13.417 SO libspdk_ut_mock.so.6.0 00:03:13.417 LIB libspdk_log.a 00:03:13.417 LIB libspdk_ut.a 00:03:13.417 SO libspdk_log.so.7.0 00:03:13.417 SO libspdk_ut.so.2.0 00:03:13.417 SYMLINK libspdk_ut_mock.so 00:03:13.417 SYMLINK libspdk_ut.so 00:03:13.417 SYMLINK libspdk_log.so 00:03:13.417 CC lib/util/base64.o 00:03:13.417 CC lib/dma/dma.o 00:03:13.417 CC lib/util/bit_array.o 00:03:13.417 CC lib/util/cpuset.o 00:03:13.417 CC lib/util/crc16.o 00:03:13.417 CC lib/util/crc32.o 00:03:13.417 CC lib/util/crc32c.o 00:03:13.417 CC lib/util/crc32_ieee.o 00:03:13.417 CC lib/ioat/ioat.o 00:03:13.417 CC lib/util/crc64.o 00:03:13.417 CXX lib/trace_parser/trace.o 00:03:13.417 CC lib/util/dif.o 00:03:13.417 CC lib/util/fd.o 00:03:13.417 CC lib/util/file.o 00:03:13.417 CC lib/util/hexlify.o 00:03:13.417 CC lib/util/iov.o 00:03:13.417 CC lib/util/math.o 00:03:13.417 CC lib/util/pipe.o 00:03:13.417 CC lib/util/strerror_tls.o 00:03:13.417 CC lib/util/string.o 00:03:13.417 CC lib/util/uuid.o 00:03:13.417 CC lib/util/fd_group.o 00:03:13.417 CC lib/util/xor.o 00:03:13.417 CC lib/util/zipf.o 00:03:13.417 CC lib/vfio_user/host/vfio_user_pci.o 00:03:13.417 CC lib/vfio_user/host/vfio_user.o 00:03:13.417 LIB libspdk_dma.a 00:03:13.417 SO libspdk_dma.so.4.0 00:03:13.417 LIB libspdk_ioat.a 00:03:13.417 SYMLINK libspdk_dma.so 00:03:13.417 SO libspdk_ioat.so.7.0 00:03:13.417 LIB libspdk_vfio_user.a 00:03:13.417 SYMLINK libspdk_ioat.so 00:03:13.417 SO libspdk_vfio_user.so.5.0 00:03:13.417 SYMLINK libspdk_vfio_user.so 00:03:13.417 LIB libspdk_util.a 00:03:13.417 SO libspdk_util.so.9.0 00:03:13.417 SYMLINK libspdk_util.so 00:03:13.417 LIB libspdk_trace_parser.a 00:03:13.417 SO libspdk_trace_parser.so.5.0 00:03:13.417 CC lib/env_dpdk/env.o 00:03:13.417 CC lib/idxd/idxd.o 00:03:13.417 CC lib/env_dpdk/memory.o 00:03:13.417 CC lib/idxd/idxd_user.o 00:03:13.417 CC lib/env_dpdk/pci.o 00:03:13.417 CC lib/vmd/vmd.o 00:03:13.417 CC lib/env_dpdk/init.o 00:03:13.417 CC lib/vmd/led.o 00:03:13.417 CC lib/rdma/common.o 00:03:13.417 CC lib/env_dpdk/threads.o 00:03:13.417 CC lib/rdma/rdma_verbs.o 00:03:13.417 CC lib/env_dpdk/pci_ioat.o 00:03:13.417 CC lib/env_dpdk/pci_virtio.o 00:03:13.417 CC lib/env_dpdk/pci_vmd.o 00:03:13.417 CC lib/conf/conf.o 00:03:13.417 CC lib/env_dpdk/pci_idxd.o 00:03:13.417 CC lib/json/json_parse.o 00:03:13.417 CC lib/json/json_util.o 00:03:13.417 CC lib/env_dpdk/pci_event.o 00:03:13.417 CC lib/env_dpdk/sigbus_handler.o 00:03:13.417 CC lib/json/json_write.o 00:03:13.417 CC lib/env_dpdk/pci_dpdk.o 00:03:13.417 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:13.417 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:13.676 SYMLINK libspdk_trace_parser.so 00:03:13.935 LIB libspdk_conf.a 00:03:13.935 SO libspdk_conf.so.6.0 00:03:13.935 LIB libspdk_rdma.a 00:03:13.935 SYMLINK libspdk_conf.so 00:03:13.935 LIB libspdk_json.a 00:03:13.935 SO libspdk_rdma.so.6.0 00:03:13.935 SO libspdk_json.so.6.0 00:03:13.935 SYMLINK libspdk_rdma.so 00:03:13.935 SYMLINK libspdk_json.so 00:03:14.193 LIB 
libspdk_idxd.a 00:03:14.193 CC lib/jsonrpc/jsonrpc_server.o 00:03:14.193 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:14.193 CC lib/jsonrpc/jsonrpc_client.o 00:03:14.193 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:14.193 SO libspdk_idxd.so.12.0 00:03:14.193 SYMLINK libspdk_idxd.so 00:03:14.193 LIB libspdk_vmd.a 00:03:14.193 SO libspdk_vmd.so.6.0 00:03:14.452 SYMLINK libspdk_vmd.so 00:03:14.452 LIB libspdk_jsonrpc.a 00:03:14.452 SO libspdk_jsonrpc.so.6.0 00:03:14.710 SYMLINK libspdk_jsonrpc.so 00:03:14.710 CC lib/rpc/rpc.o 00:03:15.279 LIB libspdk_rpc.a 00:03:15.568 SO libspdk_rpc.so.6.0 00:03:15.568 SYMLINK libspdk_rpc.so 00:03:15.568 CC lib/keyring/keyring.o 00:03:15.568 CC lib/keyring/keyring_rpc.o 00:03:15.568 CC lib/notify/notify.o 00:03:15.568 CC lib/notify/notify_rpc.o 00:03:15.568 CC lib/trace/trace.o 00:03:15.568 CC lib/trace/trace_flags.o 00:03:15.827 CC lib/trace/trace_rpc.o 00:03:15.827 LIB libspdk_notify.a 00:03:15.827 SO libspdk_notify.so.6.0 00:03:15.827 LIB libspdk_keyring.a 00:03:15.827 LIB libspdk_trace.a 00:03:15.827 SYMLINK libspdk_notify.so 00:03:15.827 SO libspdk_keyring.so.1.0 00:03:16.085 SO libspdk_trace.so.10.0 00:03:16.085 LIB libspdk_env_dpdk.a 00:03:16.085 SYMLINK libspdk_trace.so 00:03:16.085 SYMLINK libspdk_keyring.so 00:03:16.085 SO libspdk_env_dpdk.so.14.0 00:03:16.085 CC lib/thread/thread.o 00:03:16.085 CC lib/thread/iobuf.o 00:03:16.343 CC lib/sock/sock.o 00:03:16.343 CC lib/sock/sock_rpc.o 00:03:16.343 SYMLINK libspdk_env_dpdk.so 00:03:16.909 LIB libspdk_sock.a 00:03:16.909 SO libspdk_sock.so.9.0 00:03:16.909 SYMLINK libspdk_sock.so 00:03:17.168 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:17.168 CC lib/nvme/nvme_ctrlr.o 00:03:17.168 CC lib/nvme/nvme_fabric.o 00:03:17.168 CC lib/nvme/nvme_ns_cmd.o 00:03:17.168 CC lib/nvme/nvme_ns.o 00:03:17.168 CC lib/nvme/nvme_pcie_common.o 00:03:17.168 CC lib/nvme/nvme_pcie.o 00:03:17.168 CC lib/nvme/nvme_qpair.o 00:03:17.168 CC lib/nvme/nvme.o 00:03:17.168 CC lib/nvme/nvme_quirks.o 00:03:17.168 CC lib/nvme/nvme_transport.o 00:03:17.168 CC lib/nvme/nvme_discovery.o 00:03:17.168 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:17.168 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:17.168 CC lib/nvme/nvme_tcp.o 00:03:17.168 CC lib/nvme/nvme_opal.o 00:03:17.168 CC lib/nvme/nvme_io_msg.o 00:03:17.168 CC lib/nvme/nvme_poll_group.o 00:03:17.168 CC lib/nvme/nvme_zns.o 00:03:17.168 CC lib/nvme/nvme_stubs.o 00:03:17.168 CC lib/nvme/nvme_auth.o 00:03:17.168 CC lib/nvme/nvme_cuse.o 00:03:17.168 CC lib/nvme/nvme_rdma.o 00:03:17.168 CC lib/nvme/nvme_vfio_user.o 00:03:18.104 LIB libspdk_thread.a 00:03:18.104 SO libspdk_thread.so.10.0 00:03:18.104 SYMLINK libspdk_thread.so 00:03:18.363 CC lib/vfu_tgt/tgt_endpoint.o 00:03:18.363 CC lib/blob/blobstore.o 00:03:18.363 CC lib/virtio/virtio.o 00:03:18.363 CC lib/init/json_config.o 00:03:18.363 CC lib/vfu_tgt/tgt_rpc.o 00:03:18.363 CC lib/accel/accel.o 00:03:18.363 CC lib/blob/request.o 00:03:18.363 CC lib/virtio/virtio_vhost_user.o 00:03:18.363 CC lib/accel/accel_rpc.o 00:03:18.363 CC lib/init/subsystem.o 00:03:18.363 CC lib/blob/zeroes.o 00:03:18.363 CC lib/virtio/virtio_vfio_user.o 00:03:18.363 CC lib/accel/accel_sw.o 00:03:18.363 CC lib/blob/blob_bs_dev.o 00:03:18.363 CC lib/init/subsystem_rpc.o 00:03:18.363 CC lib/virtio/virtio_pci.o 00:03:18.363 CC lib/init/rpc.o 00:03:18.620 LIB libspdk_init.a 00:03:18.620 SO libspdk_init.so.5.0 00:03:18.620 LIB libspdk_virtio.a 00:03:18.620 LIB libspdk_vfu_tgt.a 00:03:18.620 SYMLINK libspdk_init.so 00:03:18.620 SO libspdk_vfu_tgt.so.3.0 00:03:18.620 SO libspdk_virtio.so.7.0 
00:03:18.879 SYMLINK libspdk_vfu_tgt.so 00:03:18.879 SYMLINK libspdk_virtio.so 00:03:18.879 CC lib/event/app.o 00:03:18.879 CC lib/event/reactor.o 00:03:18.879 CC lib/event/log_rpc.o 00:03:18.879 CC lib/event/app_rpc.o 00:03:18.879 CC lib/event/scheduler_static.o 00:03:19.446 LIB libspdk_event.a 00:03:19.446 SO libspdk_event.so.13.0 00:03:19.446 SYMLINK libspdk_event.so 00:03:19.446 LIB libspdk_accel.a 00:03:19.704 SO libspdk_accel.so.15.0 00:03:19.704 SYMLINK libspdk_accel.so 00:03:19.963 CC lib/bdev/bdev.o 00:03:19.963 CC lib/bdev/bdev_rpc.o 00:03:19.963 CC lib/bdev/part.o 00:03:19.963 CC lib/bdev/bdev_zone.o 00:03:19.963 CC lib/bdev/scsi_nvme.o 00:03:19.963 LIB libspdk_nvme.a 00:03:19.963 SO libspdk_nvme.so.13.0 00:03:20.529 SYMLINK libspdk_nvme.so 00:03:23.820 LIB libspdk_blob.a 00:03:23.820 SO libspdk_blob.so.11.0 00:03:23.820 SYMLINK libspdk_blob.so 00:03:23.820 LIB libspdk_bdev.a 00:03:23.820 CC lib/blobfs/blobfs.o 00:03:23.820 CC lib/blobfs/tree.o 00:03:23.820 CC lib/lvol/lvol.o 00:03:23.820 SO libspdk_bdev.so.15.0 00:03:23.820 SYMLINK libspdk_bdev.so 00:03:23.820 CC lib/nbd/nbd.o 00:03:23.820 CC lib/nbd/nbd_rpc.o 00:03:23.820 CC lib/nvmf/ctrlr.o 00:03:23.820 CC lib/nvmf/ctrlr_discovery.o 00:03:23.820 CC lib/nvmf/ctrlr_bdev.o 00:03:23.820 CC lib/nvmf/subsystem.o 00:03:23.820 CC lib/nvmf/nvmf.o 00:03:23.820 CC lib/nvmf/nvmf_rpc.o 00:03:23.820 CC lib/scsi/dev.o 00:03:23.820 CC lib/nvmf/transport.o 00:03:23.820 CC lib/scsi/lun.o 00:03:23.820 CC lib/ublk/ublk.o 00:03:23.820 CC lib/nvmf/tcp.o 00:03:23.820 CC lib/scsi/port.o 00:03:23.820 CC lib/ublk/ublk_rpc.o 00:03:23.820 CC lib/nvmf/vfio_user.o 00:03:23.820 CC lib/scsi/scsi.o 00:03:23.820 CC lib/scsi/scsi_bdev.o 00:03:23.820 CC lib/nvmf/rdma.o 00:03:23.820 CC lib/ftl/ftl_core.o 00:03:23.820 CC lib/scsi/scsi_pr.o 00:03:23.820 CC lib/ftl/ftl_init.o 00:03:23.820 CC lib/ftl/ftl_layout.o 00:03:23.820 CC lib/scsi/scsi_rpc.o 00:03:23.820 CC lib/ftl/ftl_debug.o 00:03:23.821 CC lib/scsi/task.o 00:03:23.821 CC lib/ftl/ftl_sb.o 00:03:23.821 CC lib/ftl/ftl_io.o 00:03:23.821 CC lib/ftl/ftl_l2p.o 00:03:23.821 CC lib/ftl/ftl_l2p_flat.o 00:03:23.821 CC lib/ftl/ftl_nv_cache.o 00:03:23.821 CC lib/ftl/ftl_band.o 00:03:23.821 CC lib/ftl/ftl_band_ops.o 00:03:23.821 CC lib/ftl/ftl_writer.o 00:03:23.821 CC lib/ftl/ftl_rq.o 00:03:23.821 CC lib/ftl/ftl_reloc.o 00:03:23.821 CC lib/ftl/ftl_l2p_cache.o 00:03:23.821 CC lib/ftl/ftl_p2l.o 00:03:23.821 CC lib/ftl/mngt/ftl_mngt.o 00:03:23.821 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:23.821 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:23.821 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:23.821 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:23.821 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:23.821 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:23.821 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:24.393 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:24.393 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:24.393 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:24.393 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:24.393 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:24.393 CC lib/ftl/utils/ftl_conf.o 00:03:24.393 CC lib/ftl/utils/ftl_md.o 00:03:24.393 CC lib/ftl/utils/ftl_mempool.o 00:03:24.393 CC lib/ftl/utils/ftl_bitmap.o 00:03:24.393 CC lib/ftl/utils/ftl_property.o 00:03:24.393 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:24.393 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:24.393 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:24.393 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:24.393 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:24.393 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:24.393 CC 
lib/ftl/upgrade/ftl_sb_v3.o 00:03:24.393 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:24.393 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:24.393 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:24.393 CC lib/ftl/base/ftl_base_dev.o 00:03:24.393 CC lib/ftl/base/ftl_base_bdev.o 00:03:24.652 CC lib/ftl/ftl_trace.o 00:03:24.652 LIB libspdk_nbd.a 00:03:24.652 SO libspdk_nbd.so.7.0 00:03:24.652 SYMLINK libspdk_nbd.so 00:03:24.911 LIB libspdk_scsi.a 00:03:24.911 SO libspdk_scsi.so.9.0 00:03:24.911 LIB libspdk_blobfs.a 00:03:24.911 SO libspdk_blobfs.so.10.0 00:03:24.911 SYMLINK libspdk_scsi.so 00:03:24.911 SYMLINK libspdk_blobfs.so 00:03:24.911 LIB libspdk_lvol.a 00:03:24.911 SO libspdk_lvol.so.10.0 00:03:24.911 LIB libspdk_ublk.a 00:03:24.911 SYMLINK libspdk_lvol.so 00:03:25.169 SO libspdk_ublk.so.3.0 00:03:25.169 CC lib/vhost/vhost.o 00:03:25.169 CC lib/vhost/vhost_rpc.o 00:03:25.169 CC lib/vhost/vhost_scsi.o 00:03:25.169 CC lib/vhost/vhost_blk.o 00:03:25.169 CC lib/iscsi/conn.o 00:03:25.169 CC lib/vhost/rte_vhost_user.o 00:03:25.169 CC lib/iscsi/init_grp.o 00:03:25.169 CC lib/iscsi/iscsi.o 00:03:25.169 CC lib/iscsi/md5.o 00:03:25.169 CC lib/iscsi/param.o 00:03:25.169 CC lib/iscsi/portal_grp.o 00:03:25.169 CC lib/iscsi/tgt_node.o 00:03:25.169 CC lib/iscsi/iscsi_subsystem.o 00:03:25.169 CC lib/iscsi/iscsi_rpc.o 00:03:25.169 CC lib/iscsi/task.o 00:03:25.169 SYMLINK libspdk_ublk.so 00:03:25.428 LIB libspdk_ftl.a 00:03:25.428 SO libspdk_ftl.so.9.0 00:03:25.995 SYMLINK libspdk_ftl.so 00:03:26.253 LIB libspdk_vhost.a 00:03:26.253 SO libspdk_vhost.so.8.0 00:03:26.511 SYMLINK libspdk_vhost.so 00:03:26.511 LIB libspdk_nvmf.a 00:03:26.770 SO libspdk_nvmf.so.18.0 00:03:26.770 LIB libspdk_iscsi.a 00:03:26.770 SO libspdk_iscsi.so.8.0 00:03:27.028 SYMLINK libspdk_nvmf.so 00:03:27.028 SYMLINK libspdk_iscsi.so 00:03:27.287 CC module/env_dpdk/env_dpdk_rpc.o 00:03:27.287 CC module/vfu_device/vfu_virtio.o 00:03:27.287 CC module/vfu_device/vfu_virtio_blk.o 00:03:27.287 CC module/vfu_device/vfu_virtio_scsi.o 00:03:27.287 CC module/vfu_device/vfu_virtio_rpc.o 00:03:27.545 CC module/keyring/file/keyring_rpc.o 00:03:27.545 CC module/keyring/file/keyring.o 00:03:27.545 CC module/accel/error/accel_error.o 00:03:27.545 CC module/accel/error/accel_error_rpc.o 00:03:27.545 CC module/scheduler/gscheduler/gscheduler.o 00:03:27.545 CC module/accel/iaa/accel_iaa.o 00:03:27.545 CC module/accel/dsa/accel_dsa.o 00:03:27.545 CC module/accel/iaa/accel_iaa_rpc.o 00:03:27.545 CC module/accel/dsa/accel_dsa_rpc.o 00:03:27.545 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:27.545 CC module/blob/bdev/blob_bdev.o 00:03:27.545 CC module/sock/posix/posix.o 00:03:27.545 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:27.545 CC module/accel/ioat/accel_ioat.o 00:03:27.545 CC module/accel/ioat/accel_ioat_rpc.o 00:03:27.545 LIB libspdk_env_dpdk_rpc.a 00:03:27.545 SO libspdk_env_dpdk_rpc.so.6.0 00:03:27.545 SYMLINK libspdk_env_dpdk_rpc.so 00:03:27.545 LIB libspdk_scheduler_gscheduler.a 00:03:27.545 LIB libspdk_keyring_file.a 00:03:27.545 LIB libspdk_scheduler_dpdk_governor.a 00:03:27.545 SO libspdk_scheduler_gscheduler.so.4.0 00:03:27.803 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:27.803 SO libspdk_keyring_file.so.1.0 00:03:27.803 LIB libspdk_accel_ioat.a 00:03:27.803 LIB libspdk_accel_error.a 00:03:27.803 LIB libspdk_scheduler_dynamic.a 00:03:27.803 SO libspdk_accel_error.so.2.0 00:03:27.803 SO libspdk_accel_ioat.so.6.0 00:03:27.803 SYMLINK libspdk_scheduler_gscheduler.so 00:03:27.803 LIB libspdk_accel_iaa.a 00:03:27.803 SYMLINK 
libspdk_scheduler_dpdk_governor.so 00:03:27.803 SO libspdk_scheduler_dynamic.so.4.0 00:03:27.803 SYMLINK libspdk_keyring_file.so 00:03:27.803 LIB libspdk_accel_dsa.a 00:03:27.803 SO libspdk_accel_iaa.so.3.0 00:03:27.803 SO libspdk_accel_dsa.so.5.0 00:03:27.803 SYMLINK libspdk_accel_ioat.so 00:03:27.803 SYMLINK libspdk_accel_error.so 00:03:27.803 SYMLINK libspdk_scheduler_dynamic.so 00:03:27.803 LIB libspdk_blob_bdev.a 00:03:27.803 SYMLINK libspdk_accel_iaa.so 00:03:27.803 SO libspdk_blob_bdev.so.11.0 00:03:27.803 SYMLINK libspdk_accel_dsa.so 00:03:27.803 SYMLINK libspdk_blob_bdev.so 00:03:28.065 LIB libspdk_vfu_device.a 00:03:28.065 CC module/bdev/gpt/gpt.o 00:03:28.065 CC module/bdev/malloc/bdev_malloc.o 00:03:28.065 CC module/bdev/gpt/vbdev_gpt.o 00:03:28.065 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:28.065 CC module/bdev/passthru/vbdev_passthru.o 00:03:28.065 CC module/blobfs/bdev/blobfs_bdev.o 00:03:28.065 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:28.065 CC module/bdev/delay/vbdev_delay.o 00:03:28.065 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:28.065 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:28.065 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:28.065 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:28.065 CC module/bdev/raid/bdev_raid.o 00:03:28.065 CC module/bdev/raid/bdev_raid_rpc.o 00:03:28.065 CC module/bdev/error/vbdev_error.o 00:03:28.065 CC module/bdev/raid/bdev_raid_sb.o 00:03:28.065 CC module/bdev/nvme/bdev_nvme.o 00:03:28.065 CC module/bdev/raid/raid0.o 00:03:28.065 CC module/bdev/aio/bdev_aio.o 00:03:28.065 CC module/bdev/error/vbdev_error_rpc.o 00:03:28.065 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:28.065 CC module/bdev/aio/bdev_aio_rpc.o 00:03:28.065 CC module/bdev/ftl/bdev_ftl.o 00:03:28.065 CC module/bdev/iscsi/bdev_iscsi.o 00:03:28.065 CC module/bdev/raid/raid1.o 00:03:28.065 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:28.065 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:28.065 CC module/bdev/nvme/nvme_rpc.o 00:03:28.065 CC module/bdev/raid/concat.o 00:03:28.065 CC module/bdev/lvol/vbdev_lvol.o 00:03:28.065 CC module/bdev/nvme/bdev_mdns_client.o 00:03:28.065 CC module/bdev/nvme/vbdev_opal.o 00:03:28.065 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:28.065 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:28.065 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:28.065 CC module/bdev/split/vbdev_split.o 00:03:28.065 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:28.065 CC module/bdev/null/bdev_null.o 00:03:28.065 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:28.065 CC module/bdev/split/vbdev_split_rpc.o 00:03:28.065 CC module/bdev/null/bdev_null_rpc.o 00:03:28.065 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:28.065 SO libspdk_vfu_device.so.3.0 00:03:28.327 SYMLINK libspdk_vfu_device.so 00:03:28.327 LIB libspdk_sock_posix.a 00:03:28.327 SO libspdk_sock_posix.so.6.0 00:03:28.585 SYMLINK libspdk_sock_posix.so 00:03:28.585 LIB libspdk_blobfs_bdev.a 00:03:28.585 SO libspdk_blobfs_bdev.so.6.0 00:03:28.585 LIB libspdk_bdev_iscsi.a 00:03:28.585 LIB libspdk_bdev_split.a 00:03:28.585 SO libspdk_bdev_split.so.6.0 00:03:28.585 SO libspdk_bdev_iscsi.so.6.0 00:03:28.585 SYMLINK libspdk_blobfs_bdev.so 00:03:28.585 LIB libspdk_bdev_passthru.a 00:03:28.585 LIB libspdk_bdev_zone_block.a 00:03:28.585 LIB libspdk_bdev_error.a 00:03:28.585 LIB libspdk_bdev_gpt.a 00:03:28.585 SO libspdk_bdev_passthru.so.6.0 00:03:28.585 SO libspdk_bdev_zone_block.so.6.0 00:03:28.585 LIB libspdk_bdev_null.a 00:03:28.585 SO libspdk_bdev_error.so.6.0 00:03:28.585 SO libspdk_bdev_gpt.so.6.0 
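Note: the module/bdev objects linked in this phase are the block-device backends an SPDK application can create at runtime. A minimal sketch exercising one of them (the malloc bdev) against an already-running target, assuming the default in-tree build and RPC socket; "Malloc0" is an illustrative name:

    # Create, inspect, and delete a RAM-backed bdev via the JSON-RPC client.
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0   # 64 MiB, 512 B blocks
    ./scripts/rpc.py bdev_get_bdevs -b Malloc0              # confirm it registered
    ./scripts/rpc.py bdev_malloc_delete Malloc0             # clean up
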
00:03:28.585 LIB libspdk_bdev_ftl.a 00:03:28.585 SYMLINK libspdk_bdev_split.so 00:03:28.585 SYMLINK libspdk_bdev_iscsi.so 00:03:28.585 SO libspdk_bdev_null.so.6.0 00:03:28.585 LIB libspdk_bdev_malloc.a 00:03:28.585 SO libspdk_bdev_ftl.so.6.0 00:03:28.844 SYMLINK libspdk_bdev_passthru.so 00:03:28.844 SYMLINK libspdk_bdev_zone_block.so 00:03:28.844 LIB libspdk_bdev_aio.a 00:03:28.844 SYMLINK libspdk_bdev_error.so 00:03:28.844 SYMLINK libspdk_bdev_gpt.so 00:03:28.844 SO libspdk_bdev_malloc.so.6.0 00:03:28.844 SO libspdk_bdev_aio.so.6.0 00:03:28.844 SYMLINK libspdk_bdev_null.so 00:03:28.844 SYMLINK libspdk_bdev_ftl.so 00:03:28.844 LIB libspdk_bdev_lvol.a 00:03:28.844 LIB libspdk_bdev_delay.a 00:03:28.844 SYMLINK libspdk_bdev_malloc.so 00:03:28.844 SO libspdk_bdev_delay.so.6.0 00:03:28.844 SO libspdk_bdev_lvol.so.6.0 00:03:28.844 SYMLINK libspdk_bdev_aio.so 00:03:28.844 SYMLINK libspdk_bdev_delay.so 00:03:28.844 SYMLINK libspdk_bdev_lvol.so 00:03:28.844 LIB libspdk_bdev_virtio.a 00:03:28.844 SO libspdk_bdev_virtio.so.6.0 00:03:29.103 SYMLINK libspdk_bdev_virtio.so 00:03:29.361 LIB libspdk_bdev_raid.a 00:03:29.361 SO libspdk_bdev_raid.so.6.0 00:03:29.619 SYMLINK libspdk_bdev_raid.so 00:03:31.024 LIB libspdk_bdev_nvme.a 00:03:31.024 SO libspdk_bdev_nvme.so.7.0 00:03:31.024 SYMLINK libspdk_bdev_nvme.so 00:03:31.283 CC module/event/subsystems/sock/sock.o 00:03:31.283 CC module/event/subsystems/vmd/vmd.o 00:03:31.283 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:31.283 CC module/event/subsystems/keyring/keyring.o 00:03:31.283 CC module/event/subsystems/iobuf/iobuf.o 00:03:31.283 CC module/event/subsystems/scheduler/scheduler.o 00:03:31.283 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:31.283 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:31.283 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:31.541 LIB libspdk_event_keyring.a 00:03:31.541 LIB libspdk_event_sock.a 00:03:31.541 LIB libspdk_event_scheduler.a 00:03:31.541 LIB libspdk_event_vfu_tgt.a 00:03:31.541 LIB libspdk_event_vhost_blk.a 00:03:31.541 LIB libspdk_event_vmd.a 00:03:31.541 SO libspdk_event_keyring.so.1.0 00:03:31.541 LIB libspdk_event_iobuf.a 00:03:31.541 SO libspdk_event_sock.so.5.0 00:03:31.541 SO libspdk_event_vfu_tgt.so.3.0 00:03:31.541 SO libspdk_event_scheduler.so.4.0 00:03:31.541 SO libspdk_event_vhost_blk.so.3.0 00:03:31.541 SO libspdk_event_vmd.so.6.0 00:03:31.541 SO libspdk_event_iobuf.so.3.0 00:03:31.541 SYMLINK libspdk_event_keyring.so 00:03:31.541 SYMLINK libspdk_event_sock.so 00:03:31.541 SYMLINK libspdk_event_vfu_tgt.so 00:03:31.541 SYMLINK libspdk_event_scheduler.so 00:03:31.541 SYMLINK libspdk_event_vhost_blk.so 00:03:31.541 SYMLINK libspdk_event_vmd.so 00:03:31.801 SYMLINK libspdk_event_iobuf.so 00:03:31.801 CC module/event/subsystems/accel/accel.o 00:03:32.060 LIB libspdk_event_accel.a 00:03:32.060 SO libspdk_event_accel.so.6.0 00:03:32.318 SYMLINK libspdk_event_accel.so 00:03:32.576 CC module/event/subsystems/bdev/bdev.o 00:03:32.835 LIB libspdk_event_bdev.a 00:03:32.835 SO libspdk_event_bdev.so.6.0 00:03:32.835 SYMLINK libspdk_event_bdev.so 00:03:33.093 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:33.093 CC module/event/subsystems/ublk/ublk.o 00:03:33.093 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:33.093 CC module/event/subsystems/nbd/nbd.o 00:03:33.093 CC module/event/subsystems/scsi/scsi.o 00:03:33.093 LIB libspdk_event_ublk.a 00:03:33.352 LIB libspdk_event_nbd.a 00:03:33.352 SO libspdk_event_ublk.so.3.0 00:03:33.352 LIB libspdk_event_scsi.a 00:03:33.352 SO libspdk_event_nbd.so.6.0 
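Note: the module/event/subsystems objects compiled here self-register with the application framework at startup. A minimal sketch that enumerates them from a running target, assuming the default build layout and RPC socket:

    # Start the target, list the registered subsystems, then stop it.
    ./build/bin/spdk_tgt &
    sleep 3                                      # allow the reactor to come up
    ./scripts/rpc.py framework_get_subsystems    # bdev, accel, scsi, nvmf, ...
    ./scripts/rpc.py spdk_kill_instance SIGTERM
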
00:03:33.352 SO libspdk_event_scsi.so.6.0 00:03:33.352 SYMLINK libspdk_event_ublk.so 00:03:33.352 SYMLINK libspdk_event_nbd.so 00:03:33.352 LIB libspdk_event_nvmf.a 00:03:33.352 SYMLINK libspdk_event_scsi.so 00:03:33.352 SO libspdk_event_nvmf.so.6.0 00:03:33.352 SYMLINK libspdk_event_nvmf.so 00:03:33.610 CC module/event/subsystems/iscsi/iscsi.o 00:03:33.610 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:33.610 LIB libspdk_event_iscsi.a 00:03:33.868 SO libspdk_event_iscsi.so.6.0 00:03:33.868 LIB libspdk_event_vhost_scsi.a 00:03:33.868 SO libspdk_event_vhost_scsi.so.3.0 00:03:33.868 SYMLINK libspdk_event_iscsi.so 00:03:33.868 SYMLINK libspdk_event_vhost_scsi.so 00:03:33.868 SO libspdk.so.6.0 00:03:33.868 SYMLINK libspdk.so 00:03:34.132 CXX app/trace/trace.o 00:03:34.132 CC app/trace_record/trace_record.o 00:03:34.132 TEST_HEADER include/spdk/accel.h 00:03:34.132 TEST_HEADER include/spdk/accel_module.h 00:03:34.132 TEST_HEADER include/spdk/assert.h 00:03:34.132 TEST_HEADER include/spdk/barrier.h 00:03:34.132 TEST_HEADER include/spdk/base64.h 00:03:34.132 TEST_HEADER include/spdk/bdev.h 00:03:34.132 TEST_HEADER include/spdk/bdev_module.h 00:03:34.132 CC app/spdk_nvme_discover/discovery_aer.o 00:03:34.132 TEST_HEADER include/spdk/bdev_zone.h 00:03:34.132 TEST_HEADER include/spdk/bit_array.h 00:03:34.132 CC app/spdk_top/spdk_top.o 00:03:34.132 TEST_HEADER include/spdk/bit_pool.h 00:03:34.132 CC app/spdk_nvme_identify/identify.o 00:03:34.132 CC test/rpc_client/rpc_client_test.o 00:03:34.132 CC app/spdk_nvme_perf/perf.o 00:03:34.132 TEST_HEADER include/spdk/blob_bdev.h 00:03:34.132 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:34.132 CC app/spdk_lspci/spdk_lspci.o 00:03:34.132 TEST_HEADER include/spdk/blobfs.h 00:03:34.132 TEST_HEADER include/spdk/blob.h 00:03:34.132 TEST_HEADER include/spdk/conf.h 00:03:34.132 TEST_HEADER include/spdk/config.h 00:03:34.132 TEST_HEADER include/spdk/cpuset.h 00:03:34.132 TEST_HEADER include/spdk/crc16.h 00:03:34.132 TEST_HEADER include/spdk/crc32.h 00:03:34.132 TEST_HEADER include/spdk/crc64.h 00:03:34.132 TEST_HEADER include/spdk/dif.h 00:03:34.398 TEST_HEADER include/spdk/dma.h 00:03:34.398 TEST_HEADER include/spdk/endian.h 00:03:34.398 TEST_HEADER include/spdk/env_dpdk.h 00:03:34.398 TEST_HEADER include/spdk/env.h 00:03:34.398 TEST_HEADER include/spdk/event.h 00:03:34.398 TEST_HEADER include/spdk/fd_group.h 00:03:34.398 TEST_HEADER include/spdk/fd.h 00:03:34.398 TEST_HEADER include/spdk/file.h 00:03:34.398 TEST_HEADER include/spdk/ftl.h 00:03:34.398 TEST_HEADER include/spdk/gpt_spec.h 00:03:34.398 CC app/spdk_dd/spdk_dd.o 00:03:34.398 TEST_HEADER include/spdk/hexlify.h 00:03:34.398 TEST_HEADER include/spdk/histogram_data.h 00:03:34.398 CC app/iscsi_tgt/iscsi_tgt.o 00:03:34.398 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:34.398 TEST_HEADER include/spdk/idxd.h 00:03:34.398 CC app/vhost/vhost.o 00:03:34.398 TEST_HEADER include/spdk/idxd_spec.h 00:03:34.398 TEST_HEADER include/spdk/init.h 00:03:34.398 CC app/nvmf_tgt/nvmf_main.o 00:03:34.398 TEST_HEADER include/spdk/ioat.h 00:03:34.398 TEST_HEADER include/spdk/ioat_spec.h 00:03:34.398 TEST_HEADER include/spdk/iscsi_spec.h 00:03:34.398 TEST_HEADER include/spdk/json.h 00:03:34.398 TEST_HEADER include/spdk/jsonrpc.h 00:03:34.398 TEST_HEADER include/spdk/keyring.h 00:03:34.398 TEST_HEADER include/spdk/keyring_module.h 00:03:34.398 TEST_HEADER include/spdk/likely.h 00:03:34.398 TEST_HEADER include/spdk/log.h 00:03:34.398 CC app/spdk_tgt/spdk_tgt.o 00:03:34.398 TEST_HEADER include/spdk/lvol.h 00:03:34.398 
TEST_HEADER include/spdk/memory.h 00:03:34.398 TEST_HEADER include/spdk/mmio.h 00:03:34.398 TEST_HEADER include/spdk/nbd.h 00:03:34.398 CC test/app/stub/stub.o 00:03:34.398 CC test/app/histogram_perf/histogram_perf.o 00:03:34.399 TEST_HEADER include/spdk/notify.h 00:03:34.399 CC test/app/jsoncat/jsoncat.o 00:03:34.399 CC app/fio/nvme/fio_plugin.o 00:03:34.399 TEST_HEADER include/spdk/nvme.h 00:03:34.399 CC test/env/vtophys/vtophys.o 00:03:34.399 CC test/thread/poller_perf/poller_perf.o 00:03:34.399 CC test/nvme/aer/aer.o 00:03:34.399 TEST_HEADER include/spdk/nvme_intel.h 00:03:34.399 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:34.399 CC test/env/pci/pci_ut.o 00:03:34.399 CC examples/ioat/perf/perf.o 00:03:34.399 CC test/env/memory/memory_ut.o 00:03:34.399 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:34.399 CC test/event/event_perf/event_perf.o 00:03:34.399 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:34.399 CC examples/sock/hello_world/hello_sock.o 00:03:34.399 CC examples/ioat/verify/verify.o 00:03:34.399 TEST_HEADER include/spdk/nvme_spec.h 00:03:34.399 TEST_HEADER include/spdk/nvme_zns.h 00:03:34.399 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:34.399 CC examples/idxd/perf/perf.o 00:03:34.399 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:34.399 TEST_HEADER include/spdk/nvmf.h 00:03:34.399 CC examples/util/zipf/zipf.o 00:03:34.399 CC examples/vmd/lsvmd/lsvmd.o 00:03:34.399 CC examples/nvme/hello_world/hello_world.o 00:03:34.399 CC examples/accel/perf/accel_perf.o 00:03:34.399 TEST_HEADER include/spdk/nvmf_spec.h 00:03:34.399 TEST_HEADER include/spdk/nvmf_transport.h 00:03:34.399 TEST_HEADER include/spdk/opal.h 00:03:34.399 TEST_HEADER include/spdk/opal_spec.h 00:03:34.399 TEST_HEADER include/spdk/pci_ids.h 00:03:34.399 TEST_HEADER include/spdk/pipe.h 00:03:34.399 TEST_HEADER include/spdk/queue.h 00:03:34.399 TEST_HEADER include/spdk/reduce.h 00:03:34.399 TEST_HEADER include/spdk/rpc.h 00:03:34.399 TEST_HEADER include/spdk/scheduler.h 00:03:34.399 TEST_HEADER include/spdk/scsi.h 00:03:34.399 TEST_HEADER include/spdk/scsi_spec.h 00:03:34.399 TEST_HEADER include/spdk/sock.h 00:03:34.399 CC examples/bdev/hello_world/hello_bdev.o 00:03:34.399 TEST_HEADER include/spdk/stdinc.h 00:03:34.399 CC test/dma/test_dma/test_dma.o 00:03:34.399 TEST_HEADER include/spdk/string.h 00:03:34.399 TEST_HEADER include/spdk/thread.h 00:03:34.399 CC test/bdev/bdevio/bdevio.o 00:03:34.399 CC test/blobfs/mkfs/mkfs.o 00:03:34.399 TEST_HEADER include/spdk/trace.h 00:03:34.399 CC examples/nvmf/nvmf/nvmf.o 00:03:34.399 TEST_HEADER include/spdk/trace_parser.h 00:03:34.399 CC examples/blob/hello_world/hello_blob.o 00:03:34.399 CC test/accel/dif/dif.o 00:03:34.399 CC app/fio/bdev/fio_plugin.o 00:03:34.399 TEST_HEADER include/spdk/tree.h 00:03:34.399 CC test/app/bdev_svc/bdev_svc.o 00:03:34.399 CC examples/bdev/bdevperf/bdevperf.o 00:03:34.399 TEST_HEADER include/spdk/ublk.h 00:03:34.399 CC examples/thread/thread/thread_ex.o 00:03:34.399 TEST_HEADER include/spdk/util.h 00:03:34.399 TEST_HEADER include/spdk/uuid.h 00:03:34.399 TEST_HEADER include/spdk/version.h 00:03:34.399 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:34.399 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:34.399 TEST_HEADER include/spdk/vhost.h 00:03:34.399 TEST_HEADER include/spdk/vmd.h 00:03:34.399 TEST_HEADER include/spdk/xor.h 00:03:34.399 TEST_HEADER include/spdk/zipf.h 00:03:34.658 CXX test/cpp_headers/accel.o 00:03:34.658 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:34.658 LINK spdk_lspci 00:03:34.658 CC 
test/env/mem_callbacks/mem_callbacks.o 00:03:34.658 CC test/lvol/esnap/esnap.o 00:03:34.658 LINK rpc_client_test 00:03:34.658 LINK spdk_nvme_discover 00:03:34.658 LINK jsoncat 00:03:34.658 LINK poller_perf 00:03:34.658 LINK event_perf 00:03:34.658 LINK vtophys 00:03:34.658 LINK histogram_perf 00:03:34.658 LINK interrupt_tgt 00:03:34.658 LINK vhost 00:03:34.658 LINK lsvmd 00:03:34.658 LINK spdk_trace_record 00:03:34.658 LINK nvmf_tgt 00:03:34.658 LINK zipf 00:03:34.658 LINK iscsi_tgt 00:03:34.658 LINK env_dpdk_post_init 00:03:34.658 LINK stub 00:03:34.926 LINK spdk_tgt 00:03:34.926 LINK ioat_perf 00:03:34.926 LINK verify 00:03:34.926 LINK hello_sock 00:03:34.926 LINK bdev_svc 00:03:34.926 LINK hello_world 00:03:34.926 LINK mkfs 00:03:34.926 CXX test/cpp_headers/accel_module.o 00:03:34.926 LINK mem_callbacks 00:03:34.926 LINK hello_blob 00:03:34.926 LINK hello_bdev 00:03:34.926 LINK aer 00:03:34.926 LINK thread 00:03:34.926 LINK spdk_dd 00:03:35.192 LINK nvmf 00:03:35.192 LINK idxd_perf 00:03:35.192 CXX test/cpp_headers/assert.o 00:03:35.192 LINK pci_ut 00:03:35.192 LINK spdk_trace 00:03:35.192 CC test/event/reactor/reactor.o 00:03:35.192 CC test/nvme/reset/reset.o 00:03:35.192 LINK test_dma 00:03:35.192 CC examples/vmd/led/led.o 00:03:35.192 LINK dif 00:03:35.192 LINK bdevio 00:03:35.192 CC test/nvme/sgl/sgl.o 00:03:35.192 CC examples/nvme/reconnect/reconnect.o 00:03:35.192 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:35.192 CXX test/cpp_headers/barrier.o 00:03:35.192 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:35.192 CXX test/cpp_headers/base64.o 00:03:35.192 CC test/nvme/overhead/overhead.o 00:03:35.458 CC test/nvme/e2edp/nvme_dp.o 00:03:35.458 CXX test/cpp_headers/bdev.o 00:03:35.458 CC examples/blob/cli/blobcli.o 00:03:35.458 LINK accel_perf 00:03:35.458 CXX test/cpp_headers/bdev_module.o 00:03:35.458 CC test/event/reactor_perf/reactor_perf.o 00:03:35.458 LINK nvme_fuzz 00:03:35.458 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:35.458 LINK memory_ut 00:03:35.458 CC examples/nvme/arbitration/arbitration.o 00:03:35.458 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:35.458 CC test/nvme/err_injection/err_injection.o 00:03:35.458 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:35.458 CC examples/nvme/hotplug/hotplug.o 00:03:35.458 CC test/event/app_repeat/app_repeat.o 00:03:35.458 LINK spdk_nvme 00:03:35.458 CXX test/cpp_headers/bdev_zone.o 00:03:35.458 CXX test/cpp_headers/bit_array.o 00:03:35.458 CC examples/nvme/abort/abort.o 00:03:35.458 LINK reactor 00:03:35.458 CXX test/cpp_headers/bit_pool.o 00:03:35.458 CC test/event/scheduler/scheduler.o 00:03:35.458 LINK spdk_bdev 00:03:35.458 CXX test/cpp_headers/blob_bdev.o 00:03:35.458 LINK led 00:03:35.458 CXX test/cpp_headers/blobfs_bdev.o 00:03:35.724 CC test/nvme/startup/startup.o 00:03:35.724 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:35.724 CXX test/cpp_headers/blobfs.o 00:03:35.724 CXX test/cpp_headers/blob.o 00:03:35.724 CXX test/cpp_headers/conf.o 00:03:35.724 CXX test/cpp_headers/config.o 00:03:35.724 LINK reactor_perf 00:03:35.724 LINK reset 00:03:35.724 CXX test/cpp_headers/cpuset.o 00:03:35.724 CXX test/cpp_headers/crc16.o 00:03:35.724 CXX test/cpp_headers/crc32.o 00:03:35.724 LINK sgl 00:03:35.724 CXX test/cpp_headers/crc64.o 00:03:35.724 CXX test/cpp_headers/dif.o 00:03:35.724 LINK app_repeat 00:03:35.724 CC test/nvme/reserve/reserve.o 00:03:35.724 CXX test/cpp_headers/dma.o 00:03:35.724 LINK cmb_copy 00:03:35.724 CC test/nvme/simple_copy/simple_copy.o 00:03:35.724 LINK err_injection 00:03:35.984 LINK 
spdk_nvme_perf 00:03:35.984 CC test/nvme/connect_stress/connect_stress.o 00:03:35.984 CXX test/cpp_headers/endian.o 00:03:35.984 LINK nvme_dp 00:03:35.984 LINK overhead 00:03:35.984 CXX test/cpp_headers/env_dpdk.o 00:03:35.984 CC test/nvme/boot_partition/boot_partition.o 00:03:35.984 CXX test/cpp_headers/env.o 00:03:35.984 LINK hotplug 00:03:35.984 CC test/nvme/compliance/nvme_compliance.o 00:03:35.984 CXX test/cpp_headers/event.o 00:03:35.984 LINK reconnect 00:03:35.984 LINK spdk_top 00:03:35.984 LINK startup 00:03:35.984 LINK pmr_persistence 00:03:35.985 CC test/nvme/fused_ordering/fused_ordering.o 00:03:35.985 LINK bdevperf 00:03:35.985 CXX test/cpp_headers/fd_group.o 00:03:35.985 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:35.985 CXX test/cpp_headers/fd.o 00:03:35.985 CXX test/cpp_headers/file.o 00:03:35.985 LINK scheduler 00:03:35.985 CC test/nvme/cuse/cuse.o 00:03:35.985 CXX test/cpp_headers/ftl.o 00:03:35.985 CC test/nvme/fdp/fdp.o 00:03:35.985 CXX test/cpp_headers/gpt_spec.o 00:03:35.985 LINK spdk_nvme_identify 00:03:35.985 CXX test/cpp_headers/hexlify.o 00:03:35.985 LINK arbitration 00:03:36.251 CXX test/cpp_headers/histogram_data.o 00:03:36.251 CXX test/cpp_headers/idxd.o 00:03:36.251 CXX test/cpp_headers/idxd_spec.o 00:03:36.251 CXX test/cpp_headers/init.o 00:03:36.251 CXX test/cpp_headers/ioat.o 00:03:36.251 CXX test/cpp_headers/ioat_spec.o 00:03:36.251 CXX test/cpp_headers/iscsi_spec.o 00:03:36.251 CXX test/cpp_headers/json.o 00:03:36.251 LINK abort 00:03:36.251 CXX test/cpp_headers/jsonrpc.o 00:03:36.251 LINK vhost_fuzz 00:03:36.251 CXX test/cpp_headers/keyring.o 00:03:36.251 CXX test/cpp_headers/keyring_module.o 00:03:36.251 LINK reserve 00:03:36.251 LINK boot_partition 00:03:36.251 LINK connect_stress 00:03:36.251 LINK blobcli 00:03:36.251 CXX test/cpp_headers/likely.o 00:03:36.251 CXX test/cpp_headers/log.o 00:03:36.251 CXX test/cpp_headers/lvol.o 00:03:36.251 CXX test/cpp_headers/memory.o 00:03:36.251 LINK nvme_manage 00:03:36.251 CXX test/cpp_headers/mmio.o 00:03:36.251 CXX test/cpp_headers/nbd.o 00:03:36.251 CXX test/cpp_headers/notify.o 00:03:36.251 CXX test/cpp_headers/nvme.o 00:03:36.251 CXX test/cpp_headers/nvme_intel.o 00:03:36.251 CXX test/cpp_headers/nvme_ocssd.o 00:03:36.251 LINK simple_copy 00:03:36.251 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:36.251 CXX test/cpp_headers/nvme_spec.o 00:03:36.251 CXX test/cpp_headers/nvme_zns.o 00:03:36.513 CXX test/cpp_headers/nvmf_cmd.o 00:03:36.513 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:36.513 CXX test/cpp_headers/nvmf.o 00:03:36.513 CXX test/cpp_headers/nvmf_spec.o 00:03:36.513 CXX test/cpp_headers/nvmf_transport.o 00:03:36.513 CXX test/cpp_headers/opal.o 00:03:36.513 CXX test/cpp_headers/opal_spec.o 00:03:36.513 LINK fused_ordering 00:03:36.513 LINK doorbell_aers 00:03:36.513 CXX test/cpp_headers/pci_ids.o 00:03:36.513 CXX test/cpp_headers/pipe.o 00:03:36.513 CXX test/cpp_headers/queue.o 00:03:36.513 CXX test/cpp_headers/reduce.o 00:03:36.513 CXX test/cpp_headers/rpc.o 00:03:36.513 CXX test/cpp_headers/scheduler.o 00:03:36.513 CXX test/cpp_headers/scsi.o 00:03:36.513 CXX test/cpp_headers/scsi_spec.o 00:03:36.513 CXX test/cpp_headers/sock.o 00:03:36.513 CXX test/cpp_headers/stdinc.o 00:03:36.513 CXX test/cpp_headers/string.o 00:03:36.513 CXX test/cpp_headers/thread.o 00:03:36.513 CXX test/cpp_headers/trace.o 00:03:36.513 CXX test/cpp_headers/trace_parser.o 00:03:36.513 CXX test/cpp_headers/tree.o 00:03:36.513 LINK nvme_compliance 00:03:36.513 CXX test/cpp_headers/ublk.o 00:03:36.513 CXX test/cpp_headers/util.o 
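Note: the long run of CXX test/cpp_headers/*.o records is SPDK's header self-containedness check: every public header is compiled into an object on its own. A rough hand equivalent for a single header, assuming an in-tree checkout (the compiler invocation is illustrative):

    # Compile one public header standalone; a failure means the header is
    # missing includes or forward declarations it should carry itself.
    echo '#include <spdk/nvme.h>' | g++ -x c++ -std=c++11 -Iinclude -c - -o /dev/null
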
00:03:36.513 CXX test/cpp_headers/uuid.o 00:03:36.771 CXX test/cpp_headers/version.o 00:03:36.771 CXX test/cpp_headers/vfio_user_pci.o 00:03:36.771 CXX test/cpp_headers/vfio_user_spec.o 00:03:36.771 CXX test/cpp_headers/vhost.o 00:03:36.771 CXX test/cpp_headers/vmd.o 00:03:36.771 LINK fdp 00:03:36.771 CXX test/cpp_headers/xor.o 00:03:36.771 CXX test/cpp_headers/zipf.o 00:03:37.705 LINK cuse 00:03:37.963 LINK iscsi_fuzz 00:03:44.525 LINK esnap 00:03:44.525 00:03:44.525 real 0m50.222s 00:03:44.525 user 7m56.635s 00:03:44.525 sys 1m50.042s 00:03:44.525 17:50:33 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:03:44.525 17:50:33 -- common/autotest_common.sh@10 -- $ set +x 00:03:44.525 ************************************ 00:03:44.525 END TEST make 00:03:44.525 ************************************ 00:03:44.525 17:50:33 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:44.525 17:50:33 -- pm/common@30 -- $ signal_monitor_resources TERM 00:03:44.525 17:50:33 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:03:44.525 17:50:33 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:44.525 17:50:33 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:44.525 17:50:33 -- pm/common@45 -- $ pid=3083667 00:03:44.525 17:50:33 -- pm/common@52 -- $ sudo kill -TERM 3083667 00:03:44.782 17:50:33 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:44.782 17:50:33 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:44.782 17:50:33 -- pm/common@45 -- $ pid=3083672 00:03:44.782 17:50:33 -- pm/common@52 -- $ sudo kill -TERM 3083672 00:03:44.782 17:50:33 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:44.782 17:50:33 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:44.782 17:50:33 -- pm/common@45 -- $ pid=3083670 00:03:44.782 17:50:33 -- pm/common@52 -- $ sudo kill -TERM 3083670 00:03:44.782 17:50:33 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:44.782 17:50:33 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:44.782 17:50:33 -- pm/common@45 -- $ pid=3083675 00:03:44.782 17:50:33 -- pm/common@52 -- $ sudo kill -TERM 3083675 00:03:45.041 17:50:33 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:45.041 17:50:33 -- nvmf/common.sh@7 -- # uname -s 00:03:45.041 17:50:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:45.041 17:50:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:45.041 17:50:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:45.041 17:50:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:45.041 17:50:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:45.041 17:50:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:45.041 17:50:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:45.041 17:50:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:45.041 17:50:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:45.041 17:50:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:45.041 17:50:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:03:45.041 17:50:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:03:45.041 17:50:33 -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:45.041 17:50:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:45.041 17:50:33 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:45.041 17:50:33 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:45.041 17:50:33 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:45.041 17:50:33 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:45.041 17:50:33 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:45.041 17:50:33 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:45.041 17:50:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:45.041 17:50:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:45.041 17:50:33 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:45.041 17:50:33 -- paths/export.sh@5 -- # export PATH 00:03:45.041 17:50:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:45.041 17:50:33 -- nvmf/common.sh@47 -- # : 0 00:03:45.041 17:50:33 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:45.041 17:50:33 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:45.041 17:50:33 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:45.041 17:50:33 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:45.041 17:50:33 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:45.041 17:50:33 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:45.041 17:50:33 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:45.041 17:50:33 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:45.041 17:50:33 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:45.041 17:50:33 -- spdk/autotest.sh@32 -- # uname -s 00:03:45.041 17:50:33 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:45.042 17:50:33 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:45.042 17:50:33 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:45.042 17:50:33 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:45.042 17:50:33 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:45.042 17:50:33 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:45.042 17:50:33 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:45.042 17:50:33 -- spdk/autotest.sh@46 -- # 
udevadm=/usr/sbin/udevadm 00:03:45.042 17:50:33 -- spdk/autotest.sh@48 -- # udevadm_pid=3163072 00:03:45.042 17:50:33 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:45.042 17:50:33 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:45.042 17:50:33 -- pm/common@17 -- # local monitor 00:03:45.042 17:50:33 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:45.042 17:50:33 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=3163074 00:03:45.042 17:50:33 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:45.042 17:50:33 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=3163076 00:03:45.042 17:50:33 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:45.042 17:50:33 -- pm/common@21 -- # date +%s 00:03:45.042 17:50:33 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=3163079 00:03:45.042 17:50:33 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:45.042 17:50:33 -- pm/common@21 -- # date +%s 00:03:45.042 17:50:33 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=3163083 00:03:45.042 17:50:33 -- pm/common@21 -- # date +%s 00:03:45.042 17:50:33 -- pm/common@26 -- # sleep 1 00:03:45.042 17:50:33 -- pm/common@21 -- # date +%s 00:03:45.042 17:50:33 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1713196233 00:03:45.042 17:50:33 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1713196233 00:03:45.042 17:50:33 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1713196233 00:03:45.042 17:50:33 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1713196233 00:03:45.042 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1713196233_collect-vmstat.pm.log 00:03:45.042 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1713196233_collect-bmc-pm.bmc.pm.log 00:03:45.042 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1713196233_collect-cpu-load.pm.log 00:03:45.042 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1713196233_collect-cpu-temp.pm.log 00:03:45.976 17:50:34 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:45.976 17:50:34 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:45.976 17:50:34 -- common/autotest_common.sh@710 -- # xtrace_disable 00:03:45.976 17:50:34 -- common/autotest_common.sh@10 -- # set +x 00:03:45.976 17:50:34 -- spdk/autotest.sh@59 -- # create_test_list 00:03:45.976 17:50:34 -- common/autotest_common.sh@734 -- # xtrace_disable 00:03:45.976 17:50:34 -- common/autotest_common.sh@10 -- # set +x 00:03:45.976 17:50:34 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:45.976 17:50:34 -- spdk/autotest.sh@61 -- # readlink -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:45.976 17:50:34 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:45.976 17:50:34 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:45.976 17:50:34 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:45.976 17:50:34 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:45.976 17:50:34 -- common/autotest_common.sh@1441 -- # uname 00:03:45.976 17:50:34 -- common/autotest_common.sh@1441 -- # '[' Linux = FreeBSD ']' 00:03:45.976 17:50:34 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:45.976 17:50:34 -- common/autotest_common.sh@1461 -- # uname 00:03:45.976 17:50:34 -- common/autotest_common.sh@1461 -- # [[ Linux = FreeBSD ]] 00:03:45.976 17:50:34 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:45.976 17:50:34 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:45.976 17:50:34 -- spdk/autotest.sh@72 -- # hash lcov 00:03:45.976 17:50:34 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:45.976 17:50:34 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:45.976 --rc lcov_branch_coverage=1 00:03:45.976 --rc lcov_function_coverage=1 00:03:45.976 --rc genhtml_branch_coverage=1 00:03:45.976 --rc genhtml_function_coverage=1 00:03:45.976 --rc genhtml_legend=1 00:03:45.976 --rc geninfo_all_blocks=1 00:03:45.976 ' 00:03:45.976 17:50:34 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:45.976 --rc lcov_branch_coverage=1 00:03:45.976 --rc lcov_function_coverage=1 00:03:45.976 --rc genhtml_branch_coverage=1 00:03:45.976 --rc genhtml_function_coverage=1 00:03:45.976 --rc genhtml_legend=1 00:03:45.976 --rc geninfo_all_blocks=1 00:03:45.976 ' 00:03:45.976 17:50:34 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:45.976 --rc lcov_branch_coverage=1 00:03:45.976 --rc lcov_function_coverage=1 00:03:45.976 --rc genhtml_branch_coverage=1 00:03:45.976 --rc genhtml_function_coverage=1 00:03:45.976 --rc genhtml_legend=1 00:03:45.976 --rc geninfo_all_blocks=1 00:03:45.976 --no-external' 00:03:45.976 17:50:34 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:45.976 --rc lcov_branch_coverage=1 00:03:45.976 --rc lcov_function_coverage=1 00:03:45.976 --rc genhtml_branch_coverage=1 00:03:45.976 --rc genhtml_function_coverage=1 00:03:45.976 --rc genhtml_legend=1 00:03:45.976 --rc geninfo_all_blocks=1 00:03:45.976 --no-external' 00:03:45.976 17:50:34 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:46.233 lcov: LCOV version 1.14 00:03:46.233 17:50:34 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:04:04.345 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:04.345 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:04:05.279 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:04:05.279 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:04:05.279 [identical "no functions found" / "geninfo: WARNING: GCOV did not produce any data" pairs followed for lib/ftl/upgrade/ftl_band_upgrade.gcno and lib/ftl/upgrade/ftl_chunk_upgrade.gcno, and again at 00:04:31.812-00:04:31.814 for each test/cpp_headers/*.gcno from accel.gcno through uuid.gcno; the repeated warning pairs are elided]
00:04:31.814 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:04:31.814 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:04:31.814 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:04:31.814 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:04:31.814 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:04:31.814 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:04:31.814 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:04:31.814 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:04:31.814 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:04:31.814 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:04:31.814 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:04:31.814 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:04:31.814 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:04:31.814 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:04:31.814 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:04:31.814 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:04:32.072 17:51:20 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:04:32.072 17:51:20 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:32.072 17:51:20 -- common/autotest_common.sh@10 -- # set +x 00:04:32.072 17:51:20 -- spdk/autotest.sh@91 -- # rm -f 00:04:32.072 17:51:20 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:33.453 0000:82:00.0 (8086 0a54): Already using the nvme driver 00:04:33.454 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:04:33.454 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:04:33.454 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:04:33.454 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:04:33.454 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:04:33.454 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:04:33.454 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:04:33.454 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:04:33.454 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:04:33.454 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:04:33.454 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:04:33.454 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:04:33.454 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:04:33.454 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:04:33.454 0000:80:04.1 (8086 
0e21): Already using the ioatdma driver 00:04:33.454 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:04:33.454 17:51:22 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:04:33.454 17:51:22 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:04:33.454 17:51:22 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:04:33.454 17:51:22 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:04:33.454 17:51:22 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:33.454 17:51:22 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:04:33.454 17:51:22 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:04:33.454 17:51:22 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:33.454 17:51:22 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:33.454 17:51:22 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:04:33.454 17:51:22 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:33.454 17:51:22 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:33.454 17:51:22 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:04:33.454 17:51:22 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:04:33.454 17:51:22 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:33.718 No valid GPT data, bailing 00:04:33.718 17:51:22 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:33.718 17:51:22 -- scripts/common.sh@391 -- # pt= 00:04:33.718 17:51:22 -- scripts/common.sh@392 -- # return 1 00:04:33.718 17:51:22 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:33.718 1+0 records in 00:04:33.718 1+0 records out 00:04:33.718 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0024579 s, 427 MB/s 00:04:33.718 17:51:22 -- spdk/autotest.sh@118 -- # sync 00:04:33.718 17:51:22 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:33.718 17:51:22 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:33.718 17:51:22 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:36.247 17:51:24 -- spdk/autotest.sh@124 -- # uname -s 00:04:36.247 17:51:24 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:04:36.247 17:51:24 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:36.247 17:51:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:36.247 17:51:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:36.247 17:51:24 -- common/autotest_common.sh@10 -- # set +x 00:04:36.247 ************************************ 00:04:36.247 START TEST setup.sh 00:04:36.247 ************************************ 00:04:36.247 17:51:24 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:36.247 * Looking for test storage... 
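The reset traced above first filters out zoned namespaces, then zeroes only whole namespaces that carry no usable GPT. A minimal sketch of that logic, assuming the standard sysfs layout; the spdk-gpt.py and blkid probes mirror the trace, while the surrounding loop is illustrative:

    #!/usr/bin/env bash
    shopt -s extglob                      # needed for the /dev/nvme*n!(*p*) glob
    declare -A zoned_devs
    for nvme in /sys/block/nvme*; do
        # Conventional devices report "none" in queue/zoned; anything else
        # is zoned and must be excluded from the destructive wipe below.
        if [[ -e $nvme/queue/zoned && $(cat "$nvme/queue/zoned") != none ]]; then
            zoned_devs[${nvme##*/}]=1
        fi
    done
    for dev in /dev/nvme*n!(*p*); do      # whole namespaces, no partitions
        [[ -n ${zoned_devs[${dev##*/}]} ]] && continue
        scripts/spdk-gpt.py "$dev"        # prints "No valid GPT data, bailing" above
        pt=$(blkid -s PTTYPE -o value "$dev")
        # No partition table found: zero the first MiB to clear stale metadata.
        [[ -z $pt ]] && dd if=/dev/zero of="$dev" bs=1M count=1
    done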
00:04:36.247 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:36.247 17:51:24 -- setup/test-setup.sh@10 -- # uname -s 00:04:36.247 17:51:24 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:36.247 17:51:24 -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:36.247 17:51:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:36.247 17:51:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:36.247 17:51:24 -- common/autotest_common.sh@10 -- # set +x 00:04:36.247 ************************************ 00:04:36.247 START TEST acl 00:04:36.247 ************************************ 00:04:36.247 17:51:24 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:36.247 * Looking for test storage... 00:04:36.247 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:36.247 17:51:24 -- setup/acl.sh@10 -- # get_zoned_devs 00:04:36.247 17:51:24 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:04:36.247 17:51:24 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:04:36.247 17:51:24 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:04:36.247 17:51:24 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:36.247 17:51:24 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:04:36.247 17:51:24 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:04:36.247 17:51:24 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:36.247 17:51:24 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:36.247 17:51:24 -- setup/acl.sh@12 -- # devs=() 00:04:36.247 17:51:24 -- setup/acl.sh@12 -- # declare -a devs 00:04:36.247 17:51:24 -- setup/acl.sh@13 -- # drivers=() 00:04:36.247 17:51:24 -- setup/acl.sh@13 -- # declare -A drivers 00:04:36.247 17:51:24 -- setup/acl.sh@51 -- # setup reset 00:04:36.247 17:51:24 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:36.247 17:51:24 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:37.639 17:51:26 -- setup/acl.sh@52 -- # collect_setup_devs 00:04:37.639 17:51:26 -- setup/acl.sh@16 -- # local dev driver 00:04:37.639 17:51:26 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:37.639 17:51:26 -- setup/acl.sh@15 -- # setup output status 00:04:37.639 17:51:26 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:37.640 17:51:26 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:39.029 Hugepages 00:04:39.029 node hugesize free / total 00:04:39.029 17:51:27 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:39.029 17:51:27 -- setup/acl.sh@19 -- # continue 00:04:39.029 17:51:27 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:39.029 17:51:27 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:39.029 17:51:27 -- setup/acl.sh@19 -- # continue 00:04:39.029 17:51:27 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:39.029 17:51:27 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:39.029 17:51:27 -- setup/acl.sh@19 -- # continue 00:04:39.029 17:51:27 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:39.029 00:04:39.030 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:39.030 17:51:27 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:39.030 17:51:27 -- setup/acl.sh@19 -- # continue 
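The read loop traced here (its per-BDF iterations continue below) consumes 'setup.sh status' output, whose columns are Type, BDF, Vendor, Device, NUMA, Driver, Device, Block devices. A condensed sketch of the same collection logic; the relative script path is illustrative:

    declare -a devs
    declare -A drivers
    while read -r _ dev _ _ _ driver _; do
        [[ $dev == *:*:*.* ]] || continue    # skip header and hugepage rows
        [[ $driver == nvme ]] || continue    # ioatdma-bound functions are passed over
        # The real test also skips BDFs listed in PCI_BLOCKED at this point.
        devs+=("$dev")                       # e.g. 0000:82:00.0
        drivers["$dev"]=$driver
    done < <(scripts/setup.sh status)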
00:04:39.030 17:51:27 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:39.030 17:51:27 -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:04:39.030 17:51:27 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:39.030 17:51:27 -- setup/acl.sh@20 -- # continue 00:04:39.030 17:51:27 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:39.030 17:51:27 -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:04:39.030 17:51:27 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:39.030 17:51:27 -- setup/acl.sh@20 -- # continue 00:04:39.030 17:51:27 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:39.030 17:51:27 -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:04:39.030 17:51:27 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:39.030 17:51:27 -- setup/acl.sh@20 -- # continue 00:04:39.030 17:51:27 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:39.030 17:51:27 -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:04:39.030 17:51:27 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:39.030 17:51:27 -- setup/acl.sh@20 -- # continue 00:04:39.030 17:51:27 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:39.030 17:51:27 -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:04:39.030 17:51:27 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:39.030 17:51:27 -- setup/acl.sh@20 -- # continue 00:04:39.030 17:51:27 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:39.030 17:51:27 -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:04:39.030 17:51:27 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:39.030 17:51:27 -- setup/acl.sh@20 -- # continue 00:04:39.030 17:51:27 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:39.030 17:51:27 -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:04:39.030 17:51:27 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:39.030 17:51:27 -- setup/acl.sh@20 -- # continue 00:04:39.030 17:51:27 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:39.030 17:51:27 -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:04:39.030 17:51:27 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:39.030 17:51:27 -- setup/acl.sh@20 -- # continue 00:04:39.030 17:51:27 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:39.030 17:51:27 -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:04:39.030 17:51:27 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:39.030 17:51:27 -- setup/acl.sh@20 -- # continue 00:04:39.030 17:51:27 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:39.030 17:51:27 -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:04:39.030 17:51:27 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:39.030 17:51:27 -- setup/acl.sh@20 -- # continue 00:04:39.030 17:51:27 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:39.030 17:51:27 -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:04:39.030 17:51:27 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:39.030 17:51:27 -- setup/acl.sh@20 -- # continue 00:04:39.030 17:51:27 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:39.030 17:51:27 -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:04:39.030 17:51:27 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:39.030 17:51:27 -- setup/acl.sh@20 -- # continue 00:04:39.030 17:51:27 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:39.030 17:51:27 -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:04:39.030 17:51:27 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:39.030 17:51:27 -- setup/acl.sh@20 -- # 
continue 00:04:39.030 17:51:27 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:39.030 17:51:27 -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:04:39.030 17:51:27 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:39.030 17:51:27 -- setup/acl.sh@20 -- # continue 00:04:39.030 17:51:27 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:39.030 17:51:27 -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:04:39.030 17:51:27 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:39.030 17:51:27 -- setup/acl.sh@20 -- # continue 00:04:39.030 17:51:27 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:39.030 17:51:27 -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:04:39.030 17:51:27 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:39.030 17:51:27 -- setup/acl.sh@20 -- # continue 00:04:39.030 17:51:27 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:39.030 17:51:27 -- setup/acl.sh@19 -- # [[ 0000:82:00.0 == *:*:*.* ]] 00:04:39.030 17:51:27 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:39.030 17:51:27 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\2\:\0\0\.\0* ]] 00:04:39.030 17:51:27 -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:39.030 17:51:27 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:39.030 17:51:27 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:39.030 17:51:27 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:04:39.030 17:51:27 -- setup/acl.sh@54 -- # run_test denied denied 00:04:39.030 17:51:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:39.030 17:51:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:39.030 17:51:27 -- common/autotest_common.sh@10 -- # set +x 00:04:39.030 ************************************ 00:04:39.030 START TEST denied 00:04:39.030 ************************************ 00:04:39.030 17:51:27 -- common/autotest_common.sh@1111 -- # denied 00:04:39.030 17:51:27 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:82:00.0' 00:04:39.030 17:51:27 -- setup/acl.sh@38 -- # setup output config 00:04:39.030 17:51:27 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:82:00.0' 00:04:39.030 17:51:27 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:39.030 17:51:27 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:40.404 0000:82:00.0 (8086 0a54): Skipping denied controller at 0000:82:00.0 00:04:40.404 17:51:29 -- setup/acl.sh@40 -- # verify 0000:82:00.0 00:04:40.404 17:51:29 -- setup/acl.sh@28 -- # local dev driver 00:04:40.404 17:51:29 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:40.404 17:51:29 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:82:00.0 ]] 00:04:40.404 17:51:29 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:82:00.0/driver 00:04:40.404 17:51:29 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:40.404 17:51:29 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:40.404 17:51:29 -- setup/acl.sh@41 -- # setup reset 00:04:40.404 17:51:29 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:40.404 17:51:29 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:42.934 00:04:42.934 real 0m3.772s 00:04:42.934 user 0m1.047s 00:04:42.934 sys 0m1.923s 00:04:42.934 17:51:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:42.934 17:51:31 -- common/autotest_common.sh@10 -- # set +x 00:04:42.934 ************************************ 00:04:42.934 END TEST denied 00:04:42.934 ************************************ 00:04:42.934 17:51:31 
-- setup/acl.sh@55 -- # run_test allowed allowed 00:04:42.934 17:51:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:42.934 17:51:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:42.934 17:51:31 -- common/autotest_common.sh@10 -- # set +x 00:04:42.934 ************************************ 00:04:42.934 START TEST allowed 00:04:42.934 ************************************ 00:04:42.934 17:51:31 -- common/autotest_common.sh@1111 -- # allowed 00:04:42.934 17:51:31 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:82:00.0 00:04:42.934 17:51:31 -- setup/acl.sh@45 -- # setup output config 00:04:42.934 17:51:31 -- setup/acl.sh@46 -- # grep -E '0000:82:00.0 .*: nvme -> .*' 00:04:42.934 17:51:31 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:42.934 17:51:31 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:45.461 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:04:45.461 17:51:34 -- setup/acl.sh@47 -- # verify 00:04:45.461 17:51:34 -- setup/acl.sh@28 -- # local dev driver 00:04:45.461 17:51:34 -- setup/acl.sh@48 -- # setup reset 00:04:45.461 17:51:34 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:45.461 17:51:34 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:46.836 00:04:46.836 real 0m3.914s 00:04:46.836 user 0m0.945s 00:04:46.836 sys 0m1.906s 00:04:46.836 17:51:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:46.836 17:51:35 -- common/autotest_common.sh@10 -- # set +x 00:04:46.836 ************************************ 00:04:46.836 END TEST allowed 00:04:46.836 ************************************ 00:04:46.836 00:04:46.836 real 0m10.761s 00:04:46.836 user 0m3.088s 00:04:46.836 sys 0m5.874s 00:04:46.836 17:51:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:46.836 17:51:35 -- common/autotest_common.sh@10 -- # set +x 00:04:46.836 ************************************ 00:04:46.836 END TEST acl 00:04:46.836 ************************************ 00:04:46.836 17:51:35 -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:46.836 17:51:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:46.836 17:51:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:46.836 17:51:35 -- common/autotest_common.sh@10 -- # set +x 00:04:47.095 ************************************ 00:04:47.095 START TEST hugepages 00:04:47.095 ************************************ 00:04:47.095 17:51:35 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:47.095 * Looking for test storage... 
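Both the denied and allowed tests above reduce to a single verify step: resolving which kernel driver is bound to the target BDF and comparing it against the expected one. A standalone sketch (the BDF is taken from the log; everything else is illustrative):

    bdf=0000:82:00.0
    if [[ -e /sys/bus/pci/devices/$bdf/driver ]]; then
        # The driver symlink resolves to /sys/bus/pci/drivers/<name>.
        driver=$(basename "$(readlink -f "/sys/bus/pci/devices/$bdf/driver")")
    else
        driver=unbound
    fi
    echo "$bdf -> $driver"    # nvme while allowed, vfio-pci after rebinding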
00:04:47.095 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:47.095 17:51:35 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:47.095 17:51:35 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:47.095 17:51:35 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:47.095 17:51:35 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:47.095 17:51:35 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:47.095 17:51:35 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:47.095 17:51:35 -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:47.096 17:51:35 -- setup/common.sh@18 -- # local node= 00:04:47.096 17:51:35 -- setup/common.sh@19 -- # local var val 00:04:47.096 17:51:35 -- setup/common.sh@20 -- # local mem_f mem 00:04:47.096 17:51:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:47.096 17:51:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:47.096 17:51:35 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:47.096 17:51:35 -- setup/common.sh@28 -- # mapfile -t mem 00:04:47.096 17:51:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:47.096 17:51:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.096 17:51:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.096 17:51:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026668 kB' 'MemFree: 22138512 kB' 'MemAvailable: 26349616 kB' 'Buffers: 2696 kB' 'Cached: 15004128 kB' 'SwapCached: 0 kB' 'Active: 11758844 kB' 'Inactive: 3691820 kB' 'Active(anon): 11133884 kB' 'Inactive(anon): 0 kB' 'Active(file): 624960 kB' 'Inactive(file): 3691820 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 447512 kB' 'Mapped: 210280 kB' 'Shmem: 10690044 kB' 'KReclaimable: 440680 kB' 'Slab: 824560 kB' 'SReclaimable: 440680 kB' 'SUnreclaim: 383880 kB' 'KernelStack: 12640 kB' 'PageTables: 9108 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 28304784 kB' 'Committed_AS: 12332312 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196992 kB' 'VmallocChunk: 0 kB' 'Percpu: 55872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1883740 kB' 'DirectMap2M: 19007488 kB' 'DirectMap1G: 31457280 kB' 00:04:47.096 17:51:35 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.096 17:51:35 -- setup/common.sh@32 -- # continue 00:04:47.096 17:51:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.096 17:51:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.096 17:51:35 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.096 17:51:35 -- setup/common.sh@32 -- # continue 00:04:47.096 17:51:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.096 17:51:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.096 17:51:35 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.096 17:51:35 -- setup/common.sh@32 -- # continue 00:04:47.096 17:51:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.096 17:51:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.096 17:51:35 -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.096 17:51:35 -- setup/common.sh@32 -- # continue [ ...the same IFS=': ' / read -r var val _ / [[ <field> == \H\u\g\e\p\a\g\e\s\i\z\e ]] / continue xtrace repeats for every /proc/meminfo field from Cached through ShmemHugePages... ] 00:04:47.097 17:51:35 -- setup/common.sh@31 -- # IFS=': '
00:04:47.097 17:51:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.097 17:51:35 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.097 17:51:35 -- setup/common.sh@32 -- # continue 00:04:47.097 17:51:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.097 17:51:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.097 17:51:35 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.097 17:51:35 -- setup/common.sh@32 -- # continue 00:04:47.097 17:51:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.097 17:51:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.097 17:51:35 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.097 17:51:35 -- setup/common.sh@32 -- # continue 00:04:47.097 17:51:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.097 17:51:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.097 17:51:35 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.097 17:51:35 -- setup/common.sh@32 -- # continue 00:04:47.097 17:51:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.097 17:51:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.097 17:51:35 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.097 17:51:35 -- setup/common.sh@32 -- # continue 00:04:47.097 17:51:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.097 17:51:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.097 17:51:35 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.097 17:51:35 -- setup/common.sh@32 -- # continue 00:04:47.097 17:51:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.097 17:51:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.097 17:51:35 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.097 17:51:35 -- setup/common.sh@32 -- # continue 00:04:47.097 17:51:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.097 17:51:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.097 17:51:35 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.097 17:51:35 -- setup/common.sh@32 -- # continue 00:04:47.097 17:51:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.097 17:51:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.097 17:51:35 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.097 17:51:35 -- setup/common.sh@32 -- # continue 00:04:47.097 17:51:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.097 17:51:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.097 17:51:35 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.097 17:51:35 -- setup/common.sh@32 -- # continue 00:04:47.097 17:51:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.097 17:51:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.097 17:51:35 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.097 17:51:35 -- setup/common.sh@33 -- # echo 2048 00:04:47.097 17:51:35 -- setup/common.sh@33 -- # return 0 00:04:47.097 17:51:35 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:47.097 17:51:35 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:47.097 17:51:35 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:47.097 17:51:35 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:47.097 17:51:35 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:47.097 17:51:35 -- 
setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:47.097 17:51:35 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:47.097 17:51:35 -- setup/hugepages.sh@207 -- # get_nodes 00:04:47.097 17:51:35 -- setup/hugepages.sh@27 -- # local node 00:04:47.097 17:51:35 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:47.097 17:51:35 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:47.097 17:51:35 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:47.097 17:51:35 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:47.097 17:51:35 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:47.097 17:51:35 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:47.097 17:51:35 -- setup/hugepages.sh@208 -- # clear_hp 00:04:47.097 17:51:35 -- setup/hugepages.sh@37 -- # local node hp 00:04:47.097 17:51:35 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:47.097 17:51:35 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:47.097 17:51:35 -- setup/hugepages.sh@41 -- # echo 0 00:04:47.097 17:51:35 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:47.097 17:51:35 -- setup/hugepages.sh@41 -- # echo 0 00:04:47.097 17:51:35 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:47.097 17:51:35 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:47.097 17:51:35 -- setup/hugepages.sh@41 -- # echo 0 00:04:47.097 17:51:35 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:47.097 17:51:35 -- setup/hugepages.sh@41 -- # echo 0 00:04:47.097 17:51:35 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:47.097 17:51:35 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:47.097 17:51:35 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:47.097 17:51:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:47.097 17:51:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:47.097 17:51:35 -- common/autotest_common.sh@10 -- # set +x 00:04:47.355 ************************************ 00:04:47.356 START TEST default_setup 00:04:47.356 ************************************ 00:04:47.356 17:51:36 -- common/autotest_common.sh@1111 -- # default_setup 00:04:47.356 17:51:36 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:47.356 17:51:36 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:47.356 17:51:36 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:47.356 17:51:36 -- setup/hugepages.sh@51 -- # shift 00:04:47.356 17:51:36 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:47.356 17:51:36 -- setup/hugepages.sh@52 -- # local node_ids 00:04:47.356 17:51:36 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:47.356 17:51:36 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:47.356 17:51:36 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:47.356 17:51:36 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:47.356 17:51:36 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:47.356 17:51:36 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:47.356 17:51:36 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:47.356 17:51:36 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:47.356 17:51:36 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:47.356 17:51:36 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 
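clear_hp above zeroes every per-node hugepage pool, after which default_setup reserves 1024 pages on node0 only (2097152 kB requested / 2048 kB per page = 1024). The equivalent sysfs writes, sketched for a root shell:

    # Zero all pools first so stale reservations cannot skew the test.
    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            echo 0 > "$hp/nr_hugepages"
        done
    done
    # Reserve 1024 x 2048 kB pages on node0, as computed by get_test_nr_hugepages.
    echo 1024 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages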
00:04:47.356 17:51:36 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:47.356 17:51:36 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:47.356 17:51:36 -- setup/hugepages.sh@73 -- # return 0 00:04:47.356 17:51:36 -- setup/hugepages.sh@137 -- # setup output 00:04:47.356 17:51:36 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:47.356 17:51:36 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:48.729 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:48.729 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:48.729 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:48.729 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:48.729 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:48.729 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:48.729 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:48.729 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:48.729 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:48.729 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:48.729 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:48.730 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:48.730 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:48.730 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:48.730 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:48.730 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:49.664 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:04:49.664 17:51:38 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:49.664 17:51:38 -- setup/hugepages.sh@89 -- # local node 00:04:49.664 17:51:38 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:49.664 17:51:38 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:49.664 17:51:38 -- setup/hugepages.sh@92 -- # local surp 00:04:49.664 17:51:38 -- setup/hugepages.sh@93 -- # local resv 00:04:49.664 17:51:38 -- setup/hugepages.sh@94 -- # local anon 00:04:49.664 17:51:38 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:49.664 17:51:38 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:49.664 17:51:38 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:49.664 17:51:38 -- setup/common.sh@18 -- # local node= 00:04:49.664 17:51:38 -- setup/common.sh@19 -- # local var val 00:04:49.664 17:51:38 -- setup/common.sh@20 -- # local mem_f mem 00:04:49.664 17:51:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:49.664 17:51:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:49.664 17:51:38 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:49.664 17:51:38 -- setup/common.sh@28 -- # mapfile -t mem 00:04:49.664 17:51:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:49.664 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.664 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.665 17:51:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026668 kB' 'MemFree: 24195764 kB' 'MemAvailable: 28406860 kB' 'Buffers: 2696 kB' 'Cached: 15004228 kB' 'SwapCached: 0 kB' 'Active: 11778156 kB' 'Inactive: 3691820 kB' 'Active(anon): 11153196 kB' 'Inactive(anon): 0 kB' 'Active(file): 624960 kB' 'Inactive(file): 3691820 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 466340 kB' 'Mapped: 210368 kB' 'Shmem: 10690144 kB' 'KReclaimable: 440672 kB' 'Slab: 824428 kB' 'SReclaimable: 440672 kB' 'SUnreclaim: 383756 kB' 'KernelStack: 12608 
kB' 'PageTables: 9128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353360 kB' 'Committed_AS: 12359328 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197152 kB' 'VmallocChunk: 0 kB' 'Percpu: 55872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1883740 kB' 'DirectMap2M: 19007488 kB' 'DirectMap1G: 31457280 kB' 00:04:49.665 17:51:38 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.665 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.665 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.665 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.665 17:51:38 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.665 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.665 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.665 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.665 17:51:38 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.665 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.665 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.665 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.665 17:51:38 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.665 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.665 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.665 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.665 17:51:38 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.665 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.665 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.665 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.665 17:51:38 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.665 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.665 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.665 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.665 17:51:38 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.665 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.665 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.665 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.665 17:51:38 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.665 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.665 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.665 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.665 17:51:38 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.665 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.665 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.665 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.665 17:51:38 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.665 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.665 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.665 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.665 17:51:38 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.665 
17:51:38 -- setup/common.sh@32 -- # continue [ ...the same IFS=': ' / read -r var val _ / [[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue xtrace repeats for every /proc/meminfo field from Inactive(file) through VmallocTotal... ] 00:04:49.665 17:51:38 --
setup/common.sh@31 -- # read -r var val _ 00:04:49.665 17:51:38 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.665 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.665 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.665 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.665 17:51:38 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.665 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.665 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.665 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.666 17:51:38 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.666 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.666 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.666 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.666 17:51:38 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.666 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.666 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.666 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.666 17:51:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.666 17:51:38 -- setup/common.sh@33 -- # echo 0 00:04:49.666 17:51:38 -- setup/common.sh@33 -- # return 0 00:04:49.666 17:51:38 -- setup/hugepages.sh@97 -- # anon=0 00:04:49.666 17:51:38 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:49.666 17:51:38 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:49.666 17:51:38 -- setup/common.sh@18 -- # local node= 00:04:49.666 17:51:38 -- setup/common.sh@19 -- # local var val 00:04:49.666 17:51:38 -- setup/common.sh@20 -- # local mem_f mem 00:04:49.666 17:51:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:49.666 17:51:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:49.666 17:51:38 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:49.666 17:51:38 -- setup/common.sh@28 -- # mapfile -t mem 00:04:49.666 17:51:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:49.666 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.666 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.666 17:51:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026668 kB' 'MemFree: 24203004 kB' 'MemAvailable: 28414100 kB' 'Buffers: 2696 kB' 'Cached: 15004240 kB' 'SwapCached: 0 kB' 'Active: 11779392 kB' 'Inactive: 3691820 kB' 'Active(anon): 11154432 kB' 'Inactive(anon): 0 kB' 'Active(file): 624960 kB' 'Inactive(file): 3691820 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 467492 kB' 'Mapped: 210368 kB' 'Shmem: 10690156 kB' 'KReclaimable: 440672 kB' 'Slab: 824416 kB' 'SReclaimable: 440672 kB' 'SUnreclaim: 383744 kB' 'KernelStack: 12864 kB' 'PageTables: 9772 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353360 kB' 'Committed_AS: 12357944 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197232 kB' 'VmallocChunk: 0 kB' 'Percpu: 55872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1883740 
kB' 'DirectMap2M: 19007488 kB' 'DirectMap1G: 31457280 kB' 00:04:49.666 17:51:38 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.666 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.666 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.666 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.666 17:51:38 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.666 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.666 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.666 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.666 17:51:38 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.666 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.666 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.666 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.666 17:51:38 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.666 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.666 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.666 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.666 17:51:38 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.666 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.666 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.666 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.666 17:51:38 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.666 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.666 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.666 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.666 17:51:38 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.666 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.666 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.666 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.666 17:51:38 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.666 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.666 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.666 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.666 17:51:38 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.666 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.666 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.666 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.666 17:51:38 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.666 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.666 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.666 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.666 17:51:38 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.666 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.666 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.666 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.666 17:51:38 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.666 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.666 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.666 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.666 17:51:38 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.666 17:51:38 -- 
setup/common.sh@32 -- # continue 00:04:49.666 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.666 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.666 17:51:38 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.666 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.666 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.666 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.666 17:51:38 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.666 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.666 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.666 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.666 17:51:38 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.666 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.928 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.928 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.928 17:51:38 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.928 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.928 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.928 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.928 17:51:38 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.928 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.928 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.928 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.928 17:51:38 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.928 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.928 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.928 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.928 17:51:38 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.928 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.928 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.928 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.928 17:51:38 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.928 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.928 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.928 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.928 17:51:38 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.928 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.928 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.928 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.928 17:51:38 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.928 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.928 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.928 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.928 17:51:38 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.928 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.928 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.928 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.928 17:51:38 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.928 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.928 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.928 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.928 17:51:38 -- setup/common.sh@32 -- # [[ 
SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.928 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.928 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.928 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.928 17:51:38 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.928 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.928 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.928 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.928 17:51:38 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.928 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.928 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.928 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.928 17:51:38 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.928 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.928 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.928 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.928 17:51:38 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.928 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.928 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.928 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.928 17:51:38 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.928 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.928 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.928 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.928 17:51:38 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.928 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.928 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.928 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.928 17:51:38 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.928 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.928 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.928 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.928 17:51:38 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.928 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.928 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.928 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.928 17:51:38 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.928 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.928 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.928 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.928 17:51:38 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.928 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.928 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.928 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.928 17:51:38 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.928 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.928 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.928 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.928 17:51:38 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.928 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.928 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 
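The backslash runs in the trace above (e.g. \A\n\o\n\H\u\g\e\P\a\g\e\s, \H\u\g\e\P\a\g\e\s\_\S\u\r\p) are how set -x re-quotes a quoted [[ == ]] pattern: the script compares each meminfo key literally, not as a glob. A minimal standalone sketch of that scan loop, reconstructed from the trace (not the verbatim setup/common.sh source):

#!/usr/bin/env bash
# Sketch: scan /proc/meminfo for one key, mirroring the loop traced above.
# Names (get_meminfo, var, val) follow the trace; the body is a reconstruction.
get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        # Quoting "$get" forces a literal comparison; xtrace prints the
        # quoted pattern with backslashes, which is what the log shows.
        if [[ $var == "$get" ]]; then
            echo "$val"        # the ' kB' unit, if any, lands in the third field
            return 0
        fi
    done < /proc/meminfo
    return 1
}

get_meminfo HugePages_Surp     # prints 0 on this box, per the dump above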
00:04:49.928 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.928 17:51:38 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.928 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.928 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.928 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.928 17:51:38 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.928 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.928 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.928 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.928 17:51:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.928 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.928 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.928 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.928 17:51:38 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.928 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.928 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.928 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.928 17:51:38 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.928 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.928 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.928 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.928 17:51:38 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.928 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.928 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.928 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.928 17:51:38 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.928 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.928 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.928 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.928 17:51:38 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.928 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.928 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.928 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.928 17:51:38 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.928 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.928 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.928 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.928 17:51:38 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.929 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.929 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.929 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.929 17:51:38 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.929 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.929 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.929 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.929 17:51:38 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.929 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.929 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.929 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.929 17:51:38 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.929 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.929 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.929 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.929 17:51:38 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.929 17:51:38 -- setup/common.sh@33 -- # echo 0 00:04:49.929 17:51:38 -- setup/common.sh@33 -- # return 0 00:04:49.929 17:51:38 -- setup/hugepages.sh@99 -- # surp=0 00:04:49.929 17:51:38 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:49.929 17:51:38 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:49.929 17:51:38 -- setup/common.sh@18 -- # local node= 00:04:49.929 17:51:38 -- setup/common.sh@19 -- # local var val 00:04:49.929 17:51:38 -- setup/common.sh@20 -- # local mem_f mem 00:04:49.929 17:51:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:49.929 17:51:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:49.929 17:51:38 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:49.929 17:51:38 -- setup/common.sh@28 -- # mapfile -t mem 00:04:49.929 17:51:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:49.929 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.929 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.929 17:51:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026668 kB' 'MemFree: 24202640 kB' 'MemAvailable: 28413728 kB' 'Buffers: 2696 kB' 'Cached: 15004244 kB' 'SwapCached: 0 kB' 'Active: 11779408 kB' 'Inactive: 3691820 kB' 'Active(anon): 11154448 kB' 'Inactive(anon): 0 kB' 'Active(file): 624960 kB' 'Inactive(file): 3691820 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 467512 kB' 'Mapped: 210352 kB' 'Shmem: 10690160 kB' 'KReclaimable: 440664 kB' 'Slab: 824468 kB' 'SReclaimable: 440664 kB' 'SUnreclaim: 383804 kB' 'KernelStack: 13008 kB' 'PageTables: 11036 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353360 kB' 'Committed_AS: 12359352 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197216 kB' 'VmallocChunk: 0 kB' 'Percpu: 55872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1883740 kB' 'DirectMap2M: 19007488 kB' 'DirectMap1G: 31457280 kB' 00:04:49.929 17:51:38 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.929 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.929 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.929 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.929 17:51:38 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.929 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.929 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.929 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.929 17:51:38 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.929 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.929 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.929 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.929 17:51:38 -- setup/common.sh@32 -- # 
[[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.929 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.929 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.929 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.929 17:51:38 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.929 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.929 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.929 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.929 17:51:38 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.929 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.929 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.929 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.929 17:51:38 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.929 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.929 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.929 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.929 17:51:38 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.929 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.929 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.929 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.929 17:51:38 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.929 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.929 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.929 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.929 17:51:38 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.929 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.929 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.929 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.929 17:51:38 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.929 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.929 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.929 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.929 17:51:38 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.929 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.929 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.929 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.929 17:51:38 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.929 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.929 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.929 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.929 17:51:38 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.929 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.929 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.929 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.929 17:51:38 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.929 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.929 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.929 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.929 17:51:38 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.929 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.929 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.929 17:51:38 
-- setup/common.sh@31 -- # read -r var val _ 00:04:49.929 17:51:38 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.929 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.929 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.929 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.929 17:51:38 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.929 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.929 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.929 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.929 17:51:38 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.929 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.929 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.929 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.929 17:51:38 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.929 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.929 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.929 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.929 17:51:38 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.929 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.929 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.929 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.929 17:51:38 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.929 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.929 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.929 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.929 17:51:38 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.929 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.929 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.929 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.929 17:51:38 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.929 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.929 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.929 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.929 17:51:38 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.929 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.929 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.929 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.929 17:51:38 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.929 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.929 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.929 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.930 17:51:38 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.930 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.930 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.930 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.930 17:51:38 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.930 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.930 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.930 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.930 17:51:38 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.930 17:51:38 -- setup/common.sh@32 -- # continue 
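The get_meminfo calls above run with node= empty, so the -e test on /sys/devices/system/node/node/meminfo fails and the scan falls back to /proc/meminfo; with a node id (as in the node=0 pass later in this run) the per-node sysfs file is used instead, and its "Node N " line prefixes are stripped by the extglob expansion traced at common.sh@29. A sketch of that source selection, reconstructed from the trace:

#!/usr/bin/env bash
# Sketch of the node-aware meminfo source selection traced above.
# Per-node sysfs lines look like "Node 0 MemTotal: 24572356 kB".
shopt -s extglob                          # +([0-9]) needs extglob
node=${1:-}                               # empty -> system-wide stats
mem_f=/proc/meminfo
if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo
fi
mapfile -t mem < "$mem_f"
mem=("${mem[@]#Node +([0-9]) }")          # drop any "Node N " prefix
printf '%s\n' "${mem[@]}"                 # normalized key/value lines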
00:04:49.930 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.930 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.930 17:51:38 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.930 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.930 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.930 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.930 17:51:38 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.930 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.930 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.930 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.930 17:51:38 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.930 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.930 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.930 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.930 17:51:38 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.930 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.930 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.930 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.930 17:51:38 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.930 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.930 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.930 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.930 17:51:38 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.930 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.930 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.930 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.930 17:51:38 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.930 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.930 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.930 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.930 17:51:38 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.930 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.930 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.930 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.930 17:51:38 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.930 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.930 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.930 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.930 17:51:38 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.930 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.930 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.930 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.930 17:51:38 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.930 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.930 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.930 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.930 17:51:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.930 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.930 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.930 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.930 17:51:38 -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.930 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.930 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.930 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.930 17:51:38 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.930 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.930 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.930 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.930 17:51:38 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.930 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.930 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.930 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.930 17:51:38 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.930 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.930 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.930 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.930 17:51:38 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.930 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.930 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.930 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.930 17:51:38 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.930 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.930 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.930 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.930 17:51:38 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.930 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.930 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.930 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.930 17:51:38 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.930 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.930 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.930 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.930 17:51:38 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.930 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.930 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.930 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.930 17:51:38 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.930 17:51:38 -- setup/common.sh@33 -- # echo 0 00:04:49.930 17:51:38 -- setup/common.sh@33 -- # return 0 00:04:49.930 17:51:38 -- setup/hugepages.sh@100 -- # resv=0 00:04:49.930 17:51:38 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:49.930 nr_hugepages=1024 00:04:49.930 17:51:38 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:49.930 resv_hugepages=0 00:04:49.930 17:51:38 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:49.930 surplus_hugepages=0 00:04:49.930 17:51:38 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:49.930 anon_hugepages=0 00:04:49.930 17:51:38 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:49.930 17:51:38 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:49.930 17:51:38 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:49.930 17:51:38 -- setup/common.sh@17 -- # local 
get=HugePages_Total 00:04:49.930 17:51:38 -- setup/common.sh@18 -- # local node= 00:04:49.930 17:51:38 -- setup/common.sh@19 -- # local var val 00:04:49.930 17:51:38 -- setup/common.sh@20 -- # local mem_f mem 00:04:49.930 17:51:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:49.930 17:51:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:49.930 17:51:38 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:49.930 17:51:38 -- setup/common.sh@28 -- # mapfile -t mem 00:04:49.930 17:51:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:49.930 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.930 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.930 17:51:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026668 kB' 'MemFree: 24203628 kB' 'MemAvailable: 28414716 kB' 'Buffers: 2696 kB' 'Cached: 15004260 kB' 'SwapCached: 0 kB' 'Active: 11778512 kB' 'Inactive: 3691820 kB' 'Active(anon): 11153552 kB' 'Inactive(anon): 0 kB' 'Active(file): 624960 kB' 'Inactive(file): 3691820 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 466652 kB' 'Mapped: 210788 kB' 'Shmem: 10690176 kB' 'KReclaimable: 440664 kB' 'Slab: 824468 kB' 'SReclaimable: 440664 kB' 'SUnreclaim: 383804 kB' 'KernelStack: 12672 kB' 'PageTables: 9156 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353360 kB' 'Committed_AS: 12360476 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197088 kB' 'VmallocChunk: 0 kB' 'Percpu: 55872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1883740 kB' 'DirectMap2M: 19007488 kB' 'DirectMap1G: 31457280 kB' 00:04:49.930 17:51:38 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.930 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.930 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.930 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.930 17:51:38 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.930 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.930 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.930 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.930 17:51:38 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.930 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.930 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.930 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.930 17:51:38 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.930 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.930 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.930 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.930 17:51:38 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.930 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.930 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.930 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.931 17:51:38 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
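With anon, surp and resv all established as 0 above, the checks at hugepages.sh@107-@109 reduce to asserting that the kernel's hugepage pool matches the requested size. A worked restatement with this run's numbers (values copied from the meminfo dumps; the identity is a sketch of what the assertions check, not the script's literal code):

#!/usr/bin/env bash
# Worked check of the pool identity asserted above, using this run's values.
nr_hugepages=1024    # HugePages_Total
surp=0               # HugePages_Surp
resv=0               # HugePages_Rsvd (AnonHugePages is 0 kB as well)
(( 1024 == nr_hugepages + surp + resv )) && echo "pool consistent"
# Cross-check: Hugetlb = 1024 pages * 2048 kB = 2097152 kB, matching the
# 'Hugetlb: 2097152 kB' line in the dumps.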
00:04:49.931 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.931 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.931 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.931 17:51:38 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.931 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.931 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.931 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.931 17:51:38 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.931 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.931 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.931 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.931 17:51:38 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.931 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.931 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.931 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.931 17:51:38 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.931 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.931 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.931 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.931 17:51:38 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.931 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.931 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.931 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.931 17:51:38 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.931 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.931 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.931 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.931 17:51:38 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.931 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.931 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.931 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.931 17:51:38 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.931 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.931 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.931 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.931 17:51:38 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.931 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.931 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.931 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.931 17:51:38 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.931 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.931 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.931 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.931 17:51:38 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.931 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.931 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.931 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.931 17:51:38 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.931 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.931 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.931 17:51:38 -- setup/common.sh@31 -- # 
read -r var val _ 00:04:49.931 17:51:38 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.931 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.931 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.931 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.931 17:51:38 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.931 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.931 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.931 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.931 17:51:38 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.931 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.931 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.931 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.931 17:51:38 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.931 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.931 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.931 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.931 17:51:38 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.931 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.931 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.931 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.931 17:51:38 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.931 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.931 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.931 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.931 17:51:38 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.931 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.931 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.931 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.931 17:51:38 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.931 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.931 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.931 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.931 17:51:38 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.931 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.931 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.931 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.931 17:51:38 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.931 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.931 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.931 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.931 17:51:38 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.931 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.931 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.931 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.931 17:51:38 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.931 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.931 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.931 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.931 17:51:38 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.931 17:51:38 -- setup/common.sh@32 -- # 
continue 00:04:49.931 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.931 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.931 17:51:38 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.931 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.931 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.931 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.931 17:51:38 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.931 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.931 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.931 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.931 17:51:38 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.931 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.931 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.931 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.931 17:51:38 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.931 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.931 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.931 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.931 17:51:38 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.931 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.931 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.931 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.931 17:51:38 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.931 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.931 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.931 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.931 17:51:38 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.931 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.931 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.931 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.931 17:51:38 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.931 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.931 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.931 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.931 17:51:38 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.931 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.931 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.931 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.931 17:51:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.931 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.931 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.931 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.931 17:51:38 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.931 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.931 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.931 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.931 17:51:38 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.931 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.931 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.931 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 
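The get_nodes pass traced just below records two NUMA nodes with all 1024 pages placed on node 0 (nodes_sys[0]=1024, nodes_sys[1]=0, no_nodes=2). The xtrace shows only the resulting assignments; one plausible way to derive such counts from sysfs (an assumption for illustration, not the traced helper's actual source):

#!/usr/bin/env bash
# Sketch: enumerate NUMA nodes and read each node's 2 MB hugepage pool.
# Assumption: illustrative reconstruction; the traced get_nodes shows only
# the resulting per-node values, not where they come from.
shopt -s extglob nullglob
declare -A nodes_sys
for node in /sys/devices/system/node/node+([0-9]); do
    nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
done
echo "no_nodes=${#nodes_sys[@]}"
for n in "${!nodes_sys[@]}"; do
    echo "node$n=${nodes_sys[$n]}"
done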
00:04:49.931 17:51:38 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.931 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.931 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.932 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.932 17:51:38 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.932 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.932 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.932 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.932 17:51:38 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.932 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.932 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.932 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.932 17:51:38 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.932 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.932 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.932 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.932 17:51:38 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.932 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.932 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.932 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.932 17:51:38 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.932 17:51:38 -- setup/common.sh@33 -- # echo 1024 00:04:49.932 17:51:38 -- setup/common.sh@33 -- # return 0 00:04:49.932 17:51:38 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:49.932 17:51:38 -- setup/hugepages.sh@112 -- # get_nodes 00:04:49.932 17:51:38 -- setup/hugepages.sh@27 -- # local node 00:04:49.932 17:51:38 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:49.932 17:51:38 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:49.932 17:51:38 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:49.932 17:51:38 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:49.932 17:51:38 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:49.932 17:51:38 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:49.932 17:51:38 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:49.932 17:51:38 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:49.932 17:51:38 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:49.932 17:51:38 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:49.932 17:51:38 -- setup/common.sh@18 -- # local node=0 00:04:49.932 17:51:38 -- setup/common.sh@19 -- # local var val 00:04:49.932 17:51:38 -- setup/common.sh@20 -- # local mem_f mem 00:04:49.932 17:51:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:49.932 17:51:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:49.932 17:51:38 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:49.932 17:51:38 -- setup/common.sh@28 -- # mapfile -t mem 00:04:49.932 17:51:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:49.932 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.932 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.932 17:51:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24572356 kB' 'MemFree: 17604788 kB' 'MemUsed: 6967568 kB' 'SwapCached: 0 
kB' 'Active: 3783332 kB' 'Inactive: 167944 kB' 'Active(anon): 3452388 kB' 'Inactive(anon): 0 kB' 'Active(file): 330944 kB' 'Inactive(file): 167944 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3711108 kB' 'Mapped: 166372 kB' 'AnonPages: 243324 kB' 'Shmem: 3212220 kB' 'KernelStack: 7448 kB' 'PageTables: 4012 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 294640 kB' 'Slab: 484488 kB' 'SReclaimable: 294640 kB' 'SUnreclaim: 189848 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:49.932 17:51:38 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.932 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.932 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.932 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.932 17:51:38 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.932 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.932 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.932 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.932 17:51:38 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.932 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.932 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.932 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.932 17:51:38 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.932 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.932 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.932 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.932 17:51:38 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.932 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.932 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.932 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.932 17:51:38 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.932 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.932 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.932 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.932 17:51:38 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.932 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.932 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.932 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.932 17:51:38 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.932 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.932 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.932 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.932 17:51:38 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.932 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.932 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.932 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.932 17:51:38 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.932 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.932 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.932 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.932 17:51:38 
-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.932 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.932 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.932 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.932 17:51:38 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.932 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.932 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.932 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.932 17:51:38 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.932 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.932 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.932 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.932 17:51:38 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.932 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.932 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.932 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.932 17:51:38 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.932 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.932 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.932 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.932 17:51:38 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.932 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.932 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.932 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.932 17:51:38 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.932 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.932 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.932 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.932 17:51:38 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.932 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.932 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.932 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.932 17:51:38 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.932 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.932 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.932 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.932 17:51:38 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.932 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.932 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.932 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.932 17:51:38 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.932 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.932 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.932 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.932 17:51:38 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.932 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.932 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.932 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.932 17:51:38 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.932 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.932 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 
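The node0 dump earlier in this pass (the sysfs meminfo read with node=0) carries a MemUsed field that the system-wide /proc/meminfo lacks; it is simply MemTotal minus MemFree for that node. Checking this run's numbers:

#!/usr/bin/env bash
# Worked check against the node0 dump above: MemUsed = MemTotal - MemFree.
mem_total=24572356    # kB, node0 MemTotal
mem_free=17604788     # kB, node0 MemFree
echo "MemUsed=$(( mem_total - mem_free )) kB"   # 6967568, as dumped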
00:04:49.932 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.932 17:51:38 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.932 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.932 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.932 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.932 17:51:38 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.932 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.932 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.932 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.932 17:51:38 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.932 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.933 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.933 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.933 17:51:38 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.933 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.933 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.933 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.933 17:51:38 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.933 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.933 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.933 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.933 17:51:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.933 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.933 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.933 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.933 17:51:38 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.933 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.933 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.933 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.933 17:51:38 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.933 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.933 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.933 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.933 17:51:38 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.933 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.933 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.933 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.933 17:51:38 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.933 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.933 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.933 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.933 17:51:38 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.933 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.933 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.933 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.933 17:51:38 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.933 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.933 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.933 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.933 17:51:38 -- setup/common.sh@32 -- # [[ HugePages_Free == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.933 17:51:38 -- setup/common.sh@32 -- # continue 00:04:49.933 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.933 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.933 17:51:38 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.933 17:51:38 -- setup/common.sh@33 -- # echo 0 00:04:49.933 17:51:38 -- setup/common.sh@33 -- # return 0 00:04:49.933 17:51:38 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:49.933 17:51:38 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:49.933 17:51:38 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:49.933 17:51:38 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:49.933 17:51:38 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:49.933 node0=1024 expecting 1024 00:04:49.933 17:51:38 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:49.933 00:04:49.933 real 0m2.696s 00:04:49.933 user 0m0.760s 00:04:49.933 sys 0m0.951s 00:04:49.933 17:51:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:49.933 17:51:38 -- common/autotest_common.sh@10 -- # set +x 00:04:49.933 ************************************ 00:04:49.933 END TEST default_setup 00:04:49.933 ************************************ 00:04:49.933 17:51:38 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:49.933 17:51:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:49.933 17:51:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:49.933 17:51:38 -- common/autotest_common.sh@10 -- # set +x 00:04:50.192 ************************************ 00:04:50.192 START TEST per_node_1G_alloc 00:04:50.192 ************************************ 00:04:50.192 17:51:38 -- common/autotest_common.sh@1111 -- # per_node_1G_alloc 00:04:50.192 17:51:38 -- setup/hugepages.sh@143 -- # local IFS=, 00:04:50.192 17:51:38 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:04:50.192 17:51:38 -- setup/hugepages.sh@49 -- # local size=1048576 00:04:50.192 17:51:38 -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:04:50.192 17:51:38 -- setup/hugepages.sh@51 -- # shift 00:04:50.192 17:51:38 -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:04:50.192 17:51:38 -- setup/hugepages.sh@52 -- # local node_ids 00:04:50.192 17:51:38 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:50.192 17:51:38 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:50.192 17:51:38 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:04:50.192 17:51:38 -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:04:50.192 17:51:38 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:50.192 17:51:38 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:50.192 17:51:38 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:50.192 17:51:38 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:50.192 17:51:38 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:50.192 17:51:38 -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:04:50.192 17:51:38 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:50.192 17:51:38 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:50.192 17:51:38 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:50.192 17:51:38 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:50.192 17:51:38 -- setup/hugepages.sh@73 -- # return 0 00:04:50.192 17:51:38 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:50.192 
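The get_test_nr_hugepages trace above converts the requested size into a count of default-sized hugepages and spreads that count across the requested NUMA nodes: 1048576 kB collapses to nr_hugepages=512 for each of nodes 0 and 1. A minimal sketch of that arithmetic, assuming the 2048 kB default hugepage size this host reports (Hugepagesize: 2048 kB); variable names mirror the trace but the glue code is illustrative, not lifted from hugepages.sh:

  # Sketch: derive per-node hugepage counts the way the trace above does.
  size_kb=1048576                      # requested size in kB (1 GiB)
  default_hugepages_kb=2048            # kB per default hugepage (assumed, matches Hugepagesize)
  node_ids=(0 1)                       # from HUGENODE=0,1
  nr_hugepages=$(( size_kb / default_hugepages_kb ))   # 1048576 / 2048 = 512
  declare -A nodes_test
  for node in "${node_ids[@]}"; do
    nodes_test[$node]=$nr_hugepages    # 512 pages on each node
  done
  echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"  # node0=512 node1=512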
17:51:38 -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:04:50.192 17:51:38 -- setup/hugepages.sh@146 -- # setup output 00:04:50.192 17:51:38 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:50.192 17:51:38 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:51.574 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:51.574 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:51.574 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:51.574 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:51.574 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:51.574 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:51.574 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:51.574 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:51.574 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:51.574 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:51.574 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:51.574 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:51.574 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:51.574 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:51.574 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:51.574 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:51.574 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:51.574 17:51:40 -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:04:51.574 17:51:40 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:51.574 17:51:40 -- setup/hugepages.sh@89 -- # local node 00:04:51.574 17:51:40 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:51.574 17:51:40 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:51.574 17:51:40 -- setup/hugepages.sh@92 -- # local surp 00:04:51.574 17:51:40 -- setup/hugepages.sh@93 -- # local resv 00:04:51.574 17:51:40 -- setup/hugepages.sh@94 -- # local anon 00:04:51.574 17:51:40 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:51.574 17:51:40 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:51.574 17:51:40 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:51.574 17:51:40 -- setup/common.sh@18 -- # local node= 00:04:51.574 17:51:40 -- setup/common.sh@19 -- # local var val 00:04:51.574 17:51:40 -- setup/common.sh@20 -- # local mem_f mem 00:04:51.574 17:51:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:51.574 17:51:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:51.574 17:51:40 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:51.574 17:51:40 -- setup/common.sh@28 -- # mapfile -t mem 00:04:51.574 17:51:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:51.574 17:51:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.574 17:51:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.574 17:51:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026668 kB' 'MemFree: 24205952 kB' 'MemAvailable: 28417032 kB' 'Buffers: 2696 kB' 'Cached: 15004312 kB' 'SwapCached: 0 kB' 'Active: 11778616 kB' 'Inactive: 3691820 kB' 'Active(anon): 11153656 kB' 'Inactive(anon): 0 kB' 'Active(file): 624960 kB' 'Inactive(file): 3691820 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 466628 kB' 'Mapped: 210384 
kB' 'Shmem: 10690228 kB' 'KReclaimable: 440656 kB' 'Slab: 824748 kB' 'SReclaimable: 440656 kB' 'SUnreclaim: 384092 kB' 'KernelStack: 12688 kB' 'PageTables: 9208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353360 kB' 'Committed_AS: 12357124 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197184 kB' 'VmallocChunk: 0 kB' 'Percpu: 55872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1883740 kB' 'DirectMap2M: 19007488 kB' 'DirectMap1G: 31457280 kB' 00:04:51.574 17:51:40 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.574 17:51:40 -- setup/common.sh@32 -- # continue
[... the same [[ field == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue xtrace pair repeats for every remaining /proc/meminfo field, MemFree through HardwareCorrupted, with no match ...]
00:04:51.575 17:51:40 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.575 17:51:40 -- setup/common.sh@33 -- # echo 0 00:04:51.575 17:51:40 -- setup/common.sh@33 -- # return 0 00:04:51.575 17:51:40 -- setup/hugepages.sh@97 -- # anon=0 00:04:51.575 17:51:40 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:51.575 17:51:40 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:51.575 17:51:40 -- setup/common.sh@18 -- # local node= 00:04:51.575 17:51:40 -- setup/common.sh@19 -- # local var val 00:04:51.575 17:51:40 -- setup/common.sh@20 -- # local mem_f mem 00:04:51.575 17:51:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:51.575 17:51:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:51.575 17:51:40 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:51.575 17:51:40 -- setup/common.sh@28 -- # mapfile -t mem 00:04:51.575 17:51:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:51.575 17:51:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.575 17:51:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.575 17:51:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026668 kB' 'MemFree: 24205976 kB' 'MemAvailable: 28417056 kB' 'Buffers: 2696 kB' 'Cached: 15004316 kB' 'SwapCached: 0 kB' 'Active: 11778160 kB' 'Inactive: 3691820 kB' 'Active(anon): 11153200 kB' 'Inactive(anon): 0 kB' 'Active(file): 624960 kB' 'Inactive(file): 3691820 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 466244 kB' 'Mapped: 210448 kB' 'Shmem: 10690232 kB' 'KReclaimable: 440656 kB' 'Slab: 824800 kB' 'SReclaimable: 440656 kB' 'SUnreclaim: 384144 kB' 'KernelStack: 12736 kB' 'PageTables: 9304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353360 kB' 'Committed_AS: 12357136 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197152 kB' 'VmallocChunk: 0 kB' 'Percpu: 55872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1883740 kB' 'DirectMap2M: 19007488 kB' 'DirectMap1G: 31457280 kB'
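Every get_meminfo call whose xtrace fills this log follows the same pattern: slurp /proc/meminfo (or a per-node meminfo under /sys/devices/system/node), strip any "Node <n> " prefix, then scan field by field until the requested key matches and echo its value. A condensed re-implementation, written to match the traced behaviour rather than copied from setup/common.sh:

  #!/usr/bin/env bash
  shopt -s extglob                     # needed for the +([0-9]) prefix strip below
  get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # Per-node stats live under /sys and carry a "Node <n> " prefix on each line.
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
      mem_f=/sys/devices/system/node/node$node/meminfo
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # drop the per-node prefix if present
    local line var val _
    for line in "${mem[@]}"; do
      IFS=': ' read -r var val _ <<< "$line"   # "MemTotal: 44026668 kB" -> var=MemTotal val=44026668
      [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
  }
  get_meminfo HugePages_Surp           # prints 0 on the host traced here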
00:04:51.575 17:51:40 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.575 17:51:40 -- setup/common.sh@32 -- # continue
[... the same [[ field == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue xtrace pair repeats for every remaining /proc/meminfo field, MemFree through HugePages_Rsvd, with no match ...]
00:04:51.576 17:51:40 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.576 17:51:40 -- setup/common.sh@33 -- # echo 0 00:04:51.576 17:51:40 -- setup/common.sh@33 -- # return 0 00:04:51.576 17:51:40 -- setup/hugepages.sh@99 -- # surp=0 00:04:51.576 17:51:40 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:51.576 17:51:40 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:51.576 17:51:40 -- setup/common.sh@18 -- # local node= 00:04:51.576 17:51:40 -- setup/common.sh@19 -- # local var val 00:04:51.576 17:51:40 -- setup/common.sh@20 -- # local mem_f mem 00:04:51.576 17:51:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:51.576 17:51:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:51.576 17:51:40 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:51.576 17:51:40 -- setup/common.sh@28 -- # mapfile -t mem 00:04:51.576 17:51:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:51.576 17:51:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.576 17:51:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.577 17:51:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026668 kB' 'MemFree: 24206644 kB' 'MemAvailable: 28417724 kB' 'Buffers: 2696 kB' 'Cached: 15004336 kB' 'SwapCached: 0 kB' 'Active: 11777624 kB' 'Inactive: 3691820 kB' 'Active(anon): 11152664 kB' 'Inactive(anon): 0 kB' 'Active(file): 624960 kB' 'Inactive(file): 3691820 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 465640 kB' 'Mapped: 210372 kB' 'Shmem: 10690252 kB' 'KReclaimable: 440656 kB' 'Slab: 824784 kB' 'SReclaimable: 440656 kB' 'SUnreclaim: 384128 kB' 'KernelStack: 12720 kB' 'PageTables: 9248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353360 kB' 'Committed_AS: 12357152 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197136 kB' 'VmallocChunk: 0 kB' 'Percpu: 55872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1883740 kB' 'DirectMap2M: 19007488 kB' 'DirectMap1G: 31457280 kB'
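For context on the four HugePages_* counters the verifier pulls one get_meminfo call at a time: per the kernel's hugetlbpage documentation, HugePages_Rsvd counts pages a mapping has reserved but not yet faulted in, and HugePages_Surp counts pages allocated above nr_hugepages through overcommit; on a freshly configured host like this one both should read 0. The same counters can be eyeballed in one pass:

  # One-shot view of the hugepage counters the verifier reads individually.
  grep -E '^HugePages_(Total|Free|Rsvd|Surp):' /proc/meminfo
  # HugePages_Total:    1024
  # HugePages_Free:     1024
  # HugePages_Rsvd:        0
  # HugePages_Surp:        0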
00:04:51.577 17:51:40 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.577 17:51:40 -- setup/common.sh@32 -- # continue
[... the same [[ field == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] / continue xtrace pair repeats for every remaining /proc/meminfo field, MemFree through HugePages_Free, with no match ...]
00:04:51.578 17:51:40 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.578 17:51:40 -- setup/common.sh@33 -- # echo 0 00:04:51.578 17:51:40 -- setup/common.sh@33 -- # return 0 00:04:51.578 17:51:40 -- setup/hugepages.sh@100 -- # resv=0 00:04:51.578 17:51:40 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:51.578 nr_hugepages=1024 00:04:51.578 17:51:40 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:51.578 resv_hugepages=0 00:04:51.578 17:51:40 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:51.578 surplus_hugepages=0 00:04:51.578 17:51:40 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:51.578 anon_hugepages=0 00:04:51.578 17:51:40 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:51.578 17:51:40 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:51.578 17:51:40 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:51.578 17:51:40 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:51.578 17:51:40 -- setup/common.sh@18 -- # local node= 00:04:51.578 17:51:40 -- setup/common.sh@19 -- # local var val 00:04:51.578 17:51:40 -- setup/common.sh@20 -- # local mem_f mem 00:04:51.578 17:51:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:51.578 17:51:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:51.578 17:51:40 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:51.578 17:51:40 -- setup/common.sh@28 -- # mapfile -t mem 00:04:51.578 17:51:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:51.578 17:51:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.578 17:51:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.578 17:51:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026668 kB' 'MemFree: 24206944 kB' 'MemAvailable: 28418024 kB' 'Buffers: 2696 kB' 'Cached: 15004340 kB' 'SwapCached: 0 kB' 'Active: 11777748 kB' 'Inactive: 3691820 kB' 'Active(anon): 11152788 kB' 'Inactive(anon): 0 kB' 'Active(file): 624960 kB' 'Inactive(file): 3691820 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 465776 kB' 'Mapped: 210372 kB' 'Shmem: 10690256 kB' 'KReclaimable: 440656 kB' 'Slab: 824784 kB' 'SReclaimable: 440656 kB' 'SUnreclaim: 384128 kB' 'KernelStack: 12736 kB' 'PageTables: 9304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353360 kB' 'Committed_AS: 12357164 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197136 kB' 'VmallocChunk: 0 kB' 'Percpu: 55872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1883740 kB' 'DirectMap2M: 19007488 kB' 'DirectMap1G: 31457280 kB'
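The bookkeeping just traced is the heart of verify_nr_hugepages: the kernel's reported total must equal the expected count once surplus and reserved pages are accounted for (all zero in this run), and the total must match what was requested. A sketch of that check, reusing the get_meminfo sketch above; the helper name is the script's own, the surrounding glue is illustrative:

  # Sketch of the consistency check traced above (values from this run).
  nr_hugepages=1024                      # requested: 512 per node on nodes 0 and 1
  surp=$(get_meminfo HugePages_Surp)     # 0
  resv=$(get_meminfo HugePages_Rsvd)     # 0
  total=$(get_meminfo HugePages_Total)   # 1024
  (( total == nr_hugepages + surp + resv )) || echo "unexpected hugepage count" >&2
  (( total == nr_hugepages )) && echo "nr_hugepages=$nr_hugepages verified"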
00:04:51.578 17:51:40 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.578 17:51:40 -- setup/common.sh@32 -- # continue
[... the same [[ field == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] / continue xtrace pair repeats for each subsequent /proc/meminfo field, MemFree through ShmemHugePages, with no match ...]
00:04:51.579 17:51:40 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.579 17:51:40 -- setup/common.sh@32 -- # continue 00:04:51.579 17:51:40 -- setup/common.sh@31 --
# IFS=': ' 00:04:51.579 17:51:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.579 17:51:40 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.579 17:51:40 -- setup/common.sh@32 -- # continue 00:04:51.579 17:51:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.579 17:51:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.579 17:51:40 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.579 17:51:40 -- setup/common.sh@32 -- # continue 00:04:51.579 17:51:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.579 17:51:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.579 17:51:40 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.579 17:51:40 -- setup/common.sh@32 -- # continue 00:04:51.579 17:51:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.579 17:51:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.579 17:51:40 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.579 17:51:40 -- setup/common.sh@32 -- # continue 00:04:51.579 17:51:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.579 17:51:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.579 17:51:40 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.579 17:51:40 -- setup/common.sh@32 -- # continue 00:04:51.579 17:51:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.579 17:51:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.579 17:51:40 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.579 17:51:40 -- setup/common.sh@33 -- # echo 1024 00:04:51.579 17:51:40 -- setup/common.sh@33 -- # return 0 00:04:51.579 17:51:40 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:51.579 17:51:40 -- setup/hugepages.sh@112 -- # get_nodes 00:04:51.579 17:51:40 -- setup/hugepages.sh@27 -- # local node 00:04:51.579 17:51:40 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:51.579 17:51:40 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:51.579 17:51:40 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:51.579 17:51:40 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:51.579 17:51:40 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:51.579 17:51:40 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:51.579 17:51:40 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:51.579 17:51:40 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:51.580 17:51:40 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:51.580 17:51:40 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:51.580 17:51:40 -- setup/common.sh@18 -- # local node=0 00:04:51.580 17:51:40 -- setup/common.sh@19 -- # local var val 00:04:51.580 17:51:40 -- setup/common.sh@20 -- # local mem_f mem 00:04:51.580 17:51:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:51.580 17:51:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:51.580 17:51:40 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:51.580 17:51:40 -- setup/common.sh@28 -- # mapfile -t mem 00:04:51.580 17:51:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:51.580 17:51:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.580 17:51:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24572356 kB' 'MemFree: 18672100 kB' 'MemUsed: 5900256 kB' 
'SwapCached: 0 kB' 'Active: 3777736 kB' 'Inactive: 167944 kB' 'Active(anon): 3446792 kB' 'Inactive(anon): 0 kB' 'Active(file): 330944 kB' 'Inactive(file): 167944 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3711108 kB' 'Mapped: 166372 kB' 'AnonPages: 237692 kB' 'Shmem: 3212220 kB' 'KernelStack: 7464 kB' 'PageTables: 4020 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 294640 kB' 'Slab: 484584 kB' 'SReclaimable: 294640 kB' 'SUnreclaim: 189944 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:51.580 17:51:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.580 17:51:40 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.580 17:51:40 -- setup/common.sh@32 -- # continue 00:04:51.580 17:51:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.580 17:51:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.580 17:51:40 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.580 17:51:40 -- setup/common.sh@32 -- # continue 00:04:51.580 17:51:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.580 17:51:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.580 17:51:40 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.580 17:51:40 -- setup/common.sh@32 -- # continue 00:04:51.580 17:51:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.580 17:51:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.580 17:51:40 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.580 17:51:40 -- setup/common.sh@32 -- # continue 00:04:51.580 17:51:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.580 17:51:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.580 17:51:40 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.580 17:51:40 -- setup/common.sh@32 -- # continue 00:04:51.580 17:51:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.580 17:51:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.580 17:51:40 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.580 17:51:40 -- setup/common.sh@32 -- # continue 00:04:51.580 17:51:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.580 17:51:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.580 17:51:40 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.580 17:51:40 -- setup/common.sh@32 -- # continue 00:04:51.580 17:51:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.580 17:51:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.580 17:51:40 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.580 17:51:40 -- setup/common.sh@32 -- # continue 00:04:51.580 17:51:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.580 17:51:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.580 17:51:40 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.580 17:51:40 -- setup/common.sh@32 -- # continue 00:04:51.580 17:51:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.580 17:51:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.580 17:51:40 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.580 17:51:40 -- setup/common.sh@32 -- # continue 00:04:51.580 17:51:40 -- setup/common.sh@31 -- # IFS=': ' 
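[Note] The get_meminfo calls traced above all follow the same pattern: pick /proc/meminfo (or a node's own meminfo file), strip the per-node line prefix, then scan key/value pairs until the requested key matches. A minimal sketch of that lookup, with an illustrative function name (not the suite's own helper):

    get_meminfo_sketch() {
        local key=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # Per-node queries read that node's own meminfo file instead.
        if [[ -n $node ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local var val _
        # Per-node files prefix every line with "Node <N> "; strip it so both
        # sources share the "Key: value" shape, then scan for the wanted key.
        while IFS=': ' read -r var val _; do
            if [[ $var == "$key" ]]; then
                echo "$val"
                return 0
            fi
        done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
        return 1
    }

For example, get_meminfo_sketch HugePages_Total would print 1024 here, and get_meminfo_sketch HugePages_Surp 0 prints node 0's surplus count, matching the "echo 1024" / "echo 0" returns seen in the trace.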
00:04:51.580 17:51:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.580 17:51:40 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.580 17:51:40 -- setup/common.sh@32 -- # continue 00:04:51.580 17:51:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.580 17:51:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.580 17:51:40 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.580 17:51:40 -- setup/common.sh@32 -- # continue 00:04:51.580 17:51:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.580 17:51:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.580 17:51:40 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.580 17:51:40 -- setup/common.sh@32 -- # continue 00:04:51.580 17:51:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.580 17:51:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.580 17:51:40 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.580 17:51:40 -- setup/common.sh@32 -- # continue 00:04:51.580 17:51:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.580 17:51:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.580 17:51:40 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.580 17:51:40 -- setup/common.sh@32 -- # continue 00:04:51.580 17:51:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.580 17:51:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.580 17:51:40 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.580 17:51:40 -- setup/common.sh@32 -- # continue 00:04:51.580 17:51:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.580 17:51:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.580 17:51:40 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.580 17:51:40 -- setup/common.sh@32 -- # continue 00:04:51.580 17:51:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.580 17:51:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.580 17:51:40 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.580 17:51:40 -- setup/common.sh@32 -- # continue 00:04:51.580 17:51:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.580 17:51:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.580 17:51:40 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.580 17:51:40 -- setup/common.sh@32 -- # continue 00:04:51.580 17:51:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.580 17:51:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.580 17:51:40 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.580 17:51:40 -- setup/common.sh@32 -- # continue 00:04:51.580 17:51:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.580 17:51:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.580 17:51:40 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.580 17:51:40 -- setup/common.sh@32 -- # continue 00:04:51.580 17:51:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.580 17:51:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.580 17:51:40 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.580 17:51:40 -- setup/common.sh@32 -- # continue 00:04:51.580 17:51:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.580 17:51:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.580 17:51:40 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.580 17:51:40 -- 
setup/common.sh@32 -- # continue 00:04:51.580 17:51:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.580 17:51:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.580 17:51:40 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.580 17:51:40 -- setup/common.sh@32 -- # continue 00:04:51.580 17:51:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.580 17:51:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.580 17:51:40 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.580 17:51:40 -- setup/common.sh@32 -- # continue 00:04:51.580 17:51:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.580 17:51:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.580 17:51:40 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.580 17:51:40 -- setup/common.sh@32 -- # continue 00:04:51.580 17:51:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.580 17:51:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.580 17:51:40 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.580 17:51:40 -- setup/common.sh@32 -- # continue 00:04:51.580 17:51:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.580 17:51:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.580 17:51:40 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.580 17:51:40 -- setup/common.sh@32 -- # continue 00:04:51.580 17:51:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.580 17:51:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.580 17:51:40 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.580 17:51:40 -- setup/common.sh@32 -- # continue 00:04:51.580 17:51:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.580 17:51:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.580 17:51:40 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.580 17:51:40 -- setup/common.sh@32 -- # continue 00:04:51.580 17:51:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.580 17:51:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.580 17:51:40 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.580 17:51:40 -- setup/common.sh@32 -- # continue 00:04:51.580 17:51:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.580 17:51:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.580 17:51:40 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.580 17:51:40 -- setup/common.sh@32 -- # continue 00:04:51.580 17:51:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.580 17:51:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.580 17:51:40 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.580 17:51:40 -- setup/common.sh@32 -- # continue 00:04:51.580 17:51:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.580 17:51:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.580 17:51:40 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.580 17:51:40 -- setup/common.sh@32 -- # continue 00:04:51.580 17:51:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.580 17:51:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.580 17:51:40 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.580 17:51:40 -- setup/common.sh@32 -- # continue 00:04:51.580 17:51:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.580 17:51:40 -- setup/common.sh@31 -- # read -r var val _ 
00:04:51.580 17:51:40 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.580 17:51:40 -- setup/common.sh@32 -- # continue 00:04:51.581 17:51:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.581 17:51:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.581 17:51:40 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.581 17:51:40 -- setup/common.sh@33 -- # echo 0 00:04:51.581 17:51:40 -- setup/common.sh@33 -- # return 0 00:04:51.581 17:51:40 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:51.581 17:51:40 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:51.581 17:51:40 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:51.581 17:51:40 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:51.581 17:51:40 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:51.581 17:51:40 -- setup/common.sh@18 -- # local node=1 00:04:51.581 17:51:40 -- setup/common.sh@19 -- # local var val 00:04:51.581 17:51:40 -- setup/common.sh@20 -- # local mem_f mem 00:04:51.581 17:51:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:51.581 17:51:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:51.581 17:51:40 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:51.581 17:51:40 -- setup/common.sh@28 -- # mapfile -t mem 00:04:51.581 17:51:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:51.581 17:51:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.581 17:51:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.581 17:51:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19454312 kB' 'MemFree: 5535988 kB' 'MemUsed: 13918324 kB' 'SwapCached: 0 kB' 'Active: 7999632 kB' 'Inactive: 3523876 kB' 'Active(anon): 7705616 kB' 'Inactive(anon): 0 kB' 'Active(file): 294016 kB' 'Inactive(file): 3523876 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11295956 kB' 'Mapped: 44000 kB' 'AnonPages: 227644 kB' 'Shmem: 7478064 kB' 'KernelStack: 5256 kB' 'PageTables: 5228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 146016 kB' 'Slab: 340200 kB' 'SReclaimable: 146016 kB' 'SUnreclaim: 194184 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:51.581 17:51:40 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.581 17:51:40 -- setup/common.sh@32 -- # continue 00:04:51.581 17:51:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.581 17:51:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.581 17:51:40 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.581 17:51:40 -- setup/common.sh@32 -- # continue 00:04:51.581 17:51:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.581 17:51:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.581 17:51:40 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.581 17:51:40 -- setup/common.sh@32 -- # continue 00:04:51.581 17:51:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.581 17:51:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.581 17:51:40 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.581 17:51:40 -- setup/common.sh@32 -- # continue 00:04:51.581 17:51:40 -- setup/common.sh@31 -- # IFS=': ' 
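[Note] The surrounding trace accumulates a per-node expectation (nodes_test[node] += resv, then += the node's HugePages_Surp) and later checks each node against it. A hedged reconstruction of that accounting, assuming only standard sysfs paths; variable names are illustrative:

    expected=512
    total=0
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        node=${node_dir##*node}
        # Per-node surplus, read straight from the node's meminfo
        # ("Node N HugePages_Surp: X" -> field 3 is the key, field 4 the value).
        surp=$(awk '$3 == "HugePages_Surp:" {print $4}' "$node_dir/meminfo")
        got=$(( expected + surp ))
        echo "node${node}=${got} expecting ${expected}"
        total=$(( total + got ))
    done
    # The per-node targets should add back up to the global pool.
    global=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)
    (( total == global )) && echo "per-node totals match ${global} global pages"

With surplus 0 on both nodes, this yields the "node0=512 expecting 512" / "node1=512 expecting 512" lines the test prints below.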
00:04:51.581 17:51:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.581 17:51:40 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.581 17:51:40 -- setup/common.sh@32 -- # continue 00:04:51.581 17:51:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.581 17:51:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.581 17:51:40 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.581 17:51:40 -- setup/common.sh@32 -- # continue 00:04:51.581 17:51:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.581 17:51:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.581 17:51:40 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.581 17:51:40 -- setup/common.sh@32 -- # continue 00:04:51.581 17:51:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.581 17:51:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.581 17:51:40 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.581 17:51:40 -- setup/common.sh@32 -- # continue 00:04:51.581 17:51:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.581 17:51:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.581 17:51:40 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.581 17:51:40 -- setup/common.sh@32 -- # continue 00:04:51.581 17:51:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.581 17:51:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.581 17:51:40 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.581 17:51:40 -- setup/common.sh@32 -- # continue 00:04:51.581 17:51:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.581 17:51:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.581 17:51:40 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.581 17:51:40 -- setup/common.sh@32 -- # continue 00:04:51.581 17:51:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.581 17:51:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.581 17:51:40 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.581 17:51:40 -- setup/common.sh@32 -- # continue 00:04:51.581 17:51:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.581 17:51:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.581 17:51:40 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.581 17:51:40 -- setup/common.sh@32 -- # continue 00:04:51.581 17:51:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.581 17:51:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.581 17:51:40 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.581 17:51:40 -- setup/common.sh@32 -- # continue 00:04:51.581 17:51:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.581 17:51:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.581 17:51:40 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.581 17:51:40 -- setup/common.sh@32 -- # continue 00:04:51.581 17:51:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.581 17:51:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.581 17:51:40 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.581 17:51:40 -- setup/common.sh@32 -- # continue 00:04:51.581 17:51:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.581 17:51:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.581 17:51:40 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.581 17:51:40 -- 
setup/common.sh@32 -- # continue 00:04:51.581 17:51:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.581 17:51:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.581 17:51:40 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.581 17:51:40 -- setup/common.sh@32 -- # continue 00:04:51.581 17:51:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.581 17:51:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.581 17:51:40 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.581 17:51:40 -- setup/common.sh@32 -- # continue 00:04:51.581 17:51:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.581 17:51:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.581 17:51:40 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.581 17:51:40 -- setup/common.sh@32 -- # continue 00:04:51.581 17:51:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.581 17:51:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.581 17:51:40 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.581 17:51:40 -- setup/common.sh@32 -- # continue 00:04:51.581 17:51:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.581 17:51:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.581 17:51:40 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.581 17:51:40 -- setup/common.sh@32 -- # continue 00:04:51.581 17:51:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.581 17:51:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.581 17:51:40 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.581 17:51:40 -- setup/common.sh@32 -- # continue 00:04:51.581 17:51:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.581 17:51:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.581 17:51:40 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.581 17:51:40 -- setup/common.sh@32 -- # continue 00:04:51.581 17:51:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.581 17:51:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.581 17:51:40 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.581 17:51:40 -- setup/common.sh@32 -- # continue 00:04:51.581 17:51:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.581 17:51:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.581 17:51:40 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.581 17:51:40 -- setup/common.sh@32 -- # continue 00:04:51.581 17:51:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.581 17:51:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.581 17:51:40 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.581 17:51:40 -- setup/common.sh@32 -- # continue 00:04:51.581 17:51:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.581 17:51:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.581 17:51:40 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.581 17:51:40 -- setup/common.sh@32 -- # continue 00:04:51.581 17:51:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.581 17:51:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.581 17:51:40 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.581 17:51:40 -- setup/common.sh@32 -- # continue 00:04:51.581 17:51:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.581 17:51:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.581 17:51:40 -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.581 17:51:40 -- setup/common.sh@32 -- # continue 00:04:51.581 17:51:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.581 17:51:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.581 17:51:40 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.581 17:51:40 -- setup/common.sh@32 -- # continue 00:04:51.581 17:51:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.581 17:51:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.581 17:51:40 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.581 17:51:40 -- setup/common.sh@32 -- # continue 00:04:51.581 17:51:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.582 17:51:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.582 17:51:40 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.582 17:51:40 -- setup/common.sh@32 -- # continue 00:04:51.582 17:51:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.582 17:51:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.582 17:51:40 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.582 17:51:40 -- setup/common.sh@32 -- # continue 00:04:51.582 17:51:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.582 17:51:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.582 17:51:40 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.582 17:51:40 -- setup/common.sh@32 -- # continue 00:04:51.582 17:51:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.582 17:51:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.582 17:51:40 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.582 17:51:40 -- setup/common.sh@32 -- # continue 00:04:51.582 17:51:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.582 17:51:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.582 17:51:40 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.582 17:51:40 -- setup/common.sh@33 -- # echo 0 00:04:51.582 17:51:40 -- setup/common.sh@33 -- # return 0 00:04:51.582 17:51:40 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:51.582 17:51:40 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:51.582 17:51:40 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:51.582 17:51:40 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:51.582 17:51:40 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:51.582 node0=512 expecting 512 00:04:51.582 17:51:40 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:51.582 17:51:40 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:51.582 17:51:40 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:51.582 17:51:40 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:51.582 node1=512 expecting 512 00:04:51.582 17:51:40 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:51.582 00:04:51.582 real 0m1.605s 00:04:51.582 user 0m0.717s 00:04:51.582 sys 0m0.866s 00:04:51.582 17:51:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:51.582 17:51:40 -- common/autotest_common.sh@10 -- # set +x 00:04:51.582 ************************************ 00:04:51.582 END TEST per_node_1G_alloc 00:04:51.582 ************************************ 00:04:51.839 17:51:40 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:51.839 
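[Note] The even_2G_alloc test that starts here (with NRHUGE=1024 and HUGE_EVEN_ALLOC=yes) derives its targets from the requested pool size: 2097152 kB divided by the default hugepage size gives 1024 pages, split evenly across the online NUMA nodes. A minimal sketch of that arithmetic, assuming a 2048 kB Hugepagesize as reported above; names are illustrative:

    size_kb=2097152                      # requested pool size in kB (2 GiB)
    hp_kb=$(awk '$1 == "Hugepagesize:" {print $2}' /proc/meminfo)
    nr_hugepages=$(( size_kb / hp_kb ))  # 2097152 / 2048 = 1024 pages
    # With no explicit node list, split the pool evenly across online nodes.
    no_nodes=$(ls -d /sys/devices/system/node/node[0-9]* | wc -l)
    per_node=$(( nr_hugepages / no_nodes ))   # 1024 / 2 = 512 on this box
    echo "NRHUGE=${nr_hugepages}: ${per_node} pages on each of ${no_nodes} nodes"

The trace below reaches the same result (nodes_test[0]=512, nodes_test[1]=512) via a per-node countdown loop rather than a single division.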
17:51:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:51.839 17:51:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:51.839 17:51:40 -- common/autotest_common.sh@10 -- # set +x 00:04:51.839 ************************************ 00:04:51.839 START TEST even_2G_alloc 00:04:51.839 ************************************ 00:04:51.839 17:51:40 -- common/autotest_common.sh@1111 -- # even_2G_alloc 00:04:51.839 17:51:40 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:51.839 17:51:40 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:51.839 17:51:40 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:51.839 17:51:40 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:51.839 17:51:40 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:51.839 17:51:40 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:51.839 17:51:40 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:51.839 17:51:40 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:51.839 17:51:40 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:51.839 17:51:40 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:51.839 17:51:40 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:51.839 17:51:40 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:51.839 17:51:40 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:51.839 17:51:40 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:51.839 17:51:40 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:51.839 17:51:40 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:51.839 17:51:40 -- setup/hugepages.sh@83 -- # : 512 00:04:51.839 17:51:40 -- setup/hugepages.sh@84 -- # : 1 00:04:51.839 17:51:40 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:51.839 17:51:40 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:51.839 17:51:40 -- setup/hugepages.sh@83 -- # : 0 00:04:51.839 17:51:40 -- setup/hugepages.sh@84 -- # : 0 00:04:51.839 17:51:40 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:51.839 17:51:40 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:51.839 17:51:40 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:51.839 17:51:40 -- setup/hugepages.sh@153 -- # setup output 00:04:51.839 17:51:40 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:51.839 17:51:40 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:53.216 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:53.216 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:53.216 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:53.216 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:53.216 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:53.216 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:53.216 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:53.216 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:53.216 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:53.216 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:53.216 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:53.216 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:53.216 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:53.216 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:53.216 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:53.216 0000:80:04.1 (8086 0e21): 
Already using the vfio-pci driver 00:04:53.216 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:53.216 17:51:41 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:53.216 17:51:41 -- setup/hugepages.sh@89 -- # local node 00:04:53.216 17:51:41 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:53.216 17:51:41 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:53.216 17:51:41 -- setup/hugepages.sh@92 -- # local surp 00:04:53.216 17:51:41 -- setup/hugepages.sh@93 -- # local resv 00:04:53.216 17:51:41 -- setup/hugepages.sh@94 -- # local anon 00:04:53.216 17:51:41 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:53.216 17:51:41 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:53.216 17:51:41 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:53.216 17:51:41 -- setup/common.sh@18 -- # local node= 00:04:53.216 17:51:41 -- setup/common.sh@19 -- # local var val 00:04:53.216 17:51:41 -- setup/common.sh@20 -- # local mem_f mem 00:04:53.216 17:51:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:53.216 17:51:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:53.216 17:51:41 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:53.216 17:51:41 -- setup/common.sh@28 -- # mapfile -t mem 00:04:53.216 17:51:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:53.216 17:51:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.216 17:51:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.216 17:51:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026668 kB' 'MemFree: 24188328 kB' 'MemAvailable: 28399344 kB' 'Buffers: 2696 kB' 'Cached: 15004412 kB' 'SwapCached: 0 kB' 'Active: 11769508 kB' 'Inactive: 3691820 kB' 'Active(anon): 11144548 kB' 'Inactive(anon): 0 kB' 'Active(file): 624960 kB' 'Inactive(file): 3691820 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 457436 kB' 'Mapped: 209420 kB' 'Shmem: 10690328 kB' 'KReclaimable: 440592 kB' 'Slab: 824832 kB' 'SReclaimable: 440592 kB' 'SUnreclaim: 384240 kB' 'KernelStack: 12576 kB' 'PageTables: 8552 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353360 kB' 'Committed_AS: 12321608 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196976 kB' 'VmallocChunk: 0 kB' 'Percpu: 55872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1883740 kB' 'DirectMap2M: 19007488 kB' 'DirectMap1G: 31457280 kB' 00:04:53.216 17:51:41 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.216 17:51:41 -- setup/common.sh@32 -- # continue 00:04:53.216 17:51:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.216 17:51:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.216 17:51:41 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.216 17:51:41 -- setup/common.sh@32 -- # continue 00:04:53.216 17:51:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.216 17:51:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.216 17:51:41 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.216 17:51:41 -- setup/common.sh@32 -- # continue 
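[Note] verify_nr_hugepages first gates on transparent hugepages: the glob test above checks whether the bracketed token in the THP sysfs file is anything other than [never], and only then samples AnonHugePages (0 kB on this host). A hedged equivalent of that check:

    thp=/sys/kernel/mm/transparent_hugepage/enabled
    if [[ -r $thp ]]; then
        # The active mode is the bracketed token, e.g. "always [madvise] never".
        mode=$(grep -o '\[[a-z]*\]' "$thp" | tr -d '[]')
        if [[ $mode != never ]]; then
            # With THP enabled, anonymous huge mappings can skew the hugepage
            # accounting, so sample AnonHugePages the same way the trace does.
            anon=$(awk '$1 == "AnonHugePages:" {print $2}' /proc/meminfo)
            echo "AnonHugePages: ${anon} kB (THP mode: ${mode})"
        fi
    fi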
00:04:53.216 17:51:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.216 17:51:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.216 17:51:41 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.216 17:51:41 -- setup/common.sh@32 -- # continue 00:04:53.216 17:51:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.216 17:51:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.216 17:51:41 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.216 17:51:41 -- setup/common.sh@32 -- # continue 00:04:53.216 17:51:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.216 17:51:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.216 17:51:41 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.216 17:51:41 -- setup/common.sh@32 -- # continue 00:04:53.216 17:51:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.216 17:51:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.216 17:51:41 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.216 17:51:41 -- setup/common.sh@32 -- # continue 00:04:53.216 17:51:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.216 17:51:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.216 17:51:41 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.216 17:51:41 -- setup/common.sh@32 -- # continue 00:04:53.216 17:51:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.216 17:51:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.216 17:51:41 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.216 17:51:41 -- setup/common.sh@32 -- # continue 00:04:53.216 17:51:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.216 17:51:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.216 17:51:41 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.216 17:51:41 -- setup/common.sh@32 -- # continue 00:04:53.216 17:51:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.216 17:51:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.216 17:51:41 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.216 17:51:41 -- setup/common.sh@32 -- # continue 00:04:53.216 17:51:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.216 17:51:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.216 17:51:41 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.216 17:51:41 -- setup/common.sh@32 -- # continue 00:04:53.216 17:51:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.216 17:51:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.216 17:51:41 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.216 17:51:41 -- setup/common.sh@32 -- # continue 00:04:53.216 17:51:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.216 17:51:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.216 17:51:41 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.216 17:51:41 -- setup/common.sh@32 -- # continue 00:04:53.216 17:51:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.216 17:51:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.216 17:51:41 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.216 17:51:41 -- setup/common.sh@32 -- # continue 00:04:53.216 17:51:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.216 17:51:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.216 17:51:41 -- setup/common.sh@32 -- # [[ SwapFree == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.216 17:51:41 -- setup/common.sh@32 -- # continue 00:04:53.216 17:51:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.216 17:51:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.216 17:51:41 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.216 17:51:41 -- setup/common.sh@32 -- # continue 00:04:53.216 17:51:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.216 17:51:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.216 17:51:41 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.216 17:51:41 -- setup/common.sh@32 -- # continue 00:04:53.216 17:51:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.216 17:51:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.216 17:51:41 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.216 17:51:41 -- setup/common.sh@32 -- # continue 00:04:53.216 17:51:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.216 17:51:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.216 17:51:41 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.216 17:51:41 -- setup/common.sh@32 -- # continue 00:04:53.216 17:51:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.216 17:51:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.216 17:51:41 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.216 17:51:41 -- setup/common.sh@32 -- # continue 00:04:53.216 17:51:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.216 17:51:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.216 17:51:41 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.216 17:51:41 -- setup/common.sh@32 -- # continue 00:04:53.216 17:51:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.216 17:51:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.216 17:51:41 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.216 17:51:41 -- setup/common.sh@32 -- # continue 00:04:53.216 17:51:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.217 17:51:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.217 17:51:41 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.217 17:51:41 -- setup/common.sh@32 -- # continue 00:04:53.217 17:51:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.217 17:51:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.217 17:51:41 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.217 17:51:41 -- setup/common.sh@32 -- # continue 00:04:53.217 17:51:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.217 17:51:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.217 17:51:41 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.217 17:51:41 -- setup/common.sh@32 -- # continue 00:04:53.217 17:51:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.217 17:51:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.217 17:51:41 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.217 17:51:41 -- setup/common.sh@32 -- # continue 00:04:53.217 17:51:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.217 17:51:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.217 17:51:41 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.217 17:51:41 -- setup/common.sh@32 -- # continue 00:04:53.217 17:51:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.217 17:51:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.217 
17:51:41 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.217 17:51:41 -- setup/common.sh@32 -- # continue 00:04:53.217 17:51:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.217 17:51:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.217 17:51:41 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.217 17:51:41 -- setup/common.sh@32 -- # continue 00:04:53.217 17:51:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.217 17:51:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.217 17:51:41 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.217 17:51:41 -- setup/common.sh@32 -- # continue 00:04:53.217 17:51:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.217 17:51:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.217 17:51:41 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.217 17:51:41 -- setup/common.sh@32 -- # continue 00:04:53.217 17:51:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.217 17:51:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.217 17:51:41 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.217 17:51:41 -- setup/common.sh@32 -- # continue 00:04:53.217 17:51:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.217 17:51:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.217 17:51:41 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.217 17:51:41 -- setup/common.sh@32 -- # continue 00:04:53.217 17:51:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.217 17:51:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.217 17:51:41 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.217 17:51:41 -- setup/common.sh@32 -- # continue 00:04:53.217 17:51:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.217 17:51:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.217 17:51:41 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.217 17:51:41 -- setup/common.sh@32 -- # continue 00:04:53.217 17:51:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.217 17:51:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.217 17:51:41 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.217 17:51:41 -- setup/common.sh@32 -- # continue 00:04:53.217 17:51:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.217 17:51:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.217 17:51:41 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.217 17:51:41 -- setup/common.sh@32 -- # continue 00:04:53.217 17:51:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.217 17:51:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.217 17:51:41 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.217 17:51:41 -- setup/common.sh@32 -- # continue 00:04:53.217 17:51:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.217 17:51:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.217 17:51:41 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.217 17:51:41 -- setup/common.sh@32 -- # continue 00:04:53.217 17:51:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.217 17:51:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.217 17:51:41 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.217 17:51:41 -- setup/common.sh@33 -- # echo 0 00:04:53.217 17:51:41 -- setup/common.sh@33 -- # 
return 0 00:04:53.217 17:51:41 -- setup/hugepages.sh@97 -- # anon=0 00:04:53.217 17:51:41 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:53.217 17:51:41 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:53.217 17:51:41 -- setup/common.sh@18 -- # local node= 00:04:53.217 17:51:41 -- setup/common.sh@19 -- # local var val 00:04:53.217 17:51:41 -- setup/common.sh@20 -- # local mem_f mem 00:04:53.217 17:51:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:53.217 17:51:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:53.217 17:51:41 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:53.217 17:51:41 -- setup/common.sh@28 -- # mapfile -t mem 00:04:53.217 17:51:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:53.217 17:51:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.217 17:51:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.217 17:51:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026668 kB' 'MemFree: 24190388 kB' 'MemAvailable: 28401404 kB' 'Buffers: 2696 kB' 'Cached: 15004416 kB' 'SwapCached: 0 kB' 'Active: 11770320 kB' 'Inactive: 3691820 kB' 'Active(anon): 11145360 kB' 'Inactive(anon): 0 kB' 'Active(file): 624960 kB' 'Inactive(file): 3691820 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 458400 kB' 'Mapped: 209508 kB' 'Shmem: 10690332 kB' 'KReclaimable: 440592 kB' 'Slab: 824868 kB' 'SReclaimable: 440592 kB' 'SUnreclaim: 384276 kB' 'KernelStack: 12592 kB' 'PageTables: 8628 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353360 kB' 'Committed_AS: 12323884 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196960 kB' 'VmallocChunk: 0 kB' 'Percpu: 55872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1883740 kB' 'DirectMap2M: 19007488 kB' 'DirectMap1G: 31457280 kB'
[xtrace condensed: setup/common.sh@31-32 walks every field of the dump above, one [[ var == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] test and a continue per non-matching key]
00:04:53.218 17:51:42 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.218 17:51:42 -- setup/common.sh@33 -- # echo 0 00:04:53.218 17:51:42 -- setup/common.sh@33 -- # return 0 00:04:53.218 17:51:42 -- setup/hugepages.sh@99 -- # surp=0
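What the trace above amounts to: get_meminfo dumps /proc/meminfo once with printf, then re-reads it field by field with IFS=': ' until the requested key (HugePages_Surp here) matches, and echoes its value. A minimal standalone sketch of that lookup, assuming plain /proc/meminfo and a hypothetical helper name (this is not the harness's setup/common.sh itself; the node-file case appears further down):

  # Sketch: scan /proc/meminfo for one key and print its value.
  get_meminfo_sketch() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do      # "MemTotal: 44026668 kB" -> var=MemTotal val=44026668
          if [[ $var == "$get" ]]; then
              echo "$val"
              return 0
          fi
      done < /proc/meminfo
      return 1
  }
  # e.g. get_meminfo_sketch HugePages_Surp   -> prints 0 on the box traced above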
00:04:53.218 17:51:42 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:53.218 17:51:42 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:53.218 17:51:42 -- setup/common.sh@18 -- # local node= 00:04:53.218 17:51:42 -- setup/common.sh@19 -- # local var val 00:04:53.218 17:51:42 -- setup/common.sh@20 -- # local mem_f mem 00:04:53.218 17:51:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:53.218 17:51:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:53.218 17:51:42 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:53.219 17:51:42 -- setup/common.sh@28 -- # mapfile -t mem 00:04:53.219 17:51:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:53.219 17:51:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.219 17:51:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.219 17:51:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026668 kB' 'MemFree: 24191088 kB' 'MemAvailable: 28402104 kB' 'Buffers: 2696 kB' 'Cached: 15004420 kB' 'SwapCached: 0 kB' 'Active: 11769140 kB' 'Inactive: 3691820 kB' 'Active(anon): 11144180 kB' 'Inactive(anon): 0 kB' 'Active(file): 624960 kB' 'Inactive(file): 3691820 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 457212 kB' 'Mapped: 209496 kB' 'Shmem: 10690336 kB' 'KReclaimable: 440592 kB' 'Slab: 824868 kB' 'SReclaimable: 440592 kB' 'SUnreclaim: 384276 kB' 'KernelStack: 12688 kB' 'PageTables: 8984 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353360 kB' 'Committed_AS: 12322676 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197040 kB' 'VmallocChunk: 0 kB' 'Percpu: 55872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1883740 kB' 'DirectMap2M: 19007488 kB' 'DirectMap1G: 31457280 kB'
[xtrace condensed: same per-key scan as above, this time against \H\u\g\e\P\a\g\e\s\_\R\s\v\d]
00:04:53.220 17:51:42 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.220 17:51:42 -- setup/common.sh@33 -- # echo 0 00:04:53.220 17:51:42 -- setup/common.sh@33 -- # return 0 00:04:53.220 17:51:42 -- setup/hugepages.sh@100 -- # resv=0 00:04:53.220 17:51:42 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:53.220 nr_hugepages=1024 00:04:53.220 17:51:42 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:53.220 resv_hugepages=0 00:04:53.220 17:51:42 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:53.220 surplus_hugepages=0 00:04:53.220 17:51:42 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:53.220 anon_hugepages=0 00:04:53.220 17:51:42 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:53.220 17:51:42 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
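The lines above record the derived values (nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0) and the arithmetic guards at hugepages.sh@107/@109. A sketch of the same consistency check, reusing the hypothetical get_meminfo_sketch helper from the earlier sketch (variable names mirror the trace, not the harness source):

  # Sketch of the accounting check: the configured pool must be fully
  # explained by requested + surplus + reserved pages.
  nr_hugepages=1024
  surp=$(get_meminfo_sketch HugePages_Surp)     # 0 in the dump above
  resv=$(get_meminfo_sketch HugePages_Rsvd)     # 0 in the dump above
  total=$(get_meminfo_sketch HugePages_Total)   # 1024 in the dump above
  (( total == nr_hugepages + surp + resv )) && echo "hugepage pool fully accounted for"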
00:04:53.220 17:51:42 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:53.220 17:51:42 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:53.220 17:51:42 -- setup/common.sh@18 -- # local node= 00:04:53.220 17:51:42 -- setup/common.sh@19 -- # local var val 00:04:53.220 17:51:42 -- setup/common.sh@20 -- # local mem_f mem 00:04:53.220 17:51:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:53.220 17:51:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:53.220 17:51:42 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:53.220 17:51:42 -- setup/common.sh@28 -- # mapfile -t mem 00:04:53.220 17:51:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:53.220 17:51:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.220 17:51:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.220 17:51:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026668 kB' 'MemFree: 24191340 kB' 'MemAvailable: 28402348 kB' 'Buffers: 2696 kB' 'Cached: 15004440 kB' 'SwapCached: 0 kB' 'Active: 11770736 kB' 'Inactive: 3691820 kB' 'Active(anon): 11145776 kB' 'Inactive(anon): 0 kB' 'Active(file): 624960 kB' 'Inactive(file): 3691820 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 458816 kB' 'Mapped: 209496 kB' 'Shmem: 10690356 kB' 'KReclaimable: 440584 kB' 'Slab: 824860 kB' 'SReclaimable: 440584 kB' 'SUnreclaim: 384276 kB' 'KernelStack: 12848 kB' 'PageTables: 9500 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353360 kB' 'Committed_AS: 12324064 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197088 kB' 'VmallocChunk: 0 kB' 'Percpu: 55872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1883740 kB' 'DirectMap2M: 19007488 kB' 'DirectMap1G: 31457280 kB'
[xtrace condensed: same per-key scan, this time against \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l]
00:04:53.223 17:51:42 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.223 17:51:42 -- setup/common.sh@33 -- # echo 1024 00:04:53.223 17:51:42 -- setup/common.sh@33 -- # return 0 00:04:53.223 17:51:42 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:53.223 17:51:42 -- setup/hugepages.sh@112 -- # get_nodes 00:04:53.223 17:51:42 -- setup/hugepages.sh@27 -- # local node 00:04:53.223 17:51:42 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:53.223 17:51:42 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:53.223 17:51:42 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:53.223 17:51:42 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:53.223 17:51:42 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:53.223 17:51:42 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:53.223 17:51:42 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:53.223 17:51:42 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
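From here the same lookup runs per NUMA node: with a node argument, mem_f switches to /sys/devices/system/node/node<N>/meminfo, whose lines carry a "Node <n> " prefix that the harness strips with the extglob expansion mem=("${mem[@]#Node +([0-9]) }") visible in the trace. A sketch of that per-node variant (hypothetical name again), which instead reads the prefix off as two extra fields:

  # Sketch: node meminfo lines look like "Node 0 HugePages_Surp: 0",
  # so consume "Node" and the node id before the key.
  get_node_meminfo_sketch() {
      local node=$1 get=$2 n1 n2 var val _
      while IFS=': ' read -r n1 n2 var val _; do
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done < "/sys/devices/system/node/node${node}/meminfo"
      return 1
  }
  # e.g. get_node_meminfo_sketch 0 HugePages_Surp   -> 0 in the node0 dump below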
00:04:53.223 17:51:42 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:53.223 17:51:42 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:53.223 17:51:42 -- setup/common.sh@18 -- # local node=0 00:04:53.223 17:51:42 -- setup/common.sh@19 -- # local var val 00:04:53.223 17:51:42 -- setup/common.sh@20 -- # local mem_f mem 00:04:53.223 17:51:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:53.223 17:51:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:53.223 17:51:42 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:53.223 17:51:42 -- setup/common.sh@28 -- # mapfile -t mem 00:04:53.223 17:51:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:53.223 17:51:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.223 17:51:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.223 17:51:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24572356 kB' 'MemFree: 18662256 kB' 'MemUsed: 5910100 kB' 'SwapCached: 0 kB' 'Active: 3775728 kB' 'Inactive: 167944 kB' 'Active(anon): 3444784 kB' 'Inactive(anon): 0 kB' 'Active(file): 330944 kB' 'Inactive(file): 167944 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3711148 kB' 'Mapped: 165492 kB' 'AnonPages: 235736 kB' 'Shmem: 3212260 kB' 'KernelStack: 7848 kB' 'PageTables: 5624 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 294568 kB' 'Slab: 484696 kB' 'SReclaimable: 294568 kB' 'SUnreclaim: 190128 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace condensed: per-key scan of the node0 dump above against \H\u\g\e\P\a\g\e\s\_\S\u\r\p]
00:04:53.224 17:51:42 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.224 17:51:42 -- setup/common.sh@33 -- # echo 0 00:04:53.224 17:51:42 -- setup/common.sh@33 -- # return 0 00:04:53.224 17:51:42 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:53.224 17:51:42 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:53.224 17:51:42 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
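The node-0 pass above added 0 surplus pages to nodes_test[0]. A condensed sketch of the whole hugepages.sh@115-@117 loop, with the initial 512/512 split assumed up front (in the harness, nodes_test is populated before this point in the script):

  # Sketch of the per-node bookkeeping: fold reserved and surplus pages
  # into each node's expected count.
  nodes_test=(512 512)   # assumed initial per-node allocation, per the trace
  resv=0
  for node in "${!nodes_test[@]}"; do
      (( nodes_test[node] += resv ))
      (( nodes_test[node] += $(get_node_meminfo_sketch "$node" HugePages_Surp) ))
  done
  echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"   # node0=512 node1=512 here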
00:04:53.224 17:51:42 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:53.224 17:51:42 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:53.224 17:51:42 -- setup/common.sh@18 -- # local node=1 00:04:53.224 17:51:42 -- setup/common.sh@19 -- # local var val 00:04:53.224 17:51:42 -- setup/common.sh@20 -- # local mem_f mem 00:04:53.224 17:51:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:53.224 17:51:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:53.224 17:51:42 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:53.224 17:51:42 -- setup/common.sh@28 -- # mapfile -t mem 00:04:53.224 17:51:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:53.224 17:51:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.224 17:51:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.224 17:51:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19454312 kB' 'MemFree: 5528784 kB' 'MemUsed: 13925528 kB' 'SwapCached: 0 kB' 'Active: 7995048 kB' 'Inactive: 3523876 kB' 'Active(anon): 7701032 kB' 'Inactive(anon): 0 kB' 'Active(file): 294016 kB' 'Inactive(file): 3523876 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11296016 kB' 'Mapped: 43928 kB' 'AnonPages: 222944 kB' 'Shmem: 7478124 kB' 'KernelStack: 5144 kB' 'PageTables: 4712 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 146016 kB' 'Slab: 340132 kB' 'SReclaimable: 146016 kB' 'SUnreclaim: 194116 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace condensed: per-key scan of the node1 dump above against \H\u\g\e\P\a\g\e\s\_\S\u\r\p; the log is cut off mid-scan below]
00:04:53.225 17:51:42 -- setup/common.sh@32 -- # [[ HugePages_Total
00:04:53.225 17:51:42 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:53.225 17:51:42 -- setup/common.sh@33 -- # echo 0
00:04:53.225 17:51:42 -- setup/common.sh@33 -- # return 0
00:04:53.225 17:51:42 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:53.225 17:51:42 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:53.225 17:51:42 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:53.225 17:51:42 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:53.225 17:51:42 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:53.225 node0=512 expecting 512
00:04:53.225 17:51:42 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:53.225 17:51:42 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:53.225 17:51:42 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:53.225 17:51:42 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:04:53.225 node1=512 expecting 512
00:04:53.225 17:51:42 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:04:53.225
00:04:53.225 real 0m1.485s
00:04:53.225 user 0m0.625s
00:04:53.225 sys 0m0.832s
00:04:53.225 17:51:42 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:04:53.225 17:51:42 -- common/autotest_common.sh@10 -- # set +x
00:04:53.225 ************************************
00:04:53.225 END TEST even_2G_alloc
00:04:53.225 ************************************
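All four get_meminfo passes in this trace (HugePages_Surp above, then AnonHugePages, HugePages_Surp and HugePages_Rsvd during the odd_alloc verification below) use the same parsing idiom from setup/common.sh: mapfile the meminfo file into an array, strip any "Node <id> " prefix, then split each "Field: value kB" line with IFS=': ' and print the value on the first field-name match. Here is a minimal standalone sketch of that idiom; the function name and overall control flow are illustrative assumptions, only the parsing itself is taken from the trace.

#!/usr/bin/env bash
shopt -s extglob  # needed for the +([0-9]) pattern used below

# get_meminfo_sketch: illustrative reimplementation of the lookup idiom the
# xtrace shows at setup/common.sh@16-33. Not the actual helper.
get_meminfo_sketch() {
  local get=$1 node=${2:-}
  local var val _ line
  local mem_f=/proc/meminfo
  local -a mem
  # A per-node query reads that node's meminfo file instead of the global one.
  if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo
  fi
  mapfile -t mem < "$mem_f"
  # Per-node files prefix every line with "Node <id> "; strip that prefix.
  mem=("${mem[@]#Node +([0-9]) }")
  for line in "${mem[@]}"; do
    # Split "Field:   value kB" into its parts, exactly as the trace does.
    IFS=': ' read -r var val _ <<< "$line"
    if [[ $var == "$get" ]]; then
      echo "$val"
      return 0
    fi
  done
  return 1
}

get_meminfo_sketch HugePages_Surp  # prints the surplus hugepage count, e.g. 0

Matching field names in pure bash rather than shelling out keeps the helper dependency-free and lets the same code serve both /proc/meminfo and the per-node files.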
00:04:53.494 17:51:42 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:04:53.494 17:51:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:53.494 17:51:42 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:53.494 17:51:42 -- common/autotest_common.sh@10 -- # set +x
00:04:53.494 ************************************
00:04:53.494 START TEST odd_alloc
00:04:53.494 ************************************
00:04:53.494 17:51:42 -- common/autotest_common.sh@1111 -- # odd_alloc
00:04:53.494 17:51:42 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:04:53.494 17:51:42 -- setup/hugepages.sh@49 -- # local size=2098176
00:04:53.494 17:51:42 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:53.494 17:51:42 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:53.494 17:51:42 -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:04:53.494 17:51:42 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:53.494 17:51:42 -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:53.494 17:51:42 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:53.494 17:51:42 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:04:53.494 17:51:42 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:53.494 17:51:42 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:53.494 17:51:42 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:53.494 17:51:42 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:53.494 17:51:42 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:53.494 17:51:42 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:53.494 17:51:42 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:04:53.494 17:51:42 -- setup/hugepages.sh@83 -- # : 513
00:04:53.494 17:51:42 -- setup/hugepages.sh@84 -- # : 1
00:04:53.494 17:51:42 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:53.494 17:51:42 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:04:53.494 17:51:42 -- setup/hugepages.sh@83 -- # : 0
00:04:53.494 17:51:42 -- setup/hugepages.sh@84 -- # : 0
00:04:53.494 17:51:42 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:53.494 17:51:42 -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:04:53.494 17:51:42 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:04:53.494 17:51:42 -- setup/hugepages.sh@160 -- # setup output
00:04:53.494 17:51:42 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:53.494 17:51:42 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:54.903 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:54.903 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:54.903 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:54.903 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:54.903 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:54.903 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:54.903 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:54.903 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:54.903 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:54.903 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:54.904 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:54.904 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:54.904 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:54.904 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:54.904 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:54.904 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:54.904 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
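The hugepages.sh@81-84 loop above distributes the requested page count across the two NUMA nodes. The request of 2098176 kB at the default 2048 kB hugepage size works out to the odd count of 1025 pages (HUGEMEM=2049 MB), and the traced assignments (node1 gets 512; ": 513" and ": 1" as side effects; node0 gets 513) are consistent with an integer divide-and-carry split where the odd page lands on node0. The following sketch reproduces exactly those traced intermediate values; the arithmetic is inferred, not copied from hugepages.sh.

#!/usr/bin/env bash
# split_hugepages: inferred reconstruction of the divide-and-carry split
# traced at hugepages.sh@81-84. Variable names mirror the trace; the
# arithmetic is an assumption that reproduces the traced values.
split_hugepages() {
  local _nr_hugepages=$1 _no_nodes=$2
  local -a nodes_test
  while (( _no_nodes > 0 )); do
    # Give this node an even share of the pages still unassigned ...
    nodes_test[_no_nodes - 1]=$(( _nr_hugepages / _no_nodes ))
    # ... and carry the remainder forward (xtrace shows these as ": 513", ": 1").
    : $(( _nr_hugepages -= nodes_test[_no_nodes - 1] ))
    : $(( _no_nodes -= 1 ))
  done
  declare -p nodes_test
}

split_hugepages 1025 2  # -> declare -a nodes_test=([0]="513" [1]="512")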
00:04:54.904 17:51:43 -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:04:54.904 17:51:43 -- setup/hugepages.sh@89 -- # local node
00:04:54.904 17:51:43 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:54.904 17:51:43 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:54.904 17:51:43 -- setup/hugepages.sh@92 -- # local surp
00:04:54.904 17:51:43 -- setup/hugepages.sh@93 -- # local resv
00:04:54.904 17:51:43 -- setup/hugepages.sh@94 -- # local anon
00:04:54.904 17:51:43 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:54.904 17:51:43 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:54.904 17:51:43 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:54.904 17:51:43 -- setup/common.sh@18 -- # local node=
00:04:54.904 17:51:43 -- setup/common.sh@19 -- # local var val
00:04:54.904 17:51:43 -- setup/common.sh@20 -- # local mem_f mem
00:04:54.904 17:51:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:54.904 17:51:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:54.904 17:51:43 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:54.904 17:51:43 -- setup/common.sh@28 -- # mapfile -t mem
00:04:54.904 17:51:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:54.904 17:51:43 -- setup/common.sh@31 -- # IFS=': '
00:04:54.904 17:51:43 -- setup/common.sh@31 -- # read -r var val _
00:04:54.904 17:51:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026668 kB' 'MemFree: 24166876 kB' 'MemAvailable: 28377884 kB' 'Buffers: 2696 kB' 'Cached: 15004504 kB' 'SwapCached: 0 kB' 'Active: 11769740 kB' 'Inactive: 3691820 kB' 'Active(anon): 11144780 kB' 'Inactive(anon): 0 kB' 'Active(file): 624960 kB' 'Inactive(file): 3691820 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 457572 kB' 'Mapped: 209480 kB' 'Shmem: 10690420 kB' 'KReclaimable: 440584 kB' 'Slab: 824524 kB' 'SReclaimable: 440584 kB' 'SUnreclaim: 383940 kB' 'KernelStack: 12608 kB' 'PageTables: 8664 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29352336 kB' 'Committed_AS: 12321700 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197056 kB' 'VmallocChunk: 0 kB' 'Percpu: 55872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1883740 kB' 'DirectMap2M: 19007488 kB' 'DirectMap1G: 31457280 kB'
[xtrace elided: per-field loop at setup/common.sh@31-32 (IFS=': ', read -r var val _, compare, continue) repeated for every field of the snapshot above against AnonHugePages until the match below]
00:04:54.905 17:51:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:54.905 17:51:43 -- setup/common.sh@33 -- # echo 0
00:04:54.905 17:51:43 -- setup/common.sh@33 -- # return 0
00:04:54.905 17:51:43 -- setup/hugepages.sh@97 -- # anon=0
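The anon check that just completed is conditional: hugepages.sh@96 only counts AnonHugePages when transparent hugepages are not disabled, which is what comparing the bracketed "always [madvise] never" value against *[never]* expresses. A small sketch of that logic; reading /sys/kernel/mm/transparent_hugepage/enabled is an assumption based on the bracketed value visible in the trace.

#!/usr/bin/env bash
# Sketch of the conditional anon accounting at hugepages.sh@96-97: only count
# AnonHugePages when THP is not set to "never". The sysfs path is assumed;
# the trace only shows "always [madvise] never" being tested.
anon=0
thp=$(</sys/kernel/mm/transparent_hugepage/enabled)  # e.g. "always [madvise] never"
if [[ $thp != *"[never]"* ]]; then
  anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
fi
echo "anon=$anon"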
00:04:54.905 17:51:43 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:54.905 17:51:43 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:54.905 17:51:43 -- setup/common.sh@18 -- # local node=
00:04:54.905 17:51:43 -- setup/common.sh@19 -- # local var val
00:04:54.905 17:51:43 -- setup/common.sh@20 -- # local mem_f mem
00:04:54.905 17:51:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:54.905 17:51:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:54.905 17:51:43 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:54.905 17:51:43 -- setup/common.sh@28 -- # mapfile -t mem
00:04:54.905 17:51:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:54.905 17:51:43 -- setup/common.sh@31 -- # IFS=': '
00:04:54.905 17:51:43 -- setup/common.sh@31 -- # read -r var val _
00:04:54.905 17:51:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026668 kB' 'MemFree: 24166920 kB' 'MemAvailable: 28377928 kB' 'Buffers: 2696 kB' 'Cached: 15004508 kB' 'SwapCached: 0 kB' 'Active: 11769400 kB' 'Inactive: 3691820 kB' 'Active(anon): 11144440 kB' 'Inactive(anon): 0 kB' 'Active(file): 624960 kB' 'Inactive(file): 3691820 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 457276 kB' 'Mapped: 209436 kB' 'Shmem: 10690424 kB' 'KReclaimable: 440584 kB' 'Slab: 824520 kB' 'SReclaimable: 440584 kB' 'SUnreclaim: 383936 kB' 'KernelStack: 12592 kB' 'PageTables: 8600 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29352336 kB' 'Committed_AS: 12321712 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197024 kB' 'VmallocChunk: 0 kB' 'Percpu: 55872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1883740 kB' 'DirectMap2M: 19007488 kB' 'DirectMap1G: 31457280 kB'
[xtrace elided: per-field loop at setup/common.sh@31-32 (IFS=': ', read -r var val _, compare, continue) repeated for every field of the snapshot above against HugePages_Surp until the match below]
00:04:54.906 17:51:43 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:54.906 17:51:43 -- setup/common.sh@33 -- # echo 0
00:04:54.906 17:51:43 -- setup/common.sh@33 -- # return 0
00:04:54.906 17:51:43 -- setup/hugepages.sh@99 -- # surp=0
00:04:54.906 17:51:43 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:54.906 17:51:43 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:54.906 17:51:43 -- setup/common.sh@18 -- # local node=
00:04:54.906 17:51:43 -- setup/common.sh@19 -- # local var val
00:04:54.906 17:51:43 -- setup/common.sh@20 -- # local mem_f mem
00:04:54.906 17:51:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:54.906 17:51:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:54.906 17:51:43 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:54.906 17:51:43 -- setup/common.sh@28 -- # mapfile -t mem
00:04:54.906 17:51:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:54.906 17:51:43 -- setup/common.sh@31 -- # IFS=': '
00:04:54.906 17:51:43 -- setup/common.sh@31 -- # read -r var val _
00:04:54.907 17:51:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026668 kB' 'MemFree: 24165912 kB' 'MemAvailable: 28376920 kB' 'Buffers: 2696 kB' 'Cached: 15004520 kB' 'SwapCached: 0 kB' 'Active: 11769348 kB' 'Inactive: 3691820 kB' 'Active(anon): 11144388 kB' 'Inactive(anon): 0 kB' 'Active(file): 624960 kB' 'Inactive(file): 3691820 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 457216 kB' 'Mapped: 209436 kB' 'Shmem: 10690436 kB' 'KReclaimable: 440584 kB' 'Slab: 824520 kB' 'SReclaimable: 440584 kB' 'SUnreclaim: 383936 kB' 'KernelStack: 12592 kB' 'PageTables: 8600 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29352336 kB' 'Committed_AS: 12321728 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197024 kB' 'VmallocChunk: 0 kB' 'Percpu: 55872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1883740 kB' 'DirectMap2M: 19007488 kB' 'DirectMap1G: 31457280 kB'
[xtrace elided: per-field loop at setup/common.sh@31-32 (IFS=': ', read -r var val _, compare, continue) repeated for every field of the snapshot above against HugePages_Rsvd until the match below]
00:04:54.908 17:51:43 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:54.908 17:51:43 -- setup/common.sh@33 -- # echo 0
00:04:54.908 17:51:43 -- setup/common.sh@33 -- # return 0
00:04:54.908 17:51:43 -- setup/hugepages.sh@100 -- # resv=0
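With anon, surp and resv collected, the script next asserts that the kernel's global HugePages_Total accounts for every page the test requested (hugepages.sh@107 below: 1025 == nr_hugepages + surp + resv). A self-contained version of that consistency check; awk stands in for the script's get_meminfo helper, and the hard-coded 1025 mirrors the trace.

#!/usr/bin/env bash
# Consistency check in the spirit of hugepages.sh@107: the global hugepage
# counter must equal requested pages plus surplus plus reserved.
nr_hugepages=1025
surp=$(awk '/^HugePages_Surp:/  {print $2}' /proc/meminfo)
resv=$(awk '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
if (( total == nr_hugepages + surp + resv )); then
  echo "hugepage accounting consistent: $total == $nr_hugepages + $surp + $resv"
else
  echo "hugepage accounting mismatch: total=$total nr=$nr_hugepages surp=$surp resv=$resv" >&2
  exit 1
fi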
-- # echo nr_hugepages=1025 00:04:54.908 nr_hugepages=1025 00:04:54.908 17:51:43 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:54.908 resv_hugepages=0 00:04:54.908 17:51:43 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:54.908 surplus_hugepages=0 00:04:54.908 17:51:43 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:54.908 anon_hugepages=0 00:04:54.908 17:51:43 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:54.908 17:51:43 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:54.908 17:51:43 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:54.908 17:51:43 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:54.908 17:51:43 -- setup/common.sh@18 -- # local node= 00:04:54.908 17:51:43 -- setup/common.sh@19 -- # local var val 00:04:54.908 17:51:43 -- setup/common.sh@20 -- # local mem_f mem 00:04:54.908 17:51:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:54.908 17:51:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:54.908 17:51:43 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:54.908 17:51:43 -- setup/common.sh@28 -- # mapfile -t mem 00:04:54.908 17:51:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:54.908 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.908 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.908 17:51:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026668 kB' 'MemFree: 24166396 kB' 'MemAvailable: 28377404 kB' 'Buffers: 2696 kB' 'Cached: 15004532 kB' 'SwapCached: 0 kB' 'Active: 11769148 kB' 'Inactive: 3691820 kB' 'Active(anon): 11144188 kB' 'Inactive(anon): 0 kB' 'Active(file): 624960 kB' 'Inactive(file): 3691820 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 456960 kB' 'Mapped: 209436 kB' 'Shmem: 10690448 kB' 'KReclaimable: 440584 kB' 'Slab: 824520 kB' 'SReclaimable: 440584 kB' 'SUnreclaim: 383936 kB' 'KernelStack: 12576 kB' 'PageTables: 8544 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29352336 kB' 'Committed_AS: 12321744 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197024 kB' 'VmallocChunk: 0 kB' 'Percpu: 55872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1883740 kB' 'DirectMap2M: 19007488 kB' 'DirectMap1G: 31457280 kB' 00:04:54.908 17:51:43 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.908 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.908 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.908 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.908 17:51:43 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.908 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.908 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.908 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.908 17:51:43 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.908 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.908 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.908 17:51:43 -- setup/common.sh@31 
-- # read -r var val _ 00:04:54.908 17:51:43 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.908 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.908 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.908 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.908 17:51:43 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.908 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.908 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.908 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.908 17:51:43 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.908 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.908 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.908 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.908 17:51:43 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.909 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.909 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.909 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.909 17:51:43 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.909 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.909 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.909 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.909 17:51:43 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.909 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.909 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.909 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.909 17:51:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.909 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.909 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.909 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.909 17:51:43 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.909 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.909 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.909 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.909 17:51:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.909 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.909 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.909 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.909 17:51:43 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.909 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.909 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.909 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.909 17:51:43 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.909 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.909 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.909 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.909 17:51:43 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.909 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.909 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.909 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.909 17:51:43 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.909 17:51:43 -- 
setup/common.sh@32 -- # continue 00:04:54.909 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.909 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.909 17:51:43 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.909 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.909 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.909 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.909 17:51:43 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.909 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.909 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.909 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.909 17:51:43 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.909 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.909 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.909 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.909 17:51:43 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.909 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.909 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.909 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.909 17:51:43 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.909 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.909 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.909 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.909 17:51:43 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.909 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.909 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.909 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.909 17:51:43 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.909 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.909 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.909 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.909 17:51:43 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.909 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.909 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.909 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.909 17:51:43 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.909 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.909 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.909 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.909 17:51:43 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.909 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.909 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.909 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.909 17:51:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.909 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.909 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.909 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.909 17:51:43 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.909 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.909 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.909 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.909 17:51:43 -- 
setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.909 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.909 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.909 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.909 17:51:43 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.909 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.909 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.909 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.909 17:51:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.909 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.909 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.909 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.909 17:51:43 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.909 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.909 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.909 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.909 17:51:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.909 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.909 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.909 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.909 17:51:43 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.909 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.909 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.909 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.909 17:51:43 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.909 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.909 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.909 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.909 17:51:43 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.909 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.909 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.909 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.909 17:51:43 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.909 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.909 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.909 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.909 17:51:43 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.909 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.909 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.909 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.909 17:51:43 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.909 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.909 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.909 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.909 17:51:43 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.909 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.909 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.909 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.909 17:51:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.909 17:51:43 -- setup/common.sh@32 -- # continue 
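The xtrace above is setup/common.sh's get_meminfo walking every /proc/meminfo field until the requested key (first HugePages_Rsvd, then HugePages_Total) matches; each mismatching field appears in the log as one [[ ... ]] test followed by continue, which is why the same pair repeats for every meminfo key. A minimal sketch of that lookup, reconstructed from the traced statements (the trimmed-down function body and the standalone form are my simplification, not the script verbatim):

    shopt -s extglob

    get_meminfo() {
        local get=$1 node=$2 mem_f line var val _
        local -a mem
        # Default to the system-wide view; switch to the per-node file when it exists.
        mem_f=/proc/meminfo
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        # Per-node files prefix each field with "Node N "; strip that prefix.
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # skip fields until the key matches
            echo "$val"
            return 0
        done
        return 1
    }

    get_meminfo HugePages_Total        # system-wide: 1025 in this run
    get_meminfo HugePages_Surp 0       # NUMA node 0 only
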
00:04:54.909 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.909 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.909 17:51:43 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.909 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.909 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.909 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.909 17:51:43 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.909 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.909 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.909 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.909 17:51:43 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.909 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.909 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.910 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.910 17:51:43 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.910 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.910 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.910 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.910 17:51:43 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.910 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.910 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.910 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.910 17:51:43 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.910 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.910 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.910 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.910 17:51:43 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.910 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.910 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.910 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.910 17:51:43 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.910 17:51:43 -- setup/common.sh@33 -- # echo 1025 00:04:54.910 17:51:43 -- setup/common.sh@33 -- # return 0 00:04:54.910 17:51:43 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:54.910 17:51:43 -- setup/hugepages.sh@112 -- # get_nodes 00:04:54.910 17:51:43 -- setup/hugepages.sh@27 -- # local node 00:04:54.910 17:51:43 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:54.910 17:51:43 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:54.910 17:51:43 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:54.910 17:51:43 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:04:54.910 17:51:43 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:54.910 17:51:43 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:54.910 17:51:43 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:54.910 17:51:43 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:54.910 17:51:43 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:54.910 17:51:43 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:54.910 17:51:43 -- setup/common.sh@18 -- # local node=0 00:04:54.910 17:51:43 -- setup/common.sh@19 -- # local var val 00:04:54.910 17:51:43 -- setup/common.sh@20 
-- # local mem_f mem 00:04:54.910 17:51:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:54.910 17:51:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:54.910 17:51:43 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:54.910 17:51:43 -- setup/common.sh@28 -- # mapfile -t mem 00:04:54.910 17:51:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:54.910 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.910 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.910 17:51:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24572356 kB' 'MemFree: 18663552 kB' 'MemUsed: 5908804 kB' 'SwapCached: 0 kB' 'Active: 3774124 kB' 'Inactive: 167944 kB' 'Active(anon): 3443180 kB' 'Inactive(anon): 0 kB' 'Active(file): 330944 kB' 'Inactive(file): 167944 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3711144 kB' 'Mapped: 165480 kB' 'AnonPages: 234004 kB' 'Shmem: 3212256 kB' 'KernelStack: 7416 kB' 'PageTables: 3792 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 294568 kB' 'Slab: 484540 kB' 'SReclaimable: 294568 kB' 'SUnreclaim: 189972 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:54.910 17:51:43 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.910 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.910 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.910 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.910 17:51:43 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.910 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.910 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.910 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.910 17:51:43 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.910 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.910 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.910 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.910 17:51:43 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.910 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.910 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.910 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.910 17:51:43 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.910 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.910 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.910 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.910 17:51:43 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.910 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.910 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.910 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.910 17:51:43 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.910 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.910 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.910 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.910 17:51:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.910 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.910 
17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.910 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.910 17:51:43 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.910 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.910 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.910 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.910 17:51:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.910 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.910 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.910 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.910 17:51:43 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.910 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.910 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.910 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.910 17:51:43 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.910 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.910 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.910 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.910 17:51:43 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.910 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.910 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.910 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.910 17:51:43 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.910 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.910 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.910 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.910 17:51:43 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.910 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.910 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.910 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.910 17:51:43 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.910 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.910 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.910 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.910 17:51:43 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.910 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.910 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.910 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.910 17:51:43 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.910 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.910 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.910 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.910 17:51:43 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.910 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.910 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.910 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.910 17:51:43 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.910 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.910 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.910 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.910 17:51:43 -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.910 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.910 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.910 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.910 17:51:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.910 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.910 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.910 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.911 17:51:43 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.911 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.911 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.911 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.911 17:51:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.911 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.911 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.911 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.911 17:51:43 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.911 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.911 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.911 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.911 17:51:43 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.911 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.911 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.911 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.911 17:51:43 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.911 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.911 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.911 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.911 17:51:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.911 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.911 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.911 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.911 17:51:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.911 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.911 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.911 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.911 17:51:43 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.911 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.911 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.911 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.911 17:51:43 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.911 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.911 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.911 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.911 17:51:43 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.911 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.911 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.911 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.911 17:51:43 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.911 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.911 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.911 
17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.911 17:51:43 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.911 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.911 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.911 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.911 17:51:43 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.911 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.911 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.911 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.911 17:51:43 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.911 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.911 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.911 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.911 17:51:43 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.911 17:51:43 -- setup/common.sh@33 -- # echo 0 00:04:54.911 17:51:43 -- setup/common.sh@33 -- # return 0 00:04:54.911 17:51:43 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:54.911 17:51:43 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:54.911 17:51:43 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:54.911 17:51:43 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:54.911 17:51:43 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:54.911 17:51:43 -- setup/common.sh@18 -- # local node=1 00:04:54.911 17:51:43 -- setup/common.sh@19 -- # local var val 00:04:54.911 17:51:43 -- setup/common.sh@20 -- # local mem_f mem 00:04:54.911 17:51:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:54.911 17:51:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:54.911 17:51:43 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:54.911 17:51:43 -- setup/common.sh@28 -- # mapfile -t mem 00:04:54.911 17:51:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:54.911 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.911 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.911 17:51:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19454312 kB' 'MemFree: 5508184 kB' 'MemUsed: 13946128 kB' 'SwapCached: 0 kB' 'Active: 7995344 kB' 'Inactive: 3523876 kB' 'Active(anon): 7701328 kB' 'Inactive(anon): 0 kB' 'Active(file): 294016 kB' 'Inactive(file): 3523876 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11296116 kB' 'Mapped: 43956 kB' 'AnonPages: 223268 kB' 'Shmem: 7478224 kB' 'KernelStack: 5160 kB' 'PageTables: 4812 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 146016 kB' 'Slab: 339980 kB' 'SReclaimable: 146016 kB' 'SUnreclaim: 193964 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:04:54.911 17:51:43 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.911 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.911 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.911 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.911 17:51:43 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.911 17:51:43 -- setup/common.sh@32 -- # continue 
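By this point the test has read HugePages_Surp for node 0 and is repeating the lookup for node 1: odd_alloc requests 1025 pages on a two-node box and expects the kernel to split them 512/513 across the nodes. A sketch of that per-node walk and the final check, reusing the get_meminfo sketch above (the summing loop and the success message are mine, not the script's):

    shopt -s extglob nullglob

    declare -A nodes_test
    for node in /sys/devices/system/node/node+([0-9]); do
        id=${node##*node}                                  # ".../node1" -> "1"
        nodes_test[$id]=$(get_meminfo HugePages_Total "$id")
    done

    total=0
    for id in "${!nodes_test[@]}"; do
        (( total += nodes_test[id] ))
        echo "node${id}=${nodes_test[id]} hugepages"
    done
    (( total == 1025 )) && echo "odd allocation verified across ${#nodes_test[@]} nodes"
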
00:04:54.911 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.911 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.911 17:51:43 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.911 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.911 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.911 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.911 17:51:43 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.911 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.911 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.911 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.911 17:51:43 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.911 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.911 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.911 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.911 17:51:43 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.911 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.911 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.911 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.911 17:51:43 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.911 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.911 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.911 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.911 17:51:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.911 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.911 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.911 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.911 17:51:43 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.911 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.911 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.911 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.911 17:51:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.911 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.911 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.911 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.911 17:51:43 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.911 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.911 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.911 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.911 17:51:43 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.911 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.911 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.911 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.911 17:51:43 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.911 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.911 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.911 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.911 17:51:43 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.911 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.911 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.911 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.911 17:51:43 -- setup/common.sh@32 -- # [[ FilePages 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.911 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.911 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.911 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.911 17:51:43 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.911 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.911 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.911 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.912 17:51:43 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.912 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.912 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.912 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.912 17:51:43 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.912 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.912 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.912 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.912 17:51:43 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.912 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.912 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.912 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.912 17:51:43 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.912 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.912 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.912 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.912 17:51:43 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.912 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.912 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.912 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.912 17:51:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.912 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.912 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.912 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.912 17:51:43 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.912 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.912 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.912 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.912 17:51:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.912 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.912 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.912 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.912 17:51:43 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.912 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.912 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.912 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.912 17:51:43 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.912 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.912 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.912 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.912 17:51:43 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.912 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.912 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.912 17:51:43 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:54.912 17:51:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.912 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.912 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.912 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.912 17:51:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.912 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.912 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.912 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.912 17:51:43 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.912 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.912 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.912 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.912 17:51:43 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.912 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.912 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.912 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.912 17:51:43 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.912 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.912 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.912 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.912 17:51:43 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.912 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.912 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.912 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.912 17:51:43 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.912 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.912 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.912 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.912 17:51:43 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.912 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.912 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.912 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.912 17:51:43 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.912 17:51:43 -- setup/common.sh@32 -- # continue 00:04:54.912 17:51:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.912 17:51:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.912 17:51:43 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.912 17:51:43 -- setup/common.sh@33 -- # echo 0 00:04:54.912 17:51:43 -- setup/common.sh@33 -- # return 0 00:04:54.912 17:51:43 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:54.912 17:51:43 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:54.912 17:51:43 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:54.912 17:51:43 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:54.912 17:51:43 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:04:54.912 node0=512 expecting 513 00:04:54.912 17:51:43 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:54.912 17:51:43 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:54.912 17:51:43 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:54.912 17:51:43 -- 
setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:04:54.912 node1=513 expecting 512 00:04:54.912 17:51:43 -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:04:54.912 00:04:54.912 real 0m1.525s 00:04:54.912 user 0m0.629s 00:04:54.912 sys 0m0.864s 00:04:54.912 17:51:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:54.912 17:51:43 -- common/autotest_common.sh@10 -- # set +x 00:04:54.912 ************************************ 00:04:54.912 END TEST odd_alloc 00:04:54.912 ************************************ 00:04:54.912 17:51:43 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:54.912 17:51:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:54.912 17:51:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:54.912 17:51:43 -- common/autotest_common.sh@10 -- # set +x 00:04:55.170 ************************************ 00:04:55.170 START TEST custom_alloc 00:04:55.170 ************************************ 00:04:55.170 17:51:43 -- common/autotest_common.sh@1111 -- # custom_alloc 00:04:55.170 17:51:43 -- setup/hugepages.sh@167 -- # local IFS=, 00:04:55.170 17:51:43 -- setup/hugepages.sh@169 -- # local node 00:04:55.170 17:51:43 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:55.170 17:51:43 -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:55.170 17:51:43 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:55.170 17:51:43 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:55.170 17:51:43 -- setup/hugepages.sh@49 -- # local size=1048576 00:04:55.170 17:51:43 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:55.170 17:51:43 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:55.170 17:51:43 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:55.170 17:51:43 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:55.170 17:51:43 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:55.170 17:51:43 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:55.170 17:51:43 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:55.170 17:51:43 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:55.170 17:51:43 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:55.170 17:51:43 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:55.170 17:51:43 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:55.170 17:51:43 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:55.170 17:51:43 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:55.170 17:51:43 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:55.170 17:51:43 -- setup/hugepages.sh@83 -- # : 256 00:04:55.170 17:51:43 -- setup/hugepages.sh@84 -- # : 1 00:04:55.170 17:51:43 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:55.170 17:51:43 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:55.170 17:51:43 -- setup/hugepages.sh@83 -- # : 0 00:04:55.170 17:51:43 -- setup/hugepages.sh@84 -- # : 0 00:04:55.170 17:51:43 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:55.170 17:51:43 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:55.170 17:51:43 -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:04:55.170 17:51:43 -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:04:55.170 17:51:43 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:55.170 17:51:43 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:55.170 17:51:43 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:55.170 17:51:43 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:55.170 17:51:43 
-- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:55.170 17:51:43 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:55.170 17:51:43 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:55.170 17:51:43 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:55.170 17:51:43 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:55.170 17:51:43 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:55.170 17:51:43 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:55.170 17:51:43 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:55.170 17:51:43 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:55.170 17:51:43 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:55.170 17:51:43 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:55.170 17:51:43 -- setup/hugepages.sh@78 -- # return 0 00:04:55.170 17:51:43 -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:04:55.170 17:51:43 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:55.170 17:51:43 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:55.170 17:51:43 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:55.170 17:51:43 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:55.170 17:51:43 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:55.170 17:51:43 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:55.170 17:51:43 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:55.170 17:51:43 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:55.170 17:51:43 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:55.170 17:51:43 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:55.170 17:51:43 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:55.170 17:51:43 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:55.170 17:51:43 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:55.170 17:51:43 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:55.170 17:51:43 -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:04:55.170 17:51:43 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:55.170 17:51:43 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:55.170 17:51:43 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:55.170 17:51:43 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:04:55.170 17:51:43 -- setup/hugepages.sh@78 -- # return 0 00:04:55.170 17:51:43 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:04:55.170 17:51:43 -- setup/hugepages.sh@187 -- # setup output 00:04:55.170 17:51:43 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:55.170 17:51:43 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:56.549 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:56.549 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:56.549 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:56.549 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:56.549 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:56.549 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:56.549 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:56.549 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:56.549 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:56.549 0000:80:04.7 (8086 0e27): Already using the vfio-pci 
driver 00:04:56.549 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:56.549 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:56.549 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:56.550 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:56.550 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:56.550 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:56.550 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:56.550 17:51:45 -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:04:56.550 17:51:45 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:56.550 17:51:45 -- setup/hugepages.sh@89 -- # local node 00:04:56.550 17:51:45 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:56.550 17:51:45 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:56.550 17:51:45 -- setup/hugepages.sh@92 -- # local surp 00:04:56.550 17:51:45 -- setup/hugepages.sh@93 -- # local resv 00:04:56.550 17:51:45 -- setup/hugepages.sh@94 -- # local anon 00:04:56.550 17:51:45 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:56.550 17:51:45 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:56.550 17:51:45 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:56.550 17:51:45 -- setup/common.sh@18 -- # local node= 00:04:56.550 17:51:45 -- setup/common.sh@19 -- # local var val 00:04:56.550 17:51:45 -- setup/common.sh@20 -- # local mem_f mem 00:04:56.550 17:51:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:56.550 17:51:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:56.550 17:51:45 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:56.550 17:51:45 -- setup/common.sh@28 -- # mapfile -t mem 00:04:56.550 17:51:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:56.550 17:51:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.550 17:51:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.550 17:51:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026668 kB' 'MemFree: 23146668 kB' 'MemAvailable: 27357676 kB' 'Buffers: 2696 kB' 'Cached: 15004608 kB' 'SwapCached: 0 kB' 'Active: 11770336 kB' 'Inactive: 3691820 kB' 'Active(anon): 11145376 kB' 'Inactive(anon): 0 kB' 'Active(file): 624960 kB' 'Inactive(file): 3691820 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 458068 kB' 'Mapped: 209496 kB' 'Shmem: 10690524 kB' 'KReclaimable: 440584 kB' 'Slab: 824140 kB' 'SReclaimable: 440584 kB' 'SUnreclaim: 383556 kB' 'KernelStack: 12560 kB' 'PageTables: 8488 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 28829072 kB' 'Committed_AS: 12321700 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197024 kB' 'VmallocChunk: 0 kB' 'Percpu: 55872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1883740 kB' 'DirectMap2M: 19007488 kB' 'DirectMap1G: 31457280 kB' 00:04:56.550 17:51:45 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.550 17:51:45 -- setup/common.sh@32 -- # continue 00:04:56.550 17:51:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.550 17:51:45 
-- setup/common.sh@31 -- # read -r var val _ 00:04:56.550 17:51:45 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.550 17:51:45 -- setup/common.sh@32 -- # continue 00:04:56.550 17:51:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.550 17:51:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.550 17:51:45 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.550 17:51:45 -- setup/common.sh@32 -- # continue 00:04:56.550 17:51:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.550 17:51:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.550 17:51:45 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.550 17:51:45 -- setup/common.sh@32 -- # continue 00:04:56.550 17:51:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.550 17:51:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.550 17:51:45 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.550 17:51:45 -- setup/common.sh@32 -- # continue 00:04:56.550 17:51:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.550 17:51:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.550 17:51:45 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.550 17:51:45 -- setup/common.sh@32 -- # continue 00:04:56.550 17:51:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.550 17:51:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.550 17:51:45 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.550 17:51:45 -- setup/common.sh@32 -- # continue 00:04:56.550 17:51:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.550 17:51:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.550 17:51:45 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.550 17:51:45 -- setup/common.sh@32 -- # continue 00:04:56.550 17:51:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.550 17:51:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.550 17:51:45 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.550 17:51:45 -- setup/common.sh@32 -- # continue 00:04:56.550 17:51:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.550 17:51:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.550 17:51:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.550 17:51:45 -- setup/common.sh@32 -- # continue 00:04:56.550 17:51:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.550 17:51:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.550 17:51:45 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.550 17:51:45 -- setup/common.sh@32 -- # continue 00:04:56.550 17:51:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.550 17:51:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.550 17:51:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.550 17:51:45 -- setup/common.sh@32 -- # continue 00:04:56.550 17:51:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.550 17:51:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.550 17:51:45 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.550 17:51:45 -- setup/common.sh@32 -- # continue 00:04:56.550 17:51:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.550 17:51:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.550 17:51:45 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.550 17:51:45 -- setup/common.sh@32 -- # continue 
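The custom_alloc test now underway drives the allocator the other way around: it fixes per-node counts first (512 pages on node 0, 1024 on node 1) and hands them to spdk/scripts/setup.sh as a single HUGENODE string, joined with a comma IFS exactly as the trace shows. A compact sketch of that assembly (build_hugenode is a hypothetical wrapper name; the nodes_hp[N]=COUNT spec format is the script's own):

    build_hugenode() {
        local IFS=,                        # "${spec[*]}" joins elements with commas
        local -a nodes_hp=(512 1024)       # pages wanted on node 0 and node 1
        local -a spec=()
        local node
        for node in "${!nodes_hp[@]}"; do
            spec+=("nodes_hp[$node]=${nodes_hp[$node]}")
        done
        echo "${spec[*]}"                  # nodes_hp[0]=512,nodes_hp[1]=1024
    }

    HUGENODE=$(build_hugenode)             # consumed by setup.sh when reserving pages

The verification loop resuming below is the same get_meminfo pattern again, this time expecting HugePages_Total to read 1536 system-wide, i.e. the sum of the two per-node requests.
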
00:04:56.550 17:51:45 -- setup/common.sh@31 -- # IFS=': '
00:04:56.550 17:51:45 -- setup/common.sh@31 -- # read -r var val _
[... xtrace elided: setup/common.sh@32 compares each remaining /proc/meminfo key (SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted) against AnonHugePages and hits "continue" on every non-match ...]
00:04:56.551 17:51:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:56.551 17:51:45 -- setup/common.sh@33 -- # echo 0
00:04:56.551 17:51:45 -- setup/common.sh@33 -- # return 0
00:04:56.551 17:51:45 -- setup/hugepages.sh@97 -- # anon=0
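For anyone following the trace, the get_meminfo helper exercised here is at heart a key lookup over /proc/meminfo: snapshot the file, then read it line by line with IFS=': ' and "continue" past every key except the requested one. A minimal standalone sketch of the same technique (simplified; the real test/setup/common.sh additionally snapshots via mapfile and supports the per-node files seen further down this log):

    #!/usr/bin/env bash
    # Print the value of a single /proc/meminfo field, e.g. "AnonHugePages".
    get_meminfo_sketch() {
        local get=$1 var val _
        # A line looks like "AnonHugePages:       0 kB"; with IFS=': ' the key
        # lands in $var, the number in $val, and the trailing "kB" unit in $_.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip every other key
            echo "$val"
            return 0
        done < /proc/meminfo
        return 1   # requested key not present
    }

    get_meminfo_sketch AnonHugePages   # printed 0 on this box, hence "anon=0" above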
00:04:56.551 17:51:45 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:56.551 17:51:45 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:56.551 17:51:45 -- setup/common.sh@18 -- # local node=
00:04:56.551 17:51:45 -- setup/common.sh@19 -- # local var val
00:04:56.551 17:51:45 -- setup/common.sh@20 -- # local mem_f mem
00:04:56.551 17:51:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:56.551 17:51:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:56.551 17:51:45 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:56.551 17:51:45 -- setup/common.sh@28 -- # mapfile -t mem
00:04:56.551 17:51:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:56.551 17:51:45 -- setup/common.sh@31 -- # IFS=': '
00:04:56.551 17:51:45 -- setup/common.sh@31 -- # read -r var val _
00:04:56.551 17:51:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026668 kB' 'MemFree: 23146428 kB' 'MemAvailable: 27357436 kB' 'Buffers: 2696 kB' 'Cached: 15004616 kB' 'SwapCached: 0 kB' 'Active: 11770372 kB' 'Inactive: 3691820 kB' 'Active(anon): 11145412 kB' 'Inactive(anon): 0 kB' 'Active(file): 624960 kB' 'Inactive(file): 3691820 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 458152 kB' 'Mapped: 209476 kB' 'Shmem: 10690532 kB' 'KReclaimable: 440584 kB' 'Slab: 824132 kB' 'SReclaimable: 440584 kB' 'SUnreclaim: 383548 kB' 'KernelStack: 12592 kB' 'PageTables: 8592 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 28829072 kB' 'Committed_AS: 12322080 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196992 kB' 'VmallocChunk: 0 kB' 'Percpu: 55872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1883740 kB' 'DirectMap2M: 19007488 kB' 'DirectMap1G: 31457280 kB'
[... xtrace elided: the @31/@32 read-and-compare loop walks every /proc/meminfo key from MemTotal onward, hitting "continue" on each until it reaches HugePages_Surp ...]
00:04:56.552 17:51:45 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:56.552 17:51:45 -- setup/common.sh@33 -- # echo 0
00:04:56.552 17:51:45 -- setup/common.sh@33 -- # return 0
00:04:56.552 17:51:45 -- setup/hugepages.sh@99 -- # surp=0
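HugePages_Surp and HugePages_Rsvd are also exposed per page size under the hugetlb sysfs tree, which offers a quick cross-check of the values this script digs out of /proc/meminfo. A small sketch, assuming the 2 MiB page size this run reports ("Hugepagesize: 2048 kB"):

    # Same counters, read straight from sysfs rather than /proc/meminfo:
    d=/sys/kernel/mm/hugepages/hugepages-2048kB
    cat "$d/surplus_hugepages"   # HugePages_Surp -> 0 in this run
    cat "$d/resv_hugepages"      # HugePages_Rsvd -> 0 in this run
    cat "$d/nr_hugepages"        # persistent pool; equals HugePages_Total (1536) while surplus is 0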
00:04:56.552 17:51:45 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:56.552 17:51:45 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:56.552 17:51:45 -- setup/common.sh@18 -- # local node=
00:04:56.552 17:51:45 -- setup/common.sh@19 -- # local var val
00:04:56.552 17:51:45 -- setup/common.sh@20 -- # local mem_f mem
00:04:56.552 17:51:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:56.552 17:51:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:56.552 17:51:45 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:56.552 17:51:45 -- setup/common.sh@28 -- # mapfile -t mem
00:04:56.552 17:51:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:56.552 17:51:45 -- setup/common.sh@31 -- # IFS=': '
00:04:56.552 17:51:45 -- setup/common.sh@31 -- # read -r var val _
00:04:56.552 17:51:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026668 kB' 'MemFree: 23146780 kB' 'MemAvailable: 27357788 kB' 'Buffers: 2696 kB' 'Cached: 15004628 kB' 'SwapCached: 0 kB' 'Active: 11770304 kB' 'Inactive: 3691820 kB' 'Active(anon): 11145344 kB' 'Inactive(anon): 0 kB' 'Active(file): 624960 kB' 'Inactive(file): 3691820 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 458072 kB' 'Mapped: 209476 kB' 'Shmem: 10690544 kB' 'KReclaimable: 440584 kB' 'Slab: 824148 kB' 'SReclaimable: 440584 kB' 'SUnreclaim: 383564 kB' 'KernelStack: 12592 kB' 'PageTables: 8604 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 28829072 kB' 'Committed_AS: 12322096 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196992 kB' 'VmallocChunk: 0 kB' 'Percpu: 55872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1883740 kB' 'DirectMap2M: 19007488 kB' 'DirectMap1G: 31457280 kB'
[... xtrace elided: the same key-matching loop continues past every /proc/meminfo entry until HugePages_Rsvd matches ...]
00:04:56.553 17:51:45 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:56.553 17:51:45 -- setup/common.sh@33 -- # echo 0
00:04:56.553 17:51:45 -- setup/common.sh@33 -- # return 0
00:04:56.553 17:51:45 -- setup/hugepages.sh@100 -- # resv=0
00:04:56.553 17:51:45 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
nr_hugepages=1536
17:51:45 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
17:51:45 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
17:51:45 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
17:51:45 -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:04:56.553 17:51:45 -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
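The arithmetic probes above (hugepages.sh@107 and @109) and the get_meminfo HugePages_Total call that follows add up to the pool-accounting check this test revolves around: the total the kernel reports must equal the pool the test configured plus any surplus and reserved pages. Condensed into a short sketch (get_meminfo as traced; values taken from this run):

    nr_hugepages=1536                      # pool size the test configured
    surp=$(get_meminfo HugePages_Surp)     # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)     # 0 in this run
    total=$(get_meminfo HugePages_Total)   # 1536 in this run
    (( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch' >&2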
00:04:56.553 17:51:45 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:56.553 17:51:45 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:56.553 17:51:45 -- setup/common.sh@18 -- # local node=
00:04:56.553 17:51:45 -- setup/common.sh@19 -- # local var val
00:04:56.553 17:51:45 -- setup/common.sh@20 -- # local mem_f mem
00:04:56.553 17:51:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:56.553 17:51:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:56.553 17:51:45 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:56.553 17:51:45 -- setup/common.sh@28 -- # mapfile -t mem
00:04:56.553 17:51:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:56.553 17:51:45 -- setup/common.sh@31 -- # IFS=': '
00:04:56.553 17:51:45 -- setup/common.sh@31 -- # read -r var val _
00:04:56.553 17:51:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026668 kB' 'MemFree: 23146832 kB' 'MemAvailable: 27357840 kB' 'Buffers: 2696 kB' 'Cached: 15004640 kB' 'SwapCached: 0 kB' 'Active: 11770304 kB' 'Inactive: 3691820 kB' 'Active(anon): 11145344 kB' 'Inactive(anon): 0 kB' 'Active(file): 624960 kB' 'Inactive(file): 3691820 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 458068 kB' 'Mapped: 209476 kB' 'Shmem: 10690556 kB' 'KReclaimable: 440584 kB' 'Slab: 824148 kB' 'SReclaimable: 440584 kB' 'SUnreclaim: 383564 kB' 'KernelStack: 12592 kB' 'PageTables: 8604 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 28829072 kB' 'Committed_AS: 12322112 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196992 kB' 'VmallocChunk: 0 kB' 'Percpu: 55872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1883740 kB' 'DirectMap2M: 19007488 kB' 'DirectMap1G: 31457280 kB'
[... xtrace elided: the key-matching loop continues past every /proc/meminfo entry until HugePages_Total matches ...]
00:04:56.555 17:51:45 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:56.555 17:51:45 -- setup/common.sh@33 -- # echo 1536
00:04:56.555 17:51:45 -- setup/common.sh@33 -- # return 0
00:04:56.555 17:51:45 -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv ))
00:04:56.555 17:51:45 -- setup/hugepages.sh@112 -- # get_nodes
00:04:56.555 17:51:45 -- setup/hugepages.sh@27 -- # local node
00:04:56.555 17:51:45 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:56.555 17:51:45 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:56.555 17:51:45 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:56.555 17:51:45 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:56.555 17:51:45 -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:56.555 17:51:45 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
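The per-node probes that follow reuse get_meminfo with a node argument, which swaps the source file for /sys/devices/system/node/nodeN/meminfo. Lines there carry a "Node N " prefix (e.g. "Node 0 HugePages_Total: 512") that the traced "${mem[@]#Node +([0-9]) }" expansion strips before the usual key-matching loop runs. A minimal sketch of that per-node path (extglob enables the +([0-9]) pattern, as in the original script):

    shopt -s extglob
    node=0
    mapfile -t mem < "/sys/devices/system/node/node${node}/meminfo"
    mem=("${mem[@]#Node +([0-9]) }")   # drop the leading "Node 0 " from every line
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == HugePages_Surp ]] && { echo "$val"; break; }   # -> 0 for node0 here
    done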
setup/common.sh@31 -- # IFS=': ' 00:04:56.555 17:51:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.555 17:51:45 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.555 17:51:45 -- setup/common.sh@32 -- # continue 00:04:56.555 17:51:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.555 17:51:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.555 17:51:45 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.555 17:51:45 -- setup/common.sh@32 -- # continue 00:04:56.555 17:51:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.555 17:51:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.555 17:51:45 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.555 17:51:45 -- setup/common.sh@32 -- # continue 00:04:56.555 17:51:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.555 17:51:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.555 17:51:45 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.555 17:51:45 -- setup/common.sh@32 -- # continue 00:04:56.555 17:51:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.555 17:51:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.555 17:51:45 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.555 17:51:45 -- setup/common.sh@32 -- # continue 00:04:56.555 17:51:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.555 17:51:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.555 17:51:45 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.555 17:51:45 -- setup/common.sh@32 -- # continue 00:04:56.555 17:51:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.555 17:51:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.555 17:51:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.555 17:51:45 -- setup/common.sh@32 -- # continue 00:04:56.555 17:51:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.555 17:51:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.555 17:51:45 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.555 17:51:45 -- setup/common.sh@32 -- # continue 00:04:56.555 17:51:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.555 17:51:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.555 17:51:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.555 17:51:45 -- setup/common.sh@32 -- # continue 00:04:56.555 17:51:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.555 17:51:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.555 17:51:45 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.555 17:51:45 -- setup/common.sh@32 -- # continue 00:04:56.555 17:51:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.555 17:51:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.555 17:51:45 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.555 17:51:45 -- setup/common.sh@32 -- # continue 00:04:56.555 17:51:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.555 17:51:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.555 17:51:45 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.555 17:51:45 -- setup/common.sh@32 -- # continue 00:04:56.555 17:51:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.555 17:51:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.555 17:51:45 -- setup/common.sh@32 -- # [[ Writeback == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.555 17:51:45 -- setup/common.sh@32 -- # continue 00:04:56.555 17:51:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.555 17:51:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.555 17:51:45 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.555 17:51:45 -- setup/common.sh@32 -- # continue 00:04:56.555 17:51:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.555 17:51:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.555 17:51:45 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.555 17:51:45 -- setup/common.sh@32 -- # continue 00:04:56.555 17:51:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.555 17:51:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.555 17:51:45 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.555 17:51:45 -- setup/common.sh@32 -- # continue 00:04:56.555 17:51:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.555 17:51:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.555 17:51:45 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.555 17:51:45 -- setup/common.sh@32 -- # continue 00:04:56.555 17:51:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.555 17:51:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.555 17:51:45 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.555 17:51:45 -- setup/common.sh@32 -- # continue 00:04:56.555 17:51:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.555 17:51:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.555 17:51:45 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.555 17:51:45 -- setup/common.sh@32 -- # continue 00:04:56.555 17:51:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.555 17:51:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.555 17:51:45 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.555 17:51:45 -- setup/common.sh@32 -- # continue 00:04:56.555 17:51:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.555 17:51:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.555 17:51:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.555 17:51:45 -- setup/common.sh@32 -- # continue 00:04:56.555 17:51:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.555 17:51:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.555 17:51:45 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.555 17:51:45 -- setup/common.sh@32 -- # continue 00:04:56.555 17:51:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.555 17:51:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.555 17:51:45 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.555 17:51:45 -- setup/common.sh@32 -- # continue 00:04:56.555 17:51:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.555 17:51:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.555 17:51:45 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.555 17:51:45 -- setup/common.sh@32 -- # continue 00:04:56.555 17:51:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.555 17:51:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.555 17:51:45 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.555 17:51:45 -- setup/common.sh@32 -- # continue 00:04:56.555 17:51:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.555 17:51:45 -- setup/common.sh@31 
-- # read -r var val _ 00:04:56.555 17:51:45 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.555 17:51:45 -- setup/common.sh@32 -- # continue 00:04:56.555 17:51:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.555 17:51:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.555 17:51:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.555 17:51:45 -- setup/common.sh@32 -- # continue 00:04:56.555 17:51:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.555 17:51:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.555 17:51:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.555 17:51:45 -- setup/common.sh@32 -- # continue 00:04:56.555 17:51:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.556 17:51:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.556 17:51:45 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.556 17:51:45 -- setup/common.sh@32 -- # continue 00:04:56.556 17:51:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.556 17:51:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.556 17:51:45 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.556 17:51:45 -- setup/common.sh@32 -- # continue 00:04:56.556 17:51:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.556 17:51:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.556 17:51:45 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.556 17:51:45 -- setup/common.sh@32 -- # continue 00:04:56.556 17:51:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.556 17:51:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.556 17:51:45 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.556 17:51:45 -- setup/common.sh@32 -- # continue 00:04:56.556 17:51:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.556 17:51:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.556 17:51:45 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.556 17:51:45 -- setup/common.sh@32 -- # continue 00:04:56.556 17:51:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.556 17:51:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.556 17:51:45 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.556 17:51:45 -- setup/common.sh@32 -- # continue 00:04:56.556 17:51:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.556 17:51:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.556 17:51:45 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.556 17:51:45 -- setup/common.sh@32 -- # continue 00:04:56.556 17:51:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.556 17:51:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.556 17:51:45 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.556 17:51:45 -- setup/common.sh@33 -- # echo 0 00:04:56.556 17:51:45 -- setup/common.sh@33 -- # return 0 00:04:56.556 17:51:45 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:56.556 17:51:45 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:56.556 17:51:45 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:56.556 17:51:45 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:56.556 17:51:45 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:56.556 17:51:45 -- setup/common.sh@18 -- # local node=1 
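A note on why every comparison in this trace renders as `\H\u\g\e\P\a\g\e\s\_\S\u\r\p`: bash's xtrace backslash-escapes each character of a quoted `[[ ... == ... ]]` right-hand side to show it is matched literally rather than as a glob pattern. The loop is doing nothing more exotic than comparing each field name against the string `HugePages_Surp`. A two-line reproduction:

```bash
# Reproduces the escaped rendering seen throughout the trace.
var=HugePages_Surp
set -x
[[ $var == "HugePages_Surp" ]] && echo match   # xtrace shows ...\H\u\g\e...
set +x
```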
00:04:56.556 17:51:45 -- setup/common.sh@19 -- # local var val 00:04:56.556 17:51:45 -- setup/common.sh@20 -- # local mem_f mem 00:04:56.556 17:51:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:56.556 17:51:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:56.556 17:51:45 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:56.556 17:51:45 -- setup/common.sh@28 -- # mapfile -t mem 00:04:56.556 17:51:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:56.556 17:51:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.556 17:51:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.556 17:51:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19454312 kB' 'MemFree: 4464624 kB' 'MemUsed: 14989688 kB' 'SwapCached: 0 kB' 'Active: 7994916 kB' 'Inactive: 3523876 kB' 'Active(anon): 7700900 kB' 'Inactive(anon): 0 kB' 'Active(file): 294016 kB' 'Inactive(file): 3523876 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11296196 kB' 'Mapped: 43980 kB' 'AnonPages: 222656 kB' 'Shmem: 7478304 kB' 'KernelStack: 5128 kB' 'PageTables: 4672 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 146016 kB' 'Slab: 339808 kB' 'SReclaimable: 146016 kB' 'SUnreclaim: 193792 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:56.556 17:51:45 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.556 17:51:45 -- setup/common.sh@32 -- # continue 00:04:56.556 17:51:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.556 17:51:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.556 17:51:45 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.556 17:51:45 -- setup/common.sh@32 -- # continue 00:04:56.556 17:51:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.556 17:51:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.556 17:51:45 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.556 17:51:45 -- setup/common.sh@32 -- # continue 00:04:56.556 17:51:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.556 17:51:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.556 17:51:45 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.556 17:51:45 -- setup/common.sh@32 -- # continue 00:04:56.556 17:51:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.556 17:51:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.556 17:51:45 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.556 17:51:45 -- setup/common.sh@32 -- # continue 00:04:56.556 17:51:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.556 17:51:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.556 17:51:45 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.556 17:51:45 -- setup/common.sh@32 -- # continue 00:04:56.556 17:51:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.556 17:51:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.556 17:51:45 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.556 17:51:45 -- setup/common.sh@32 -- # continue 00:04:56.556 17:51:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.556 17:51:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.556 17:51:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.556 17:51:45 -- setup/common.sh@32 -- # continue 00:04:56.556 17:51:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.556 17:51:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.556 17:51:45 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.556 17:51:45 -- setup/common.sh@32 -- # continue 00:04:56.556 17:51:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.556 17:51:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.556 17:51:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.556 17:51:45 -- setup/common.sh@32 -- # continue 00:04:56.556 17:51:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.556 17:51:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.556 17:51:45 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.556 17:51:45 -- setup/common.sh@32 -- # continue 00:04:56.556 17:51:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.556 17:51:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.556 17:51:45 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.556 17:51:45 -- setup/common.sh@32 -- # continue 00:04:56.556 17:51:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.556 17:51:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.556 17:51:45 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.556 17:51:45 -- setup/common.sh@32 -- # continue 00:04:56.556 17:51:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.556 17:51:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.556 17:51:45 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.556 17:51:45 -- setup/common.sh@32 -- # continue 00:04:56.556 17:51:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.556 17:51:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.556 17:51:45 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.556 17:51:45 -- setup/common.sh@32 -- # continue 00:04:56.556 17:51:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.556 17:51:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.556 17:51:45 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.556 17:51:45 -- setup/common.sh@32 -- # continue 00:04:56.556 17:51:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.556 17:51:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.556 17:51:45 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.556 17:51:45 -- setup/common.sh@32 -- # continue 00:04:56.556 17:51:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.556 17:51:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.556 17:51:45 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.556 17:51:45 -- setup/common.sh@32 -- # continue 00:04:56.556 17:51:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.556 17:51:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.556 17:51:45 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.556 17:51:45 -- setup/common.sh@32 -- # continue 00:04:56.556 17:51:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.556 17:51:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.556 17:51:45 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.556 17:51:45 -- setup/common.sh@32 -- # continue 00:04:56.556 17:51:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.556 17:51:45 -- setup/common.sh@31 -- 
# read -r var val _ 00:04:56.556 17:51:45 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.556 17:51:45 -- setup/common.sh@32 -- # continue 00:04:56.556 17:51:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.556 17:51:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.556 17:51:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.556 17:51:45 -- setup/common.sh@32 -- # continue 00:04:56.556 17:51:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.556 17:51:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.556 17:51:45 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.556 17:51:45 -- setup/common.sh@32 -- # continue 00:04:56.556 17:51:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.556 17:51:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.556 17:51:45 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.556 17:51:45 -- setup/common.sh@32 -- # continue 00:04:56.556 17:51:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.556 17:51:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.556 17:51:45 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.556 17:51:45 -- setup/common.sh@32 -- # continue 00:04:56.556 17:51:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.556 17:51:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.556 17:51:45 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.556 17:51:45 -- setup/common.sh@32 -- # continue 00:04:56.556 17:51:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.556 17:51:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.556 17:51:45 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.556 17:51:45 -- setup/common.sh@32 -- # continue 00:04:56.556 17:51:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.556 17:51:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.556 17:51:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.556 17:51:45 -- setup/common.sh@32 -- # continue 00:04:56.556 17:51:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.556 17:51:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.556 17:51:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.556 17:51:45 -- setup/common.sh@32 -- # continue 00:04:56.556 17:51:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.556 17:51:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.556 17:51:45 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.556 17:51:45 -- setup/common.sh@32 -- # continue 00:04:56.556 17:51:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.556 17:51:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.556 17:51:45 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.556 17:51:45 -- setup/common.sh@32 -- # continue 00:04:56.556 17:51:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.556 17:51:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.556 17:51:45 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.556 17:51:45 -- setup/common.sh@32 -- # continue 00:04:56.556 17:51:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.556 17:51:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.815 17:51:45 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.815 17:51:45 -- setup/common.sh@32 -- 
# continue 00:04:56.815 17:51:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.815 17:51:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.815 17:51:45 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.815 17:51:45 -- setup/common.sh@32 -- # continue 00:04:56.815 17:51:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.815 17:51:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.815 17:51:45 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.815 17:51:45 -- setup/common.sh@32 -- # continue 00:04:56.815 17:51:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.815 17:51:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.815 17:51:45 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.815 17:51:45 -- setup/common.sh@32 -- # continue 00:04:56.815 17:51:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.815 17:51:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.815 17:51:45 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.815 17:51:45 -- setup/common.sh@33 -- # echo 0 00:04:56.815 17:51:45 -- setup/common.sh@33 -- # return 0 00:04:56.815 17:51:45 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:56.815 17:51:45 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:56.815 17:51:45 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:56.815 17:51:45 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:56.815 17:51:45 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:56.815 node0=512 expecting 512 00:04:56.815 17:51:45 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:56.815 17:51:45 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:56.815 17:51:45 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:56.815 17:51:45 -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:04:56.815 node1=1024 expecting 1024 00:04:56.815 17:51:45 -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:04:56.815 00:04:56.815 real 0m1.549s 00:04:56.815 user 0m0.643s 00:04:56.815 sys 0m0.876s 00:04:56.815 17:51:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:56.815 17:51:45 -- common/autotest_common.sh@10 -- # set +x 00:04:56.815 ************************************ 00:04:56.815 END TEST custom_alloc 00:04:56.815 ************************************ 00:04:56.815 17:51:45 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:56.815 17:51:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:56.815 17:51:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:56.815 17:51:45 -- common/autotest_common.sh@10 -- # set +x 00:04:56.815 ************************************ 00:04:56.815 START TEST no_shrink_alloc 00:04:56.815 ************************************ 00:04:56.815 17:51:45 -- common/autotest_common.sh@1111 -- # no_shrink_alloc 00:04:56.815 17:51:45 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:56.815 17:51:45 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:56.815 17:51:45 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:56.815 17:51:45 -- setup/hugepages.sh@51 -- # shift 00:04:56.815 17:51:45 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:56.815 17:51:45 -- setup/hugepages.sh@52 -- # local node_ids 00:04:56.815 17:51:45 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:56.815 17:51:45 -- 
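The custom_alloc case just closed: the per-node split matched what the test asked for (`node0=512 expecting 512`, `node1=1024 expecting 1024`), and earlier in the pass the global pool satisfied `HugePages_Total == nr_hugepages + surp + resv` (1536 pages). The no_shrink_alloc case now starting requests a 2097152 kB pool confined to node 0, which `get_test_nr_hugepages` converts to a page count. The arithmetic, assuming the size is kB-denominated and using the 2048 kB `Hugepagesize` the trace reports later:

```bash
# Size-to-pages conversion traced for no_shrink_alloc (values from the log).
size_kb=2097152                        # requested pool: 2 GiB, expressed in kB
hugepagesize_kb=2048                   # Hugepagesize: from /proc/meminfo
echo $(( size_kb / hugepagesize_kb ))  # -> 1024 pages, all placed on node 0
```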
setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:56.815 17:51:45 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:56.815 17:51:45 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:56.815 17:51:45 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:56.815 17:51:45 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:56.815 17:51:45 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:56.815 17:51:45 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:56.815 17:51:45 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:56.815 17:51:45 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:56.815 17:51:45 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:56.815 17:51:45 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:56.815 17:51:45 -- setup/hugepages.sh@73 -- # return 0 00:04:56.815 17:51:45 -- setup/hugepages.sh@198 -- # setup output 00:04:56.815 17:51:45 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:56.815 17:51:45 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:58.190 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:58.190 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:58.190 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:58.190 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:58.190 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:58.190 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:58.191 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:58.191 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:58.191 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:58.191 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:58.191 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:58.191 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:58.191 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:58.191 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:58.191 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:58.191 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:58.191 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:58.191 17:51:47 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:58.191 17:51:47 -- setup/hugepages.sh@89 -- # local node 00:04:58.191 17:51:47 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:58.191 17:51:47 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:58.191 17:51:47 -- setup/hugepages.sh@92 -- # local surp 00:04:58.191 17:51:47 -- setup/hugepages.sh@93 -- # local resv 00:04:58.191 17:51:47 -- setup/hugepages.sh@94 -- # local anon 00:04:58.191 17:51:47 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:58.191 17:51:47 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:58.191 17:51:47 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:58.191 17:51:47 -- setup/common.sh@18 -- # local node= 00:04:58.191 17:51:47 -- setup/common.sh@19 -- # local var val 00:04:58.191 17:51:47 -- setup/common.sh@20 -- # local mem_f mem 00:04:58.191 17:51:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:58.191 17:51:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:58.191 17:51:47 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:58.191 17:51:47 -- setup/common.sh@28 -- # mapfile -t mem 00:04:58.191 17:51:47 -- 
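Two things worth noting in the span above: every PCI device `setup.sh` touches reports `Already using the vfio-pci driver`, so no driver rebinding happens between tests; and `verify_nr_hugepages` opens with a transparent-hugepage gate, `[[ always [madvise] never != *\[\n\e\v\e\r\]* ]]`, which samples `AnonHugePages` only when THP is not globally disabled. The gate written out directly, assuming the standard kernel sysfs path:

```bash
# THP gate as traced: sample AnonHugePages only if THP is not "[never]".
thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)  # e.g. "always [madvise] never"
if [[ $thp != *"[never]"* ]]; then
    grep '^AnonHugePages:' /proc/meminfo   # THP in use could skew the pool math
fi
```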
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:58.453 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.453 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.453 17:51:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026668 kB' 'MemFree: 24175404 kB' 'MemAvailable: 28386412 kB' 'Buffers: 2696 kB' 'Cached: 15004700 kB' 'SwapCached: 0 kB' 'Active: 11770620 kB' 'Inactive: 3691820 kB' 'Active(anon): 11145660 kB' 'Inactive(anon): 0 kB' 'Active(file): 624960 kB' 'Inactive(file): 3691820 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 458292 kB' 'Mapped: 209500 kB' 'Shmem: 10690616 kB' 'KReclaimable: 440584 kB' 'Slab: 824148 kB' 'SReclaimable: 440584 kB' 'SUnreclaim: 383564 kB' 'KernelStack: 12608 kB' 'PageTables: 8552 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353360 kB' 'Committed_AS: 12322160 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197056 kB' 'VmallocChunk: 0 kB' 'Percpu: 55872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1883740 kB' 'DirectMap2M: 19007488 kB' 'DirectMap1G: 31457280 kB' 00:04:58.453 17:51:47 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.453 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.453 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.453 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.453 17:51:47 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.453 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.453 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.453 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.453 17:51:47 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.453 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.453 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.453 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.453 17:51:47 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.453 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.453 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.453 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.453 17:51:47 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.453 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.453 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.453 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.453 17:51:47 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.453 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.453 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.453 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.453 17:51:47 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.453 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.453 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.453 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.453 17:51:47 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:58.453 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.453 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.453 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.453 17:51:47 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.453 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.453 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.453 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.453 17:51:47 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.453 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.453 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.453 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.453 17:51:47 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.453 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.453 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.453 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.453 17:51:47 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.453 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.453 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.453 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.453 17:51:47 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.453 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.453 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.453 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.453 17:51:47 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.453 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.453 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.453 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.453 17:51:47 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.453 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.453 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.453 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.453 17:51:47 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.453 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.453 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.453 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.453 17:51:47 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.453 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.453 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.453 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.453 17:51:47 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.453 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.453 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.453 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.453 17:51:47 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.453 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.453 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.453 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.453 17:51:47 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.453 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.453 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.453 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.453 17:51:47 -- 
setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.453 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.453 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.453 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.453 17:51:47 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.453 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.453 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.453 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.453 17:51:47 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.453 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.454 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.454 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.454 17:51:47 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.454 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.454 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.454 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.454 17:51:47 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.454 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.454 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.454 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.454 17:51:47 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.454 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.454 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.454 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.454 17:51:47 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.454 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.454 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.454 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.454 17:51:47 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.454 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.454 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.454 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.454 17:51:47 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.454 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.454 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.454 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.454 17:51:47 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.454 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.454 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.454 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.454 17:51:47 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.454 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.454 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.454 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.454 17:51:47 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.454 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.454 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.454 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.454 17:51:47 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.454 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.454 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.454 17:51:47 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:58.454 17:51:47 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.454 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.454 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.454 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.454 17:51:47 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.454 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.454 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.454 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.454 17:51:47 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.454 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.454 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.454 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.454 17:51:47 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.454 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.454 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.454 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.454 17:51:47 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.454 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.454 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.454 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.454 17:51:47 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.454 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.454 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.454 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.454 17:51:47 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.454 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.454 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.454 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.454 17:51:47 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.454 17:51:47 -- setup/common.sh@33 -- # echo 0 00:04:58.454 17:51:47 -- setup/common.sh@33 -- # return 0 00:04:58.454 17:51:47 -- setup/hugepages.sh@97 -- # anon=0 00:04:58.454 17:51:47 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:58.454 17:51:47 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:58.454 17:51:47 -- setup/common.sh@18 -- # local node= 00:04:58.454 17:51:47 -- setup/common.sh@19 -- # local var val 00:04:58.454 17:51:47 -- setup/common.sh@20 -- # local mem_f mem 00:04:58.454 17:51:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:58.454 17:51:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:58.454 17:51:47 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:58.454 17:51:47 -- setup/common.sh@28 -- # mapfile -t mem 00:04:58.454 17:51:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:58.454 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.454 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.454 17:51:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026668 kB' 'MemFree: 24176140 kB' 'MemAvailable: 28387148 kB' 'Buffers: 2696 kB' 'Cached: 15004704 kB' 'SwapCached: 0 kB' 'Active: 11770788 kB' 'Inactive: 3691820 kB' 'Active(anon): 11145828 kB' 'Inactive(anon): 0 kB' 'Active(file): 624960 kB' 'Inactive(file): 3691820 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 
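With `anon=0` recorded, the verifier re-reads `/proc/meminfo` twice more, for the global `HugePages_Surp` and `HugePages_Rsvd`; those terms feed the same pool equation the custom_alloc pass used. A condensed sketch of the bookkeeping, reusing the hypothetical `read_meminfo_field` helper from earlier (all three correction terms are 0 in this run):

```bash
# Condensed verifier bookkeeping; names mirror the trace.
nr_hugepages=1024                              # expected pool for this test
anon=$(read_meminfo_field AnonHugePages)       # kB of THP in use (gated above)
surp=$(read_meminfo_field HugePages_Surp)      # pages allocated over the pool
resv=$(read_meminfo_field HugePages_Rsvd)      # pages reserved, not yet faulted
total=$(read_meminfo_field HugePages_Total)
(( total == nr_hugepages + surp + resv )) || echo "hugepage pool mismatch" >&2
```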
'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 458484 kB' 'Mapped: 209500 kB' 'Shmem: 10690620 kB' 'KReclaimable: 440584 kB' 'Slab: 824128 kB' 'SReclaimable: 440584 kB' 'SUnreclaim: 383544 kB' 'KernelStack: 12576 kB' 'PageTables: 8444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353360 kB' 'Committed_AS: 12322172 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197024 kB' 'VmallocChunk: 0 kB' 'Percpu: 55872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1883740 kB' 'DirectMap2M: 19007488 kB' 'DirectMap1G: 31457280 kB' 00:04:58.454 17:51:47 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.454 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.454 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.454 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.454 17:51:47 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.454 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.454 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.454 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.454 17:51:47 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.454 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.454 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.454 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.454 17:51:47 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.454 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.454 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.454 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.454 17:51:47 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.454 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.454 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.454 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.454 17:51:47 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.454 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.454 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.454 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.454 17:51:47 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.454 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.454 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.454 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.454 17:51:47 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.454 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.454 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.454 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.454 17:51:47 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.454 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.454 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.454 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.454 17:51:47 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.454 17:51:47 -- 
setup/common.sh@32 -- # continue 00:04:58.454 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.454 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.454 17:51:47 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.454 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.454 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.454 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.454 17:51:47 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.454 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.454 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.454 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.454 17:51:47 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.454 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.454 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.454 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.454 17:51:47 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.454 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.454 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.454 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.454 17:51:47 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.454 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.455 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.455 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.455 17:51:47 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.455 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.455 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.455 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.455 17:51:47 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.455 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.455 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.455 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.455 17:51:47 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.455 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.455 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.455 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.455 17:51:47 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.455 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.455 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.455 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.455 17:51:47 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.455 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.455 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.455 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.455 17:51:47 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.455 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.455 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.455 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.455 17:51:47 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.455 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.455 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.455 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.455 17:51:47 -- 
setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.455 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.455 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.455 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.455 17:51:47 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.455 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.455 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.455 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.455 17:51:47 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.455 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.455 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.455 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.455 17:51:47 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.455 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.455 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.455 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.455 17:51:47 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.455 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.455 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.455 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.455 17:51:47 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.455 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.455 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.455 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.455 17:51:47 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.455 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.455 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.455 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.455 17:51:47 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.455 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.455 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.455 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.455 17:51:47 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.455 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.455 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.455 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.455 17:51:47 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.455 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.455 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.455 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.455 17:51:47 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.455 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.455 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.455 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.455 17:51:47 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.455 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.455 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.455 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.455 17:51:47 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.455 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.455 17:51:47 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:58.455 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.455 17:51:47 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.455 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.455 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.455 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.455 17:51:47 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.455 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.455 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.455 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.455 17:51:47 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.455 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.455 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.455 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.455 17:51:47 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.455 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.455 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.455 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.455 17:51:47 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.455 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.455 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.455 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.455 17:51:47 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.455 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.455 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.455 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.455 17:51:47 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.455 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.455 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.455 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.455 17:51:47 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.455 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.455 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.455 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.455 17:51:47 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.455 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.455 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.455 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.455 17:51:47 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.455 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.455 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.455 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.456 17:51:47 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.456 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.456 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.456 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.456 17:51:47 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.456 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.456 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.456 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.456 17:51:47 -- setup/common.sh@32 -- # [[ Unaccepted == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.456 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.456 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.456 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.456 17:51:47 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.456 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.456 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.456 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.456 17:51:47 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.456 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.456 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.456 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.456 17:51:47 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.456 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.456 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.456 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.456 17:51:47 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.456 17:51:47 -- setup/common.sh@33 -- # echo 0 00:04:58.456 17:51:47 -- setup/common.sh@33 -- # return 0 00:04:58.456 17:51:47 -- setup/hugepages.sh@99 -- # surp=0 00:04:58.456 17:51:47 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:58.456 17:51:47 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:58.456 17:51:47 -- setup/common.sh@18 -- # local node= 00:04:58.456 17:51:47 -- setup/common.sh@19 -- # local var val 00:04:58.456 17:51:47 -- setup/common.sh@20 -- # local mem_f mem 00:04:58.456 17:51:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:58.456 17:51:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:58.456 17:51:47 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:58.456 17:51:47 -- setup/common.sh@28 -- # mapfile -t mem 00:04:58.456 17:51:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:58.456 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.456 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.456 17:51:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026668 kB' 'MemFree: 24186484 kB' 'MemAvailable: 28397492 kB' 'Buffers: 2696 kB' 'Cached: 15004720 kB' 'SwapCached: 0 kB' 'Active: 11770496 kB' 'Inactive: 3691820 kB' 'Active(anon): 11145536 kB' 'Inactive(anon): 0 kB' 'Active(file): 624960 kB' 'Inactive(file): 3691820 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 458132 kB' 'Mapped: 209496 kB' 'Shmem: 10690636 kB' 'KReclaimable: 440584 kB' 'Slab: 824152 kB' 'SReclaimable: 440584 kB' 'SUnreclaim: 383568 kB' 'KernelStack: 12608 kB' 'PageTables: 8540 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353360 kB' 'Committed_AS: 12322188 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197008 kB' 'VmallocChunk: 0 kB' 'Percpu: 55872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1883740 kB' 'DirectMap2M: 19007488 kB' 'DirectMap1G: 31457280 kB' 00:04:58.456 17:51:47 -- 
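The final read distinguishes the two correction terms: `HugePages_Surp` counts pages allocated beyond the static pool (possible when `nr_overcommit_hugepages` is set), while `HugePages_Rsvd` counts pages a mapping has committed to but not yet faulted in. Both could be pulled in a single pass rather than one full scan per field; a compact alternative, not what the suite itself does:

```bash
# One-pass alternative to the per-field scans (both values are 0 here).
awk '/^HugePages_(Surp|Rsvd):/ { print $1, $2 }' /proc/meminfo
```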
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.456 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.456 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.456 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.456 17:51:47 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.456 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.456 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.456 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.456 17:51:47 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.456 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.456 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.456 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.456 17:51:47 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.456 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.456 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.456 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.456 17:51:47 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.456 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.456 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.456 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.456 17:51:47 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.456 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.456 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.456 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.456 17:51:47 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.456 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.456 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.456 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.456 17:51:47 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.456 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.456 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.456 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.456 17:51:47 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.456 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.456 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.456 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.456 17:51:47 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.456 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.456 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.456 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.456 17:51:47 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.456 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.456 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.456 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.456 17:51:47 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.456 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.456 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.456 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.456 17:51:47 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.456 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.456 17:51:47 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:58.456 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.456 17:51:47 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.456 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.456 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.456 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.456 17:51:47 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.456 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.456 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.456 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.456 17:51:47 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.456 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.456 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.456 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.456 17:51:47 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.456 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.456 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.456 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.456 17:51:47 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.456 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.456 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.456 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.456 17:51:47 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.456 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.456 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.456 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.456 17:51:47 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.456 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.456 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.456 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.456 17:51:47 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.456 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.456 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.456 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.456 17:51:47 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.456 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.456 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.456 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.456 17:51:47 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.456 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.456 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.456 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.457 17:51:47 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.457 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.457 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.457 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.457 17:51:47 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.457 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.457 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.457 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.457 17:51:47 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.457 17:51:47 -- 
setup/common.sh@32 -- # continue 00:04:58.457 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.457 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.457 17:51:47 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.457 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.457 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.457 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.457 17:51:47 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.457 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.457 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.457 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.457 17:51:47 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.457 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.457 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.457 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.457 17:51:47 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.457 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.457 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.457 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.457 17:51:47 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.457 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.457 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.457 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.457 17:51:47 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.457 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.457 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.457 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.457 17:51:47 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.457 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.457 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.457 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.457 17:51:47 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.457 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.457 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.457 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.457 17:51:47 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.457 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.457 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.457 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.457 17:51:47 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.457 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.457 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.457 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.457 17:51:47 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.457 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.457 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.457 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.457 17:51:47 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.457 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.457 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.457 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.457 
17:51:47 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.457 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.457 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.457 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.457 17:51:47 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.457 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.457 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.457 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.457 17:51:47 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.457 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.457 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.457 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.457 17:51:47 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.457 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.457 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.457 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.457 17:51:47 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.457 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.457 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.457 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.457 17:51:47 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.457 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.457 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.457 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.457 17:51:47 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.457 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.457 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.457 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.457 17:51:47 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.457 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.457 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.457 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.457 17:51:47 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.457 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.457 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.457 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.457 17:51:47 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.457 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.457 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.457 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.457 17:51:47 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.457 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.457 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.457 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.457 17:51:47 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.457 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.457 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.457 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.457 17:51:47 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.457 17:51:47 -- setup/common.sh@33 -- # echo 0 00:04:58.457 
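[Editor's note] A side note on the `\H\u\g\e\P\a\g\e\s\_\R\s\v\d` spelling that fills these traces: it is not corruption. Under `set -x`, bash prints the right-hand side of `[[ $var == "$get" ]]` with every character backslash-escaped, to show that the quoted expansion is matched literally rather than as a glob pattern. Reproducible in any interactive bash:

```bash
$ set -x
$ get=HugePages_Rsvd
+ get=HugePages_Rsvd
$ [[ MemTotal == "$get" ]]
+ [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
```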
17:51:47 -- setup/common.sh@33 -- # return 0 00:04:58.457 17:51:47 -- setup/hugepages.sh@100 -- # resv=0 00:04:58.457 17:51:47 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:58.457 nr_hugepages=1024 00:04:58.457 17:51:47 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:58.457 resv_hugepages=0 00:04:58.457 17:51:47 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:58.457 surplus_hugepages=0 00:04:58.457 17:51:47 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:58.457 anon_hugepages=0 00:04:58.457 17:51:47 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:58.457 17:51:47 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:58.457 17:51:47 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:58.457 17:51:47 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:58.457 17:51:47 -- setup/common.sh@18 -- # local node= 00:04:58.457 17:51:47 -- setup/common.sh@19 -- # local var val 00:04:58.457 17:51:47 -- setup/common.sh@20 -- # local mem_f mem 00:04:58.457 17:51:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:58.457 17:51:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:58.457 17:51:47 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:58.457 17:51:47 -- setup/common.sh@28 -- # mapfile -t mem 00:04:58.457 17:51:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:58.457 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.457 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.457 17:51:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026668 kB' 'MemFree: 24186880 kB' 'MemAvailable: 28397888 kB' 'Buffers: 2696 kB' 'Cached: 15004736 kB' 'SwapCached: 0 kB' 'Active: 11770520 kB' 'Inactive: 3691820 kB' 'Active(anon): 11145560 kB' 'Inactive(anon): 0 kB' 'Active(file): 624960 kB' 'Inactive(file): 3691820 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 458160 kB' 'Mapped: 209496 kB' 'Shmem: 10690652 kB' 'KReclaimable: 440584 kB' 'Slab: 824136 kB' 'SReclaimable: 440584 kB' 'SUnreclaim: 383552 kB' 'KernelStack: 12624 kB' 'PageTables: 8592 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353360 kB' 'Committed_AS: 12322204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197008 kB' 'VmallocChunk: 0 kB' 'Percpu: 55872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1883740 kB' 'DirectMap2M: 19007488 kB' 'DirectMap1G: 31457280 kB' 00:04:58.457 17:51:47 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.457 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.457 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.458 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.458 17:51:47 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.458 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.458 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.458 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.458 17:51:47 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
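[Editor's note] The checkpoint just logged (hugepages.sh@107-110) is the core assertion of this test: the HugePages_Total the kernel reports must equal the sum of the requested pages (nr_hugepages), surplus pages, and reserved pages. With surp=0 and resv=0 from the two scans above, the arithmetic is simply 1024 == 1024 + 0 + 0. A sketch of the same guard, reusing the get_meminfo sketch from the earlier note:

```bash
# The identity asserted at hugepages.sh@107 in the trace, with the
# values this run produced (1024 total, 0 surplus, 0 reserved).
nr_hugepages=1024
surp=$(get_meminfo HugePages_Surp)    # -> 0
resv=$(get_meminfo HugePages_Rsvd)    # -> 0
total=$(get_meminfo HugePages_Total)  # -> 1024

(( total == nr_hugepages + surp + resv )) || {
        echo "hugepage accounting mismatch: $total != $nr_hugepages + $surp + $resv" >&2
        exit 1
}
```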
00:04:58.458 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.458 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.458 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.458 17:51:47 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.458 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.458 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.458 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.458 17:51:47 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.458 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.458 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.458 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.458 17:51:47 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.458 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.458 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.458 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.458 17:51:47 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.458 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.458 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.458 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.458 17:51:47 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.458 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.458 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.458 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.458 17:51:47 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.458 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.458 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.458 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.458 17:51:47 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.458 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.458 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.458 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.458 17:51:47 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.458 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.458 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.458 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.458 17:51:47 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.458 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.458 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.458 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.458 17:51:47 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.458 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.458 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.458 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.458 17:51:47 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.458 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.458 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.458 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.458 17:51:47 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.458 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.458 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.458 17:51:47 -- setup/common.sh@31 -- 
# read -r var val _ 00:04:58.458 17:51:47 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.458 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.458 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.458 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.458 17:51:47 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.458 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.458 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.458 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.458 17:51:47 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.458 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.458 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.458 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.458 17:51:47 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.458 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.458 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.458 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.458 17:51:47 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.458 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.458 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.458 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.458 17:51:47 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.458 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.458 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.458 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.458 17:51:47 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.458 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.458 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.458 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.458 17:51:47 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.458 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.458 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.458 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.458 17:51:47 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.458 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.458 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.458 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.458 17:51:47 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.458 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.458 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.458 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.458 17:51:47 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.458 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.458 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.458 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.458 17:51:47 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.458 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.458 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.458 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.458 17:51:47 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.458 17:51:47 -- setup/common.sh@32 -- # continue 
00:04:58.458 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.458 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.458 17:51:47 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.458 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.458 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.458 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.458 17:51:47 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.458 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.458 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.458 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.458 17:51:47 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.458 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.458 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.458 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.458 17:51:47 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.458 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.458 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.458 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.458 17:51:47 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.458 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.458 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.458 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.458 17:51:47 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.458 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.458 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.458 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.458 17:51:47 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.458 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.458 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.458 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.458 17:51:47 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.458 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.458 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.458 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.458 17:51:47 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.458 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.458 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.458 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.458 17:51:47 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.458 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.458 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.458 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.458 17:51:47 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.458 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.458 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.458 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.458 17:51:47 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.458 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.458 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.458 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.458 
17:51:47 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.458 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.458 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.459 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.459 17:51:47 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.459 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.459 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.459 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.459 17:51:47 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.459 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.459 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.459 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.459 17:51:47 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.459 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.459 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.459 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.459 17:51:47 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.459 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.459 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.459 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.459 17:51:47 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.459 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.459 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.459 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.459 17:51:47 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.459 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.459 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.459 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.459 17:51:47 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.459 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.459 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.459 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.459 17:51:47 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.459 17:51:47 -- setup/common.sh@33 -- # echo 1024 00:04:58.459 17:51:47 -- setup/common.sh@33 -- # return 0 00:04:58.459 17:51:47 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:58.459 17:51:47 -- setup/hugepages.sh@112 -- # get_nodes 00:04:58.459 17:51:47 -- setup/hugepages.sh@27 -- # local node 00:04:58.459 17:51:47 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:58.459 17:51:47 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:58.459 17:51:47 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:58.459 17:51:47 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:58.459 17:51:47 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:58.459 17:51:47 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:58.459 17:51:47 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:58.459 17:51:47 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:58.459 17:51:47 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:58.459 17:51:47 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:58.459 17:51:47 
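[Editor's note] get_nodes, traced just above, discovers the NUMA topology by globbing /sys/devices/system/node/node+([0-9]) (an extglob pattern) and peels the index off each path with ${node##*node}; on this box node0 holds all 1024 pages and node1 holds none, hence nodes_sys[0]=1024, nodes_sys[1]=0, no_nodes=2. The per-node HugePages_Surp lookup that continues below then points get_meminfo at node0's sysfs meminfo. A sketch of the enumeration, assuming the standard 2 MB sysfs knob as the count source (the trace assigns the value directly, so the exact source in setup/hugepages.sh may differ):

```bash
shopt -s extglob
declare -a nodes_sys=()

for node in /sys/devices/system/node/node+([0-9]); do
        # "${node##*node}" keeps only the numeric suffix (node0 -> 0).
        idx=${node##*node}
        # Hypothetical count source: the per-node 2 MB hugepage knob.
        nodes_sys[idx]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
done

echo "no_nodes=${#nodes_sys[@]} (${!nodes_sys[*]})"   # -> no_nodes=2 (0 1)
```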
-- setup/common.sh@18 -- # local node=0 00:04:58.459 17:51:47 -- setup/common.sh@19 -- # local var val 00:04:58.459 17:51:47 -- setup/common.sh@20 -- # local mem_f mem 00:04:58.459 17:51:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:58.459 17:51:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:58.459 17:51:47 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:58.459 17:51:47 -- setup/common.sh@28 -- # mapfile -t mem 00:04:58.459 17:51:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:58.459 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.459 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.459 17:51:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24572356 kB' 'MemFree: 17625724 kB' 'MemUsed: 6946632 kB' 'SwapCached: 0 kB' 'Active: 3776212 kB' 'Inactive: 167944 kB' 'Active(anon): 3445268 kB' 'Inactive(anon): 0 kB' 'Active(file): 330944 kB' 'Inactive(file): 167944 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3711200 kB' 'Mapped: 165484 kB' 'AnonPages: 236156 kB' 'Shmem: 3212312 kB' 'KernelStack: 7512 kB' 'PageTables: 3972 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 294568 kB' 'Slab: 484360 kB' 'SReclaimable: 294568 kB' 'SUnreclaim: 189792 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:58.459 17:51:47 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.459 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.459 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.459 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.459 17:51:47 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.459 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.459 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.459 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.459 17:51:47 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.459 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.459 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.459 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.459 17:51:47 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.459 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.459 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.459 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.459 17:51:47 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.459 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.459 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.459 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.459 17:51:47 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.459 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.459 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.459 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.459 17:51:47 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.459 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.459 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.459 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.459 17:51:47 -- 
setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.459 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.459 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.459 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.459 17:51:47 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.459 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.459 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.459 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.459 17:51:47 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.459 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.459 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.459 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.459 17:51:47 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.459 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.459 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.459 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.459 17:51:47 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.459 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.459 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.459 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.459 17:51:47 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.459 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.459 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.459 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.459 17:51:47 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.459 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.459 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.459 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.459 17:51:47 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.459 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.459 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.459 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.459 17:51:47 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.459 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.459 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.459 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.459 17:51:47 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.459 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.459 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.459 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.459 17:51:47 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.459 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.459 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.459 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.459 17:51:47 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.459 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.459 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.459 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.459 17:51:47 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.459 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.459 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 
00:04:58.459 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.459 17:51:47 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.459 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.459 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.459 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.459 17:51:47 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.459 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.459 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.459 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.460 17:51:47 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.460 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.460 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.460 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.460 17:51:47 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.460 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.460 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.460 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.460 17:51:47 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.460 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.460 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.460 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.460 17:51:47 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.460 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.460 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.460 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.460 17:51:47 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.460 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.460 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.460 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.460 17:51:47 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.460 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.460 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.460 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.460 17:51:47 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.460 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.460 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.460 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.460 17:51:47 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.460 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.460 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.460 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.460 17:51:47 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.460 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.460 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.460 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.460 17:51:47 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.460 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.460 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.460 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.460 17:51:47 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:58.460 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.460 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.460 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.460 17:51:47 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.460 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.460 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.460 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.460 17:51:47 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.460 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.460 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.460 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.460 17:51:47 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.460 17:51:47 -- setup/common.sh@32 -- # continue 00:04:58.460 17:51:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.460 17:51:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.460 17:51:47 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.460 17:51:47 -- setup/common.sh@33 -- # echo 0 00:04:58.460 17:51:47 -- setup/common.sh@33 -- # return 0 00:04:58.460 17:51:47 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:58.460 17:51:47 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:58.460 17:51:47 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:58.460 17:51:47 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:58.460 17:51:47 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:58.460 node0=1024 expecting 1024 00:04:58.460 17:51:47 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:58.460 17:51:47 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:58.460 17:51:47 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:58.460 17:51:47 -- setup/hugepages.sh@202 -- # setup output 00:04:58.460 17:51:47 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:58.460 17:51:47 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:59.837 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:59.837 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:59.837 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:59.837 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:59.837 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:59.837 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:59.837 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:59.837 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:59.837 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:59.837 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:59.837 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:59.837 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:59.837 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:59.837 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:59.837 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:59.837 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:59.837 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:59.837 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:59.837 17:51:48 -- setup/hugepages.sh@204 -- # 
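[Editor's note] The INFO line above is the interesting outcome of this re-run: with CLEAR_HUGE=no, setup.sh is asked for 512 hugepages (NRHUGE=512) but finds 1024 already allocated on node0 and leaves the pool alone rather than shrinking it. A minimal sketch of that kind of idempotent top-up; the message matches the log, but the mechanism is an assumption and the actual logic lives in spdk/scripts/setup.sh:

```bash
NRHUGE=${NRHUGE:-512}
nr_file=/sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages

current=$(< "$nr_file")
if (( current >= NRHUGE )); then
        echo "INFO: Requested $NRHUGE hugepages but $current already allocated on node0"
else
        # Top up, never shrink, when CLEAR_HUGE=no (writing requires root).
        echo "$NRHUGE" > "$nr_file"
fi
```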
verify_nr_hugepages 00:04:59.837 17:51:48 -- setup/hugepages.sh@89 -- # local node 00:04:59.837 17:51:48 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:59.837 17:51:48 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:59.837 17:51:48 -- setup/hugepages.sh@92 -- # local surp 00:04:59.837 17:51:48 -- setup/hugepages.sh@93 -- # local resv 00:04:59.837 17:51:48 -- setup/hugepages.sh@94 -- # local anon 00:04:59.837 17:51:48 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:59.837 17:51:48 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:59.837 17:51:48 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:59.837 17:51:48 -- setup/common.sh@18 -- # local node= 00:04:59.837 17:51:48 -- setup/common.sh@19 -- # local var val 00:04:59.837 17:51:48 -- setup/common.sh@20 -- # local mem_f mem 00:04:59.837 17:51:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:59.837 17:51:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:59.837 17:51:48 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:59.837 17:51:48 -- setup/common.sh@28 -- # mapfile -t mem 00:04:59.837 17:51:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:59.837 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.837 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.837 17:51:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026668 kB' 'MemFree: 24186504 kB' 'MemAvailable: 28397512 kB' 'Buffers: 2696 kB' 'Cached: 15004784 kB' 'SwapCached: 0 kB' 'Active: 11770812 kB' 'Inactive: 3691820 kB' 'Active(anon): 11145852 kB' 'Inactive(anon): 0 kB' 'Active(file): 624960 kB' 'Inactive(file): 3691820 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 458324 kB' 'Mapped: 209520 kB' 'Shmem: 10690700 kB' 'KReclaimable: 440584 kB' 'Slab: 824076 kB' 'SReclaimable: 440584 kB' 'SUnreclaim: 383492 kB' 'KernelStack: 12608 kB' 'PageTables: 8552 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353360 kB' 'Committed_AS: 12322060 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197088 kB' 'VmallocChunk: 0 kB' 'Percpu: 55872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1883740 kB' 'DirectMap2M: 19007488 kB' 'DirectMap1G: 31457280 kB' 00:04:59.837 17:51:48 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.837 17:51:48 -- setup/common.sh@32 -- # continue 00:04:59.837 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.837 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.837 17:51:48 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.837 17:51:48 -- setup/common.sh@32 -- # continue 00:04:59.837 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.837 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.837 17:51:48 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.837 17:51:48 -- setup/common.sh@32 -- # continue 00:04:59.837 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.837 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.837 17:51:48 -- 
setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.837 17:51:48 -- setup/common.sh@32 -- # continue 00:04:59.837 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.837 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.837 17:51:48 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.837 17:51:48 -- setup/common.sh@32 -- # continue 00:04:59.837 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.837 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.837 17:51:48 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.837 17:51:48 -- setup/common.sh@32 -- # continue 00:04:59.837 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.837 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.837 17:51:48 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.837 17:51:48 -- setup/common.sh@32 -- # continue 00:04:59.837 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.837 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.837 17:51:48 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.837 17:51:48 -- setup/common.sh@32 -- # continue 00:04:59.837 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.837 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.837 17:51:48 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.837 17:51:48 -- setup/common.sh@32 -- # continue 00:04:59.837 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.837 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.837 17:51:48 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.837 17:51:48 -- setup/common.sh@32 -- # continue 00:04:59.837 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.837 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.837 17:51:48 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.837 17:51:48 -- setup/common.sh@32 -- # continue 00:04:59.837 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.837 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.837 17:51:48 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.837 17:51:48 -- setup/common.sh@32 -- # continue 00:04:59.837 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.837 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.837 17:51:48 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.837 17:51:48 -- setup/common.sh@32 -- # continue 00:04:59.837 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.837 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.837 17:51:48 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.837 17:51:48 -- setup/common.sh@32 -- # continue 00:04:59.837 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.837 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.837 17:51:48 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.837 17:51:48 -- setup/common.sh@32 -- # continue 00:04:59.837 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.837 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.837 17:51:48 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.837 17:51:48 -- setup/common.sh@32 -- # continue 00:04:59.837 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.837 17:51:48 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:59.837 17:51:48 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.837 17:51:48 -- setup/common.sh@32 -- # continue 00:04:59.837 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.837 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.837 17:51:48 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.837 17:51:48 -- setup/common.sh@32 -- # continue 00:04:59.837 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.837 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.837 17:51:48 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.837 17:51:48 -- setup/common.sh@32 -- # continue 00:04:59.837 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.837 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.837 17:51:48 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.837 17:51:48 -- setup/common.sh@32 -- # continue 00:04:59.837 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.837 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.837 17:51:48 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.837 17:51:48 -- setup/common.sh@32 -- # continue 00:04:59.837 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.837 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.837 17:51:48 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.837 17:51:48 -- setup/common.sh@32 -- # continue 00:04:59.837 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.837 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.837 17:51:48 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.837 17:51:48 -- setup/common.sh@32 -- # continue 00:04:59.837 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.837 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.838 17:51:48 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.838 17:51:48 -- setup/common.sh@32 -- # continue 00:04:59.838 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.838 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.838 17:51:48 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.838 17:51:48 -- setup/common.sh@32 -- # continue 00:04:59.838 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.838 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.838 17:51:48 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.838 17:51:48 -- setup/common.sh@32 -- # continue 00:04:59.838 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.838 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.838 17:51:48 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.838 17:51:48 -- setup/common.sh@32 -- # continue 00:04:59.838 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.838 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.838 17:51:48 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.838 17:51:48 -- setup/common.sh@32 -- # continue 00:04:59.838 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.838 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.838 17:51:48 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.838 17:51:48 -- setup/common.sh@32 -- # continue 00:04:59.838 17:51:48 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:59.838 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.838 17:51:48 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.838 17:51:48 -- setup/common.sh@32 -- # continue 00:04:59.838 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.838 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.838 17:51:48 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.838 17:51:48 -- setup/common.sh@32 -- # continue 00:04:59.838 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.838 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.838 17:51:48 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.838 17:51:48 -- setup/common.sh@32 -- # continue 00:04:59.838 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.838 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.838 17:51:48 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.838 17:51:48 -- setup/common.sh@32 -- # continue 00:04:59.838 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.838 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.838 17:51:48 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.838 17:51:48 -- setup/common.sh@32 -- # continue 00:04:59.838 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.838 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.838 17:51:48 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.838 17:51:48 -- setup/common.sh@32 -- # continue 00:04:59.838 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.838 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.838 17:51:48 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.838 17:51:48 -- setup/common.sh@32 -- # continue 00:04:59.838 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.838 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.838 17:51:48 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.838 17:51:48 -- setup/common.sh@32 -- # continue 00:04:59.838 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.838 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.838 17:51:48 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.838 17:51:48 -- setup/common.sh@32 -- # continue 00:04:59.838 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.838 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.838 17:51:48 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.838 17:51:48 -- setup/common.sh@32 -- # continue 00:04:59.838 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.838 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.838 17:51:48 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.838 17:51:48 -- setup/common.sh@32 -- # continue 00:04:59.838 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.838 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.838 17:51:48 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.838 17:51:48 -- setup/common.sh@33 -- # echo 0 00:04:59.838 17:51:48 -- setup/common.sh@33 -- # return 0 00:04:59.838 17:51:48 -- setup/hugepages.sh@97 -- # anon=0 00:04:59.838 17:51:48 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:59.838 
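[Editor's note] The anon=0 just computed closes the transparent-hugepage branch opened at hugepages.sh@96: the string "always [madvise] never" tested there is the contents of /sys/kernel/mm/transparent_hugepage/enabled, where brackets mark the active mode. Because the active mode is not [never], the script goes on to sample AnonHugePages, which is 0 kB on this host. The same gate, spelled out with the get_meminfo sketch from the earlier note:

```bash
thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)  # e.g. "always [madvise] never"

if [[ $thp != *"[never]"* ]]; then
        # THP is available (here via madvise); check whether any anonymous
        # THP memory is actually mapped right now.
        anon=$(get_meminfo AnonHugePages)   # -> 0 on this run
else
        anon=0
fi
echo "anon_hugepages=$anon"
```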
17:51:48 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:59.838 17:51:48 -- setup/common.sh@18 -- # local node= 00:04:59.838 17:51:48 -- setup/common.sh@19 -- # local var val 00:04:59.838 17:51:48 -- setup/common.sh@20 -- # local mem_f mem 00:04:59.838 17:51:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:59.838 17:51:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:59.838 17:51:48 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:59.838 17:51:48 -- setup/common.sh@28 -- # mapfile -t mem 00:04:59.838 17:51:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:59.838 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.838 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.838 17:51:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026668 kB' 'MemFree: 24186700 kB' 'MemAvailable: 28397708 kB' 'Buffers: 2696 kB' 'Cached: 15004784 kB' 'SwapCached: 0 kB' 'Active: 11770464 kB' 'Inactive: 3691820 kB' 'Active(anon): 11145504 kB' 'Inactive(anon): 0 kB' 'Active(file): 624960 kB' 'Inactive(file): 3691820 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 458084 kB' 'Mapped: 209504 kB' 'Shmem: 10690700 kB' 'KReclaimable: 440584 kB' 'Slab: 824044 kB' 'SReclaimable: 440584 kB' 'SUnreclaim: 383460 kB' 'KernelStack: 12624 kB' 'PageTables: 8576 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353360 kB' 'Committed_AS: 12322072 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197056 kB' 'VmallocChunk: 0 kB' 'Percpu: 55872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1883740 kB' 'DirectMap2M: 19007488 kB' 'DirectMap1G: 31457280 kB' 00:04:59.838 17:51:48 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.838 17:51:48 -- setup/common.sh@32 -- # continue 00:04:59.838 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.838 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.838 17:51:48 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.838 17:51:48 -- setup/common.sh@32 -- # continue 00:04:59.838 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.838 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.838 17:51:48 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.838 17:51:48 -- setup/common.sh@32 -- # continue 00:04:59.838 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.838 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.838 17:51:48 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.838 17:51:48 -- setup/common.sh@32 -- # continue 00:04:59.838 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.838 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.838 17:51:48 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.838 17:51:48 -- setup/common.sh@32 -- # continue 00:04:59.838 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.838 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.838 17:51:48 -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.838 17:51:48 -- setup/common.sh@32 -- # continue 00:04:59.838 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.838 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.838 17:51:48 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.838 17:51:48 -- setup/common.sh@32 -- # continue 00:04:59.838 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.838 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.838 17:51:48 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.838 17:51:48 -- setup/common.sh@32 -- # continue 00:04:59.838 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.838 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.838 17:51:48 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.838 17:51:48 -- setup/common.sh@32 -- # continue 00:04:59.838 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.838 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.838 17:51:48 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.838 17:51:48 -- setup/common.sh@32 -- # continue 00:04:59.838 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.838 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.838 17:51:48 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.838 17:51:48 -- setup/common.sh@32 -- # continue 00:04:59.838 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.838 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.838 17:51:48 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.838 17:51:48 -- setup/common.sh@32 -- # continue 00:04:59.838 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.838 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.838 17:51:48 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.838 17:51:48 -- setup/common.sh@32 -- # continue 00:04:59.838 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.838 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.838 17:51:48 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.838 17:51:48 -- setup/common.sh@32 -- # continue 00:04:59.838 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.838 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.838 17:51:48 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.838 17:51:48 -- setup/common.sh@32 -- # continue 00:04:59.838 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.838 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.838 17:51:48 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.838 17:51:48 -- setup/common.sh@32 -- # continue 00:04:59.838 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.838 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.838 17:51:48 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.838 17:51:48 -- setup/common.sh@32 -- # continue 00:04:59.838 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.838 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.838 17:51:48 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.838 17:51:48 -- setup/common.sh@32 -- # continue 00:04:59.838 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.838 17:51:48 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:59.838 17:51:48 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.838 17:51:48 -- setup/common.sh@32 -- # continue 00:04:59.838 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.838 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.838 17:51:48 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.838 17:51:48 -- setup/common.sh@32 -- # continue 00:04:59.838 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.838 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.838 17:51:48 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.838 17:51:48 -- setup/common.sh@32 -- # continue 00:04:59.838 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.838 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.838 17:51:48 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.838 17:51:48 -- setup/common.sh@32 -- # continue 00:04:59.838 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.838 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.838 17:51:48 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.838 17:51:48 -- setup/common.sh@32 -- # continue 00:04:59.838 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.838 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.838 17:51:48 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.838 17:51:48 -- setup/common.sh@32 -- # continue 00:04:59.838 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.838 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.838 17:51:48 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.838 17:51:48 -- setup/common.sh@32 -- # continue 00:04:59.838 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.838 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.838 17:51:48 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.838 17:51:48 -- setup/common.sh@32 -- # continue 00:04:59.838 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.838 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.838 17:51:48 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.838 17:51:48 -- setup/common.sh@32 -- # continue 00:04:59.838 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.838 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.838 17:51:48 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.838 17:51:48 -- setup/common.sh@32 -- # continue 00:04:59.838 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.838 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.838 17:51:48 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.838 17:51:48 -- setup/common.sh@32 -- # continue 00:04:59.838 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.839 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.839 17:51:48 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.839 17:51:48 -- setup/common.sh@32 -- # continue 00:04:59.839 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.839 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.839 17:51:48 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.839 17:51:48 -- setup/common.sh@32 -- # 
continue 00:04:59.839 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.839 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.839 17:51:48 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.839 17:51:48 -- setup/common.sh@32 -- # continue 00:04:59.839 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.839 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.839 17:51:48 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.839 17:51:48 -- setup/common.sh@32 -- # continue 00:04:59.839 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.839 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.839 17:51:48 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.839 17:51:48 -- setup/common.sh@32 -- # continue 00:04:59.839 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.839 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.839 17:51:48 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.839 17:51:48 -- setup/common.sh@32 -- # continue 00:04:59.839 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.839 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.839 17:51:48 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.839 17:51:48 -- setup/common.sh@32 -- # continue 00:04:59.839 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.839 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.839 17:51:48 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.839 17:51:48 -- setup/common.sh@32 -- # continue 00:04:59.839 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.839 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.839 17:51:48 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.839 17:51:48 -- setup/common.sh@32 -- # continue 00:04:59.839 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.839 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.839 17:51:48 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.839 17:51:48 -- setup/common.sh@32 -- # continue 00:04:59.839 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.839 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.839 17:51:48 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.839 17:51:48 -- setup/common.sh@32 -- # continue 00:04:59.839 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.839 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.839 17:51:48 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.839 17:51:48 -- setup/common.sh@32 -- # continue 00:04:59.839 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.839 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.839 17:51:48 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.839 17:51:48 -- setup/common.sh@32 -- # continue 00:04:59.839 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.839 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.839 17:51:48 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.839 17:51:48 -- setup/common.sh@32 -- # continue 00:04:59.839 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.839 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.839 17:51:48 -- 
setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.839 17:51:48 -- setup/common.sh@32 -- # continue 00:04:59.839 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.839 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.839 17:51:48 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.839 17:51:48 -- setup/common.sh@32 -- # continue 00:04:59.839 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.839 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.839 17:51:48 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.839 17:51:48 -- setup/common.sh@32 -- # continue 00:04:59.839 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.839 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.839 17:51:48 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.839 17:51:48 -- setup/common.sh@32 -- # continue 00:04:59.839 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.839 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.839 17:51:48 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.839 17:51:48 -- setup/common.sh@32 -- # continue 00:04:59.839 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.839 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.839 17:51:48 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.839 17:51:48 -- setup/common.sh@32 -- # continue 00:04:59.839 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.839 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.839 17:51:48 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.839 17:51:48 -- setup/common.sh@32 -- # continue 00:04:59.839 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.839 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.839 17:51:48 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.839 17:51:48 -- setup/common.sh@32 -- # continue 00:04:59.839 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.839 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.839 17:51:48 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.839 17:51:48 -- setup/common.sh@33 -- # echo 0 00:04:59.839 17:51:48 -- setup/common.sh@33 -- # return 0 00:04:59.839 17:51:48 -- setup/hugepages.sh@99 -- # surp=0 00:04:59.839 17:51:48 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:59.839 17:51:48 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:59.839 17:51:48 -- setup/common.sh@18 -- # local node= 00:04:59.839 17:51:48 -- setup/common.sh@19 -- # local var val 00:04:59.839 17:51:48 -- setup/common.sh@20 -- # local mem_f mem 00:04:59.839 17:51:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:59.839 17:51:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:59.839 17:51:48 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:59.839 17:51:48 -- setup/common.sh@28 -- # mapfile -t mem 00:04:59.839 17:51:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:59.839 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.839 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.839 17:51:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026668 kB' 'MemFree: 24186452 kB' 'MemAvailable: 28397460 kB' 'Buffers: 2696 kB' 'Cached: 15004800 kB' 'SwapCached: 0 kB' 
'Active: 11770656 kB' 'Inactive: 3691820 kB' 'Active(anon): 11145696 kB' 'Inactive(anon): 0 kB' 'Active(file): 624960 kB' 'Inactive(file): 3691820 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 458264 kB' 'Mapped: 209504 kB' 'Shmem: 10690716 kB' 'KReclaimable: 440584 kB' 'Slab: 824148 kB' 'SReclaimable: 440584 kB' 'SUnreclaim: 383564 kB' 'KernelStack: 12624 kB' 'PageTables: 8608 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353360 kB' 'Committed_AS: 12322088 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197040 kB' 'VmallocChunk: 0 kB' 'Percpu: 55872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1883740 kB' 'DirectMap2M: 19007488 kB' 'DirectMap1G: 31457280 kB' 00:04:59.839 17:51:48 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.839 17:51:48 -- setup/common.sh@32 -- # continue 00:04:59.839 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.839 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.839 17:51:48 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.839 17:51:48 -- setup/common.sh@32 -- # continue 00:04:59.839 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.839 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.839 17:51:48 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.839 17:51:48 -- setup/common.sh@32 -- # continue 00:04:59.839 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.839 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.839 17:51:48 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.839 17:51:48 -- setup/common.sh@32 -- # continue 00:04:59.839 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.839 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.839 17:51:48 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.839 17:51:48 -- setup/common.sh@32 -- # continue 00:04:59.839 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.839 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.839 17:51:48 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.839 17:51:48 -- setup/common.sh@32 -- # continue 00:04:59.839 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.839 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.839 17:51:48 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.839 17:51:48 -- setup/common.sh@32 -- # continue 00:04:59.839 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.839 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.839 17:51:48 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.839 17:51:48 -- setup/common.sh@32 -- # continue 00:04:59.839 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.839 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.839 17:51:48 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.839 17:51:48 -- setup/common.sh@32 -- # continue 00:04:59.839 17:51:48 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:59.839 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.839 17:51:48 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.839 17:51:48 -- setup/common.sh@32 -- # continue 00:04:59.839 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.839 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.839 17:51:48 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.839 17:51:48 -- setup/common.sh@32 -- # continue 00:04:59.839 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.839 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.839 17:51:48 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.839 17:51:48 -- setup/common.sh@32 -- # continue 00:04:59.839 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.839 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.839 17:51:48 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.839 17:51:48 -- setup/common.sh@32 -- # continue 00:04:59.839 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.839 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.839 17:51:48 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.839 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.101 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.101 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.101 17:51:48 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.101 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.101 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.101 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.101 17:51:48 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.101 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.101 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.101 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.101 17:51:48 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.101 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.101 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.101 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.101 17:51:48 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.101 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.101 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.101 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.101 17:51:48 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.101 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.101 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.101 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.101 17:51:48 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.101 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.101 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.101 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.101 17:51:48 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.101 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.101 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.101 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.101 17:51:48 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:05:00.101 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.101 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.101 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.101 17:51:48 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.101 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.101 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.101 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.101 17:51:48 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.101 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.101 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.101 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.101 17:51:48 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.101 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.101 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.101 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.101 17:51:48 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.101 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.101 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.101 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.101 17:51:48 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.101 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.101 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.101 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.101 17:51:48 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.101 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.101 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.101 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.101 17:51:48 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.101 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.101 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.101 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.101 17:51:48 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.101 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.101 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.101 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.101 17:51:48 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.101 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.101 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.101 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.101 17:51:48 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.101 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.101 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.101 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.101 17:51:48 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.101 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.101 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.101 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.101 17:51:48 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.101 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.101 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.102 17:51:48 -- setup/common.sh@31 -- # read -r var val 
_ 00:05:00.102 17:51:48 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.102 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.102 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.102 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.102 17:51:48 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.102 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.102 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.102 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.102 17:51:48 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.102 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.102 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.102 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.102 17:51:48 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.102 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.102 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.102 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.102 17:51:48 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.102 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.102 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.102 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.102 17:51:48 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.102 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.102 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.102 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.102 17:51:48 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.102 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.102 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.102 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.102 17:51:48 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.102 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.102 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.102 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.102 17:51:48 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.102 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.102 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.102 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.102 17:51:48 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.102 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.102 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.102 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.102 17:51:48 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.102 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.102 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.102 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.102 17:51:48 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.102 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.102 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.102 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.102 17:51:48 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.102 17:51:48 -- setup/common.sh@32 -- # continue 
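[editor's note] An aside for anyone reading these traces: runs of backslashes such as \H\u\g\e\P\a\g\e\s\_\R\s\v\d are not corruption. Under `set -x`, bash re-prints a quoted right-hand side of a `[[ == ]]` comparison with every character escaped, to show it is matched literally rather than as a glob. A short demo in any recent bash:

```bash
set -x
get=HugePages_Rsvd
[[ HugePages_Rsvd == "$get" ]] && echo matched
# xtrace prints: + [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
set +x
```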
00:05:00.102 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.102 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.102 17:51:48 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.102 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.102 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.102 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.102 17:51:48 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.102 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.102 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.102 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.102 17:51:48 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.102 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.102 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.102 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.102 17:51:48 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.102 17:51:48 -- setup/common.sh@33 -- # echo 0 00:05:00.102 17:51:48 -- setup/common.sh@33 -- # return 0 00:05:00.102 17:51:48 -- setup/hugepages.sh@100 -- # resv=0 00:05:00.102 17:51:48 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:00.102 nr_hugepages=1024 00:05:00.102 17:51:48 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:00.102 resv_hugepages=0 00:05:00.102 17:51:48 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:00.102 surplus_hugepages=0 00:05:00.102 17:51:48 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:00.102 anon_hugepages=0 00:05:00.102 17:51:48 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:00.102 17:51:48 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:00.102 17:51:48 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:00.102 17:51:48 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:00.102 17:51:48 -- setup/common.sh@18 -- # local node= 00:05:00.102 17:51:48 -- setup/common.sh@19 -- # local var val 00:05:00.102 17:51:48 -- setup/common.sh@20 -- # local mem_f mem 00:05:00.102 17:51:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:00.102 17:51:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:00.102 17:51:48 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:00.102 17:51:48 -- setup/common.sh@28 -- # mapfile -t mem 00:05:00.102 17:51:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:00.102 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.102 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.102 17:51:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026668 kB' 'MemFree: 24186512 kB' 'MemAvailable: 28397520 kB' 'Buffers: 2696 kB' 'Cached: 15004816 kB' 'SwapCached: 0 kB' 'Active: 11770676 kB' 'Inactive: 3691820 kB' 'Active(anon): 11145716 kB' 'Inactive(anon): 0 kB' 'Active(file): 624960 kB' 'Inactive(file): 3691820 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 458268 kB' 'Mapped: 209504 kB' 'Shmem: 10690732 kB' 'KReclaimable: 440584 kB' 'Slab: 824148 kB' 'SReclaimable: 440584 kB' 'SUnreclaim: 383564 kB' 'KernelStack: 12624 kB' 'PageTables: 8608 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353360 kB' 'Committed_AS: 12322104 kB' 'VmallocTotal: 
34359738367 kB' 'VmallocUsed: 197040 kB' 'VmallocChunk: 0 kB' 'Percpu: 55872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1883740 kB' 'DirectMap2M: 19007488 kB' 'DirectMap1G: 31457280 kB' 00:05:00.102 17:51:48 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.102 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.102 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.102 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.102 17:51:48 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.102 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.102 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.102 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.102 17:51:48 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.102 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.102 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.102 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.102 17:51:48 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.102 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.102 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.102 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.102 17:51:48 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.102 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.102 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.102 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.102 17:51:48 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.102 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.102 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.102 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.102 17:51:48 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.102 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.102 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.102 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.102 17:51:48 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.102 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.102 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.102 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.102 17:51:48 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.102 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.102 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.102 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.102 17:51:48 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.102 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.102 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.102 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.102 17:51:48 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.102 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.102 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.102 17:51:48 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:00.102 17:51:48 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.102 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.102 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.102 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.102 17:51:48 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.102 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.102 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.102 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.102 17:51:48 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.102 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.102 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.102 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.103 17:51:48 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.103 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.103 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.103 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.103 17:51:48 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.103 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.103 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.103 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.103 17:51:48 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.103 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.103 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.103 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.103 17:51:48 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.103 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.103 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.103 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.103 17:51:48 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.103 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.103 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.103 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.103 17:51:48 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.103 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.103 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.103 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.103 17:51:48 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.103 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.103 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.103 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.103 17:51:48 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.103 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.103 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.103 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.103 17:51:48 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.103 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.103 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.103 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.103 17:51:48 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.103 17:51:48 -- 
setup/common.sh@32 -- # continue 00:05:00.103 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.103 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.103 17:51:48 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.103 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.103 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.103 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.103 17:51:48 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.103 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.103 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.103 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.103 17:51:48 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.103 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.103 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.103 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.103 17:51:48 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.103 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.103 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.103 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.103 17:51:48 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.103 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.103 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.103 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.103 17:51:48 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.103 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.103 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.103 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.103 17:51:48 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.103 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.103 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.103 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.103 17:51:48 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.103 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.103 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.103 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.103 17:51:48 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.103 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.103 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.103 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.103 17:51:48 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.103 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.103 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.103 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.103 17:51:48 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.103 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.103 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.103 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.103 17:51:48 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.103 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.103 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.103 17:51:48 -- setup/common.sh@31 -- # read -r var 
val _ 00:05:00.103 17:51:48 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.103 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.103 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.103 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.103 17:51:48 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.103 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.103 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.103 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.103 17:51:48 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.103 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.103 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.103 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.103 17:51:48 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.103 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.103 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.103 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.103 17:51:48 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.103 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.103 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.103 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.103 17:51:48 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.103 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.103 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.103 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.103 17:51:48 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.103 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.103 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.103 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.103 17:51:48 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.103 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.103 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.103 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.103 17:51:48 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.103 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.103 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.103 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.103 17:51:48 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.103 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.103 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.103 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.103 17:51:48 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.103 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.103 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.103 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.103 17:51:48 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.103 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.103 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.103 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.103 17:51:48 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.103 17:51:48 -- 
setup/common.sh@33 -- # echo 1024 00:05:00.103 17:51:48 -- setup/common.sh@33 -- # return 0 00:05:00.103 17:51:48 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:00.103 17:51:48 -- setup/hugepages.sh@112 -- # get_nodes 00:05:00.103 17:51:48 -- setup/hugepages.sh@27 -- # local node 00:05:00.103 17:51:48 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:00.103 17:51:48 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:00.103 17:51:48 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:00.103 17:51:48 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:05:00.103 17:51:48 -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:00.103 17:51:48 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:00.103 17:51:48 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:00.103 17:51:48 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:00.103 17:51:48 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:00.103 17:51:48 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:00.103 17:51:48 -- setup/common.sh@18 -- # local node=0 00:05:00.103 17:51:48 -- setup/common.sh@19 -- # local var val 00:05:00.103 17:51:48 -- setup/common.sh@20 -- # local mem_f mem 00:05:00.103 17:51:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:00.103 17:51:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:00.103 17:51:48 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:00.103 17:51:48 -- setup/common.sh@28 -- # mapfile -t mem 00:05:00.103 17:51:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:00.103 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.104 17:51:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24572356 kB' 'MemFree: 17628824 kB' 'MemUsed: 6943532 kB' 'SwapCached: 0 kB' 'Active: 3775568 kB' 'Inactive: 167944 kB' 'Active(anon): 3444624 kB' 'Inactive(anon): 0 kB' 'Active(file): 330944 kB' 'Inactive(file): 167944 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3711200 kB' 'Mapped: 165484 kB' 'AnonPages: 235044 kB' 'Shmem: 3212312 kB' 'KernelStack: 7464 kB' 'PageTables: 3840 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 294568 kB' 'Slab: 484296 kB' 'SReclaimable: 294568 kB' 'SUnreclaim: 189728 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:00.104 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.104 17:51:48 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.104 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.104 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.104 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.104 17:51:48 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.104 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.104 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.104 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.104 17:51:48 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.104 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.104 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.104 17:51:48 -- setup/common.sh@31 -- # read 
-r var val _ 00:05:00.104 17:51:48 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.104 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.104 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.104 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.104 17:51:48 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.104 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.104 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.104 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.104 17:51:48 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.104 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.104 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.104 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.104 17:51:48 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.104 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.104 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.104 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.104 17:51:48 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.104 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.104 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.104 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.104 17:51:48 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.104 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.104 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.104 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.104 17:51:48 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.104 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.104 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.104 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.104 17:51:48 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.104 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.104 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.104 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.104 17:51:48 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.104 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.104 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.104 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.104 17:51:48 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.104 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.104 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.104 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.104 17:51:48 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.104 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.104 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.104 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.104 17:51:48 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.104 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.104 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.104 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.104 17:51:48 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.104 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.104 
17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.104 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.104 17:51:48 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.104 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.104 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.104 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.104 17:51:48 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.104 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.104 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.104 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.104 17:51:48 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.104 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.104 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.104 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.104 17:51:48 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.104 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.104 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.104 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.104 17:51:48 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.104 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.104 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.104 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.104 17:51:48 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.104 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.104 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.104 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.104 17:51:48 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.104 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.104 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.104 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.104 17:51:48 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.104 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.104 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.104 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.104 17:51:48 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.104 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.104 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.104 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.104 17:51:48 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.104 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.104 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.104 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.104 17:51:48 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.104 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.104 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.104 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.104 17:51:48 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.104 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.104 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.104 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.104 17:51:48 -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.104 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.104 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.104 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.104 17:51:48 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.104 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.104 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.104 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.104 17:51:48 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.104 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.104 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.104 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.104 17:51:48 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.104 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.104 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.104 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.104 17:51:48 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.104 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.104 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.104 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.104 17:51:48 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.104 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.104 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.104 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.104 17:51:48 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.104 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.104 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.104 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.104 17:51:48 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.104 17:51:48 -- setup/common.sh@32 -- # continue 00:05:00.104 17:51:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.104 17:51:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.104 17:51:48 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.104 17:51:48 -- setup/common.sh@33 -- # echo 0 00:05:00.104 17:51:48 -- setup/common.sh@33 -- # return 0 00:05:00.104 17:51:48 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:00.104 17:51:48 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:00.104 17:51:48 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:00.104 17:51:48 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:00.104 17:51:48 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:00.104 node0=1024 expecting 1024 00:05:00.104 17:51:48 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:00.104 00:05:00.104 real 0m3.240s 00:05:00.104 user 0m1.312s 00:05:00.104 sys 0m1.879s 00:05:00.105 17:51:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:00.105 17:51:48 -- common/autotest_common.sh@10 -- # set +x 00:05:00.105 ************************************ 00:05:00.105 END TEST no_shrink_alloc 00:05:00.105 ************************************ 00:05:00.105 17:51:48 -- setup/hugepages.sh@217 -- # clear_hp 00:05:00.105 17:51:48 -- setup/hugepages.sh@37 -- # local node hp 00:05:00.105 17:51:48 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:00.105 
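[editor's note] The "node0=1024 expecting 1024" line above is the per-node assertion that closes TEST no_shrink_alloc: all hugepages the test configured should be accounted to node 0. A hedged sketch of that check, reading the same sysfs file the harness uses:

```bash
# Read HugePages_Total for NUMA node 0 and compare with the expectation.
# Per-node meminfo lines carry a "Node 0" prefix, so take the last field.
expected=1024  # what the test configured for node 0
actual=$(awk '/HugePages_Total/ {print $NF}' /sys/devices/system/node/node0/meminfo)
if [[ $actual == "$expected" ]]; then
    echo "node0=$actual expecting $expected"
else
    echo "unexpected node0 hugepage count: $actual" >&2
fi
```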
00:05:00.105 17:51:48 -- setup/hugepages.sh@217 -- # clear_hp
00:05:00.105 17:51:48 -- setup/hugepages.sh@37 -- # local node hp
00:05:00.105 17:51:48 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:05:00.105 17:51:48 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:05:00.105 17:51:48 -- setup/hugepages.sh@41 -- # echo 0
00:05:00.105 [... the @40/@41 pair repeats for each hugepage pool on node0 and node1, writing 0 to every nr_hugepages ...]
00:05:00.105 17:51:48 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:05:00.105 17:51:48 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:05:00.105 
00:05:00.105 real    0m13.124s
00:05:00.105 user    0m5.100s
00:05:00.105 sys     0m6.820s
00:05:00.105 17:51:48 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:05:00.105 17:51:48 -- common/autotest_common.sh@10 -- # set +x
00:05:00.105 ************************************
00:05:00.105 END TEST hugepages
00:05:00.105 ************************************
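The clear_hp step above amounts to zeroing every hugepage pool on every NUMA node; a minimal sketch using the same sysfs paths (the node glob is an assumption, the real script iterates the nodes it discovered earlier):

    # Sketch of clear_hp: write 0 to nr_hugepages for every pool on every node.
    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            echo 0 > "$hp/nr_hugepages"
        done
    done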
00:05:00.105 17:51:48 -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh
00:05:00.105 17:51:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:00.105 17:51:48 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:00.105 17:51:48 -- common/autotest_common.sh@10 -- # set +x
00:05:00.363 ************************************
00:05:00.363 START TEST driver
00:05:00.363 ************************************
00:05:00.363 17:51:49 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh
00:05:00.363 * Looking for test storage...
00:05:00.363 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:05:00.363 17:51:49 -- setup/driver.sh@68 -- # setup reset
00:05:00.363 17:51:49 -- setup/common.sh@9 -- # [[ reset == output ]]
00:05:00.363 17:51:49 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:05:02.900 17:51:51 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver
00:05:02.900 17:51:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:02.900 17:51:51 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:02.900 17:51:51 -- common/autotest_common.sh@10 -- # set +x
00:05:02.900 ************************************
00:05:02.900 START TEST guess_driver
00:05:02.900 ************************************
00:05:02.900 17:51:51 -- common/autotest_common.sh@1111 -- # guess_driver
00:05:02.900 17:51:51 -- setup/driver.sh@46 -- # local driver setup_driver marker
00:05:02.900 17:51:51 -- setup/driver.sh@47 -- # local fail=0
00:05:02.900 17:51:51 -- setup/driver.sh@49 -- # pick_driver
00:05:02.900 17:51:51 -- setup/driver.sh@36 -- # vfio
00:05:02.900 17:51:51 -- setup/driver.sh@21 -- # local iommu_groups
00:05:02.900 17:51:51 -- setup/driver.sh@22 -- # local unsafe_vfio
00:05:02.900 17:51:51 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]
00:05:02.900 17:51:51 -- setup/driver.sh@25 -- # unsafe_vfio=N
00:05:02.900 17:51:51 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*)
00:05:02.900 17:51:51 -- setup/driver.sh@29 -- # (( 143 > 0 ))
00:05:02.900 17:51:51 -- setup/driver.sh@30 -- # is_driver vfio_pci
00:05:02.900 17:51:51 -- setup/driver.sh@14 -- # mod vfio_pci
00:05:02.900 17:51:51 -- setup/driver.sh@12 -- # dep vfio_pci
00:05:02.900 17:51:51 -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci
00:05:02.900 17:51:51 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz
00:05:02.900 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz
00:05:02.900 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz
00:05:02.900 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz
00:05:02.900 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz
00:05:02.900 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz
00:05:02.900 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz
00:05:02.900 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]]
00:05:02.900 17:51:51 -- setup/driver.sh@30 -- # return 0
00:05:02.900 17:51:51 -- setup/driver.sh@37 -- # echo vfio-pci
00:05:02.900 17:51:51 -- setup/driver.sh@49 -- # driver=vfio-pci
00:05:02.900 17:51:51 -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]]
00:05:02.900 17:51:51 -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci'
00:05:02.900 Looking for driver=vfio-pci
00:05:02.900 17:51:51 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:05:02.900 17:51:51 -- setup/driver.sh@45 -- # setup output config
00:05:02.900 17:51:51 -- setup/common.sh@9 -- # [[ output == output ]]
00:05:02.900 17:51:51 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:05:04.301 [... setup/driver.sh@57-61 marker loop, 17:51:53 through 17:51:54: for each configured device the '-> == \->' marker and 'vfio-pci == vfio-pci' driver checks pass and the next line is read ...]
00:05:05.240 17:51:54 -- setup/driver.sh@64 -- # (( fail == 0 ))
00:05:05.240 17:51:54 -- setup/driver.sh@65 -- # setup reset
00:05:05.240 17:51:54 -- setup/common.sh@9 -- # [[ reset == output ]]
00:05:05.240 17:51:54 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:05:07.779 
00:05:07.779 real    0m4.671s
00:05:07.779 user    0m0.970s
00:05:07.779 sys     0m1.860s
00:05:07.780 17:51:56 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:05:07.780 17:51:56 -- common/autotest_common.sh@10 -- # set +x
00:05:07.780 ************************************
00:05:07.780 END TEST guess_driver
00:05:07.780 ************************************
00:05:07.780 
00:05:07.780 real    0m7.451s
00:05:07.780 user    0m1.660s
00:05:07.780 sys     0m3.095s
00:05:07.780 17:51:56 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:05:07.780 17:51:56 -- common/autotest_common.sh@10 -- # set +x
00:05:07.780 ************************************
00:05:07.780 END TEST driver
00:05:07.780 ************************************
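The pick logic traced above is compact; a condensed sketch of the vfio branch, with the module-dependency test folded inline (the structure is simplified, not the script's full fallback chain):

    # Sketch of pick_driver's vfio branch: choose vfio-pci when IOMMU groups
    # exist (or unsafe no-IOMMU mode is enabled) and the module chain resolves.
    pick_driver() {
        shopt -s nullglob
        local groups=(/sys/kernel/iommu_groups/*) unsafe=N
        [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
            unsafe=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
        if { (( ${#groups[@]} > 0 )) || [[ $unsafe == Y ]]; } &&
           modprobe --show-depends vfio_pci 2>/dev/null | grep -q '\.ko'; then
            echo vfio-pci
        else
            echo 'No valid driver found'
        fi
    }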
00:05:07.780 17:51:56 -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh
00:05:07.780 17:51:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:07.780 17:51:56 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:07.780 17:51:56 -- common/autotest_common.sh@10 -- # set +x
00:05:07.780 ************************************
00:05:07.780 START TEST devices
00:05:07.780 ************************************
00:05:07.780 17:51:56 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh
00:05:07.780 * Looking for test storage...
00:05:07.780 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:05:07.780 17:51:56 -- setup/devices.sh@190 -- # trap cleanup EXIT
00:05:07.780 17:51:56 -- setup/devices.sh@192 -- # setup reset
00:05:07.780 17:51:56 -- setup/common.sh@9 -- # [[ reset == output ]]
00:05:07.780 17:51:56 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:05:09.688 17:51:58 -- setup/devices.sh@194 -- # get_zoned_devs
00:05:09.688 17:51:58 -- common/autotest_common.sh@1655 -- # zoned_devs=()
00:05:09.688 17:51:58 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs
00:05:09.688 17:51:58 -- common/autotest_common.sh@1656 -- # local nvme bdf
00:05:09.688 17:51:58 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme*
00:05:09.688 17:51:58 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1
00:05:09.688 17:51:58 -- common/autotest_common.sh@1648 -- # local device=nvme0n1
00:05:09.688 17:51:58 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:05:09.688 17:51:58 -- common/autotest_common.sh@1651 -- # [[ none != none ]]
00:05:09.688 17:51:58 -- setup/devices.sh@196 -- # blocks=()
00:05:09.688 17:51:58 -- setup/devices.sh@196 -- # declare -a blocks
00:05:09.688 17:51:58 -- setup/devices.sh@197 -- # blocks_to_pci=()
00:05:09.688 17:51:58 -- setup/devices.sh@197 -- # declare -A blocks_to_pci
00:05:09.688 17:51:58 -- setup/devices.sh@198 -- # min_disk_size=3221225472
00:05:09.688 17:51:58 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*)
00:05:09.688 17:51:58 -- setup/devices.sh@201 -- # ctrl=nvme0n1
00:05:09.688 17:51:58 -- setup/devices.sh@201 -- # ctrl=nvme0
00:05:09.688 17:51:58 -- setup/devices.sh@202 -- # pci=0000:82:00.0
00:05:09.688 17:51:58 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\2\:\0\0\.\0* ]]
00:05:09.688 17:51:58 -- setup/devices.sh@204 -- # block_in_use nvme0n1
00:05:09.688 17:51:58 -- scripts/common.sh@378 -- # local block=nvme0n1 pt
00:05:09.688 17:51:58 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
00:05:09.688 No valid GPT data, bailing
00:05:09.688 17:51:58 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:05:09.688 17:51:58 -- scripts/common.sh@391 -- # pt=
00:05:09.688 17:51:58 -- scripts/common.sh@392 -- # return 1
00:05:09.688 17:51:58 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1
00:05:09.688 17:51:58 -- setup/common.sh@76 -- # local dev=nvme0n1
00:05:09.688 17:51:58 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]]
00:05:09.688 17:51:58 -- setup/common.sh@80 -- # echo 1000204886016
00:05:09.688 17:51:58 -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size ))
00:05:09.688 17:51:58 -- setup/devices.sh@205 -- # blocks+=("${block##*/}")
00:05:09.688 17:51:58 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:82:00.0
00:05:09.688 17:51:58 -- setup/devices.sh@209 -- # (( 1 > 0 ))
00:05:09.688 17:51:58 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1
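In short, a block device qualifies as the test disk when it carries no partition-table signature and is large enough; a condensed sketch of that check (the PCI mapping and the "!(*c*)" controller-glob details of the real script are omitted):

    # Sketch: collect NVMe namespaces with no GPT/PTTYPE signature and >= 3 GiB.
    min_disk_size=$((3 * 1024 * 1024 * 1024))
    blocks=()
    for block in /sys/block/nvme*n*; do
        dev=${block##*/}
        [[ -z $(blkid -s PTTYPE -o value "/dev/$dev") ]] || continue  # in use
        size=$(( $(< "$block/size") * 512 ))  # sysfs size counts 512B sectors
        (( size >= min_disk_size )) && blocks+=("$dev")
    done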
00:05:09.688 17:51:58 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount
00:05:09.688 17:51:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:09.688 17:51:58 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:09.688 17:51:58 -- common/autotest_common.sh@10 -- # set +x
00:05:09.688 ************************************
00:05:09.688 START TEST nvme_mount
00:05:09.688 ************************************
00:05:09.688 17:51:58 -- common/autotest_common.sh@1111 -- # nvme_mount
00:05:09.688 17:51:58 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1
00:05:09.688 17:51:58 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1
00:05:09.688 17:51:58 -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:05:09.688 17:51:58 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:05:09.688 17:51:58 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1
00:05:09.688 17:51:58 -- setup/common.sh@39 -- # local disk=nvme0n1
00:05:09.688 17:51:58 -- setup/common.sh@40 -- # local part_no=1
00:05:09.688 17:51:58 -- setup/common.sh@41 -- # local size=1073741824
00:05:09.688 17:51:58 -- setup/common.sh@43 -- # local part part_start=0 part_end=0
00:05:09.688 17:51:58 -- setup/common.sh@44 -- # parts=()
00:05:09.688 17:51:58 -- setup/common.sh@44 -- # local parts
00:05:09.688 17:51:58 -- setup/common.sh@46 -- # (( part = 1 ))
00:05:09.688 17:51:58 -- setup/common.sh@46 -- # (( part <= part_no ))
00:05:09.688 17:51:58 -- setup/common.sh@47 -- # parts+=("${disk}p$part")
00:05:09.688 17:51:58 -- setup/common.sh@46 -- # (( part++ ))
00:05:09.688 17:51:58 -- setup/common.sh@46 -- # (( part <= part_no ))
00:05:09.688 17:51:58 -- setup/common.sh@51 -- # (( size /= 512 ))
00:05:09.688 17:51:58 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all
00:05:09.688 17:51:58 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1
00:05:11.074 Creating new GPT entries in memory.
00:05:11.074 GPT data structures destroyed! You may now partition the disk using fdisk or
00:05:11.074 other utilities.
00:05:11.074 17:51:59 -- setup/common.sh@57 -- # (( part = 1 ))
00:05:11.074 17:51:59 -- setup/common.sh@57 -- # (( part <= part_no ))
00:05:11.074 17:51:59 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:05:11.074 17:51:59 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:05:11.074 17:51:59 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199
00:05:12.012 Creating new GPT entries in memory.
00:05:12.012 The operation has completed successfully.
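The partitioning recipe above generalizes to any partition count; a minimal sketch of the same sector arithmetic (device and sizes taken from the trace):

    # Sketch: zap the disk, then lay out part_no partitions of $size sectors,
    # starting at sector 2048, exactly as the traced arithmetic does.
    disk=/dev/nvme0n1 part_no=1
    size=$((1073741824 / 512))   # 1 GiB in 512-byte sectors
    sgdisk "$disk" --zap-all
    part_start=0 part_end=0
    for (( part = 1; part <= part_no; part++ )); do
        (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
        (( part_end = part_start + size - 1 ))
        flock "$disk" sgdisk "$disk" --new="$part:$part_start:$part_end"
    done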
00:05:12.012 17:52:00 -- setup/common.sh@57 -- # (( part++ ))
00:05:12.012 17:52:00 -- setup/common.sh@57 -- # (( part <= part_no ))
00:05:12.012 17:52:00 -- setup/common.sh@62 -- # wait 3185883
00:05:12.012 17:52:00 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:05:12.012 17:52:00 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=
00:05:12.012 17:52:00 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:05:12.012 17:52:00 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]]
00:05:12.012 17:52:00 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1
00:05:12.012 17:52:00 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:05:12.012 17:52:00 -- setup/devices.sh@105 -- # verify 0000:82:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:05:12.012 17:52:00 -- setup/devices.sh@48 -- # local dev=0000:82:00.0
00:05:12.012 17:52:00 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1
00:05:12.012 17:52:00 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:05:12.012 17:52:00 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:05:12.012 17:52:00 -- setup/devices.sh@53 -- # local found=0
00:05:12.012 17:52:00 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:05:12.012 17:52:00 -- setup/devices.sh@56 -- # :
00:05:12.012 17:52:00 -- setup/devices.sh@59 -- # local pci status
00:05:12.012 17:52:00 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:12.012 17:52:00 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:82:00.0
00:05:12.012 17:52:00 -- setup/devices.sh@47 -- # setup output config
00:05:12.012 17:52:00 -- setup/common.sh@9 -- # [[ output == output ]]
00:05:12.012 17:52:00 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:05:12.950 17:52:01 -- setup/devices.sh@62 -- # [[ 0000:82:00.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]]
00:05:12.950 17:52:01 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]]
00:05:12.950 17:52:01 -- setup/devices.sh@63 -- # found=1
00:05:13.209 [... setup/devices.sh@60/@62 status loop: the sixteen I/OAT bridges (0000:00:04.0-7, 0000:80:04.0-7) each compared against 0000:82:00.0 and skipped ...]
00:05:13.209 17:52:01 -- setup/devices.sh@66 -- # (( found == 1 ))
00:05:13.209 17:52:01 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]]
00:05:13.209 17:52:01 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:05:13.209 17:52:01 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:05:13.209 17:52:01 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:05:13.209 17:52:01 -- setup/devices.sh@110 -- # cleanup_nvme
00:05:13.209 17:52:01 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:05:13.209 17:52:01 -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:05:13.209 17:52:01 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:05:13.209 17:52:01 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1
00:05:13.209 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:05:13.209 17:52:01 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:05:13.209 17:52:01 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:05:13.474 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
00:05:13.474 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54
00:05:13.474 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:05:13.474 /dev/nvme0n1: calling ioctl to re-read partition table: Success
00:05:13.474 17:52:02 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M
00:05:13.474 17:52:02 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M
00:05:13.474 17:52:02 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:05:13.474 17:52:02 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]]
00:05:13.474 17:52:02 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M
00:05:13.474 17:52:02 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:05:13.474 17:52:02 -- setup/devices.sh@116 -- # verify 0000:82:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:05:13.474 17:52:02 -- setup/devices.sh@48 -- # local dev=0000:82:00.0
00:05:13.474 17:52:02 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1
00:05:13.474 17:52:02 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:05:13.474 17:52:02 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:05:13.474 17:52:02 -- setup/devices.sh@53 -- # local found=0
00:05:13.474 17:52:02 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:05:13.474 17:52:02 -- setup/devices.sh@56 -- # :
00:05:13.474 17:52:02 -- setup/devices.sh@59 -- # local pci status
00:05:13.474 17:52:02 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:13.474 17:52:02 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:82:00.0
00:05:13.474 17:52:02 -- setup/devices.sh@47 -- # setup output config
00:05:13.474 17:52:02 -- setup/common.sh@9 -- # [[ output == output ]]
00:05:13.474 17:52:02 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:05:14.854 17:52:03 -- setup/devices.sh@62 -- # [[ 0000:82:00.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]]
00:05:14.854 17:52:03 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]]
00:05:14.854 17:52:03 -- setup/devices.sh@63 -- # found=1
00:05:14.855 [... same setup/devices.sh@60/@62 status loop: the sixteen I/OAT bridges compared against 0000:82:00.0 and skipped ...]
00:05:14.855 17:52:03 -- setup/devices.sh@66 -- # (( found == 1 ))
00:05:14.855 17:52:03 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]]
00:05:14.855 17:52:03 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:05:14.855 17:52:03 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:05:14.855 17:52:03 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:05:14.855 17:52:03 -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
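The verify pass repeated above follows the same pattern each time: run setup.sh config with PCI_ALLOWED narrowed to the test device and require that device to be reported active while every other function is skipped. A condensed sketch of that scan (the output field layout is assumed from the trace):

    # Sketch of verify's loop: the allowed device must show an "Active
    # devices: ..." status naming the mount instead of being rebound.
    found=0
    while read -r pci _ _ status; do
        if [[ $pci == 0000:82:00.0 && $status == "Active devices: "*nvme0n1* ]]; then
            found=1
        fi
    done < <(PCI_ALLOWED=0000:82:00.0 ./scripts/setup.sh config)
    (( found == 1 ))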
00:05:14.855 17:52:03 -- setup/devices.sh@125 -- # verify 0000:82:00.0 data@nvme0n1 '' ''
00:05:14.855 17:52:03 -- setup/devices.sh@48 -- # local dev=0000:82:00.0
00:05:14.855 17:52:03 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1
00:05:14.855 17:52:03 -- setup/devices.sh@50 -- # local mount_point=
00:05:14.855 17:52:03 -- setup/devices.sh@51 -- # local test_file=
00:05:14.855 17:52:03 -- setup/devices.sh@53 -- # local found=0
00:05:14.855 17:52:03 -- setup/devices.sh@55 -- # [[ -n '' ]]
00:05:14.855 17:52:03 -- setup/devices.sh@59 -- # local pci status
00:05:14.855 17:52:03 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:14.855 17:52:03 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:82:00.0
00:05:14.855 17:52:03 -- setup/devices.sh@47 -- # setup output config
00:05:14.855 17:52:03 -- setup/common.sh@9 -- # [[ output == output ]]
00:05:14.855 17:52:03 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:05:16.235 17:52:04 -- setup/devices.sh@62 -- # [[ 0000:82:00.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]]
00:05:16.235 17:52:04 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]]
00:05:16.235 17:52:04 -- setup/devices.sh@63 -- # found=1
00:05:16.236 [... same setup/devices.sh@60/@62 status loop over the sixteen I/OAT bridges, all skipped ...]
00:05:16.236 17:52:04 -- setup/devices.sh@66 -- # (( found == 1 ))
00:05:16.236 17:52:04 -- setup/devices.sh@68 -- # [[ -n '' ]]
00:05:16.236 17:52:04 -- setup/devices.sh@68 -- # return 0
00:05:16.236 17:52:04 -- setup/devices.sh@128 -- # cleanup_nvme
00:05:16.236 17:52:04 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:05:16.236 17:52:04 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:05:16.236 17:52:04 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:05:16.236 17:52:04 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:05:16.236 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:05:16.236 
00:05:16.236 real    0m6.357s
00:05:16.236 user    0m1.483s
00:05:16.236 sys     0m2.506s
00:05:16.236 17:52:04 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:05:16.236 17:52:04 -- common/autotest_common.sh@10 -- # set +x
00:05:16.236 ************************************
00:05:16.236 END TEST nvme_mount
00:05:16.236 ************************************
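cleanup_nvme, which ran twice above, is small enough to sketch whole: unmount if mounted, then wipe signatures from partition and disk (device names fixed here for illustration):

    # Sketch of cleanup_nvme: leave the disk pristine for the next test.
    cleanup_nvme() {
        local mnt=$1 disk=/dev/nvme0n1
        mountpoint -q "$mnt" && umount "$mnt"
        [[ -b ${disk}p1 ]] && wipefs --all "${disk}p1"
        [[ -b $disk ]] && wipefs --all "$disk"
    }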
00:05:18.556 17:52:07 -- setup/common.sh@57 -- # (( part++ )) 00:05:18.556 17:52:07 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:18.556 17:52:07 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:18.556 17:52:07 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:18.556 17:52:07 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:05:19.498 The operation has completed successfully. 00:05:19.498 17:52:08 -- setup/common.sh@57 -- # (( part++ )) 00:05:19.498 17:52:08 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:19.498 17:52:08 -- setup/common.sh@62 -- # wait 3188302 00:05:19.498 17:52:08 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:19.498 17:52:08 -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:19.498 17:52:08 -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:19.498 17:52:08 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:19.498 17:52:08 -- setup/devices.sh@160 -- # for t in {1..5} 00:05:19.498 17:52:08 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:19.498 17:52:08 -- setup/devices.sh@161 -- # break 00:05:19.498 17:52:08 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:19.498 17:52:08 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:19.498 17:52:08 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:19.498 17:52:08 -- setup/devices.sh@166 -- # dm=dm-0 00:05:19.498 17:52:08 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:19.498 17:52:08 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:19.498 17:52:08 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:19.498 17:52:08 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:05:19.498 17:52:08 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:19.498 17:52:08 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:19.498 17:52:08 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:19.498 17:52:08 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:19.498 17:52:08 -- setup/devices.sh@174 -- # verify 0000:82:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:19.498 17:52:08 -- setup/devices.sh@48 -- # local dev=0000:82:00.0 00:05:19.498 17:52:08 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:19.498 17:52:08 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:19.498 17:52:08 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:19.498 17:52:08 -- setup/devices.sh@53 -- # local found=0 00:05:19.498 17:52:08 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:19.498 17:52:08 -- setup/devices.sh@56 -- # : 00:05:19.498 17:52:08 -- 
00:05:19.498 17:52:08 -- setup/devices.sh@174 -- # verify 0000:82:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:05:19.498 17:52:08 -- setup/devices.sh@48 -- # local dev=0000:82:00.0
00:05:19.498 17:52:08 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test
00:05:19.498 17:52:08 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:05:19.498 17:52:08 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:05:19.498 17:52:08 -- setup/devices.sh@53 -- # local found=0
00:05:19.498 17:52:08 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]]
00:05:19.498 17:52:08 -- setup/devices.sh@56 -- # :
00:05:19.498 17:52:08 -- setup/devices.sh@59 -- # local pci status
00:05:19.498 17:52:08 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:19.498 17:52:08 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:82:00.0
00:05:19.498 17:52:08 -- setup/devices.sh@47 -- # setup output config
00:05:19.498 17:52:08 -- setup/common.sh@9 -- # [[ output == output ]]
00:05:19.498 17:52:08 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:05:20.437 17:52:09 -- setup/devices.sh@62 -- # [[ 0000:82:00.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]]
00:05:20.437 17:52:09 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]]
00:05:20.437 17:52:09 -- setup/devices.sh@63 -- # found=1
00:05:20.438 [... same setup/devices.sh@60/@62 status loop over the sixteen I/OAT bridges, all skipped ...]
00:05:20.698 17:52:09 -- setup/devices.sh@66 -- # (( found == 1 ))
00:05:20.698 17:52:09 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]]
00:05:20.698 17:52:09 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:05:20.698 17:52:09 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]]
00:05:20.698 17:52:09 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:05:20.698 17:52:09 -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:05:20.698 17:52:09 -- setup/devices.sh@184 -- # verify 0000:82:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' ''
00:05:20.698 17:52:09 -- setup/devices.sh@48 -- # local dev=0000:82:00.0
00:05:20.698 17:52:09 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0
00:05:20.698 17:52:09 -- setup/devices.sh@50 -- # local mount_point=
00:05:20.698 17:52:09 -- setup/devices.sh@51 -- # local test_file=
00:05:20.698 17:52:09 -- setup/devices.sh@53 -- # local found=0
00:05:20.698 17:52:09 -- setup/devices.sh@55 -- # [[ -n '' ]]
00:05:20.698 17:52:09 -- setup/devices.sh@59 -- # local pci status
00:05:20.698 17:52:09 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:20.698 17:52:09 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:82:00.0
00:05:20.698 17:52:09 -- setup/devices.sh@47 -- # setup output config
00:05:20.698 17:52:09 -- setup/common.sh@9 -- # [[ output == output ]]
00:05:20.698 17:52:09 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:05:22.076 17:52:10 -- setup/devices.sh@62 -- # [[ 0000:82:00.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]]
00:05:22.076 17:52:10 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]]
00:05:22.076 17:52:10 -- setup/devices.sh@63 -- # found=1
00:05:22.077 [... same setup/devices.sh@60/@62 status loop over the sixteen I/OAT bridges, all skipped ...]
00:05:22.077 17:52:10 -- setup/devices.sh@66 -- # (( found == 1 ))
00:05:22.077 17:52:10 -- setup/devices.sh@68 -- # [[ -n '' ]]
00:05:22.077 17:52:10 -- setup/devices.sh@68 -- # return 0
00:05:22.077 17:52:10 -- setup/devices.sh@187 -- # cleanup_dm
00:05:22.077 17:52:10 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:05:22.077 17:52:10 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]]
00:05:22.077 17:52:10 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test
00:05:22.077 17:52:10 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]]
00:05:22.077 17:52:10 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1
00:05:22.077 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:05:22.077 17:52:10 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]]
00:05:22.077 17:52:10 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2
00:05:22.077 
00:05:22.077 real    0m5.732s
00:05:22.077 user    0m0.985s
00:05:22.077 sys     0m1.652s
00:05:22.077 17:52:10 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:05:22.077 17:52:10 -- common/autotest_common.sh@10 -- # set +x
00:05:22.077 ************************************
00:05:22.077 END TEST dm_mount
00:05:22.077 ************************************
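cleanup_dm mirrors cleanup_nvme but tears down the mapper device first; a sketch along the same lines (names fixed for illustration):

    # Sketch of cleanup_dm: unmount, remove the dm target, wipe both PVs.
    cleanup_dm() {
        local mnt=$1
        mountpoint -q "$mnt" && umount "$mnt"
        [[ -L /dev/mapper/nvme_dm_test ]] && dmsetup remove --force nvme_dm_test
        [[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1
        [[ -b /dev/nvme0n1p2 ]] && wipefs --all /dev/nvme0n1p2
    }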
00:05:22.077 17:52:10 -- setup/devices.sh@1 -- # cleanup
00:05:22.077 17:52:10 -- setup/devices.sh@11 -- # cleanup_nvme
00:05:22.077 17:52:10 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:05:22.077 17:52:10 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:05:22.077 17:52:10 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1
00:05:22.077 17:52:10 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:05:22.077 17:52:10 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:05:22.335 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
00:05:22.336 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54
00:05:22.336 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:05:22.336 /dev/nvme0n1: calling ioctl to re-read partition table: Success
00:05:22.336 17:52:11 -- setup/devices.sh@12 -- # cleanup_dm
00:05:22.336 17:52:11 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:05:22.336 17:52:11 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]]
00:05:22.336 17:52:11 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]]
00:05:22.336 17:52:11 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]]
00:05:22.336 17:52:11 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]]
00:05:22.336 17:52:11 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1
00:05:22.336 
00:05:22.336 real    0m14.460s
00:05:22.336 user    0m3.271s
00:05:22.336 sys     0m5.490s
00:05:22.336 17:52:11 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:05:22.336 17:52:11 -- common/autotest_common.sh@10 -- # set +x
00:05:22.336 ************************************
00:05:22.336 END TEST devices
00:05:22.336 ************************************
00:05:22.336 
00:05:22.336 real    0m46.437s
00:05:22.336 user    0m13.341s
00:05:22.336 sys     0m21.672s
00:05:22.336 17:52:11 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:05:22.336 17:52:11 -- common/autotest_common.sh@10 -- # set +x
00:05:22.336 ************************************
00:05:22.336 END TEST setup.sh
00:05:22.336 ************************************
00:05:22.336 17:52:11 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:05:23.714 Hugepages
00:05:23.714 node    hugesize    free /  total
00:05:23.714 node0  1048576kB       0 /      0
00:05:23.714 node0     2048kB    2048 /   2048
00:05:23.714 node1  1048576kB       0 /      0
00:05:23.714 node1     2048kB       0 /      0
00:05:23.714 
00:05:23.714 Type   BDF            Vendor Device NUMA  Driver   Device  Block devices
00:05:23.714 I/OAT  0000:00:04.0   8086   0e20   0     ioatdma  -       -
00:05:23.714 I/OAT  0000:00:04.1   8086   0e21   0     ioatdma  -       -
00:05:23.714 I/OAT  0000:00:04.2   8086   0e22   0     ioatdma  -       -
00:05:23.714 I/OAT  0000:00:04.3   8086   0e23   0     ioatdma  -       -
00:05:23.714 I/OAT  0000:00:04.4   8086   0e24   0     ioatdma  -       -
00:05:23.714 I/OAT  0000:00:04.5   8086   0e25   0     ioatdma  -       -
00:05:23.714 I/OAT  0000:00:04.6   8086   0e26   0     ioatdma  -       -
00:05:23.714 I/OAT  0000:00:04.7   8086   0e27   0     ioatdma  -       -
00:05:23.714 I/OAT  0000:80:04.0   8086   0e20   1     ioatdma  -       -
00:05:23.714 I/OAT  0000:80:04.1   8086   0e21   1     ioatdma  -       -
00:05:23.714 I/OAT  0000:80:04.2   8086   0e22   1     ioatdma  -       -
00:05:23.714 I/OAT  0000:80:04.3   8086   0e23   1     ioatdma  -       -
00:05:23.714 I/OAT  0000:80:04.4   8086   0e24   1     ioatdma  -       -
00:05:23.714 I/OAT  0000:80:04.5   8086   0e25   1     ioatdma  -       -
00:05:23.714 I/OAT  0000:80:04.6   8086   0e26   1     ioatdma  -       -
00:05:23.714 I/OAT  0000:80:04.7   8086   0e27   1     ioatdma  -       -
00:05:23.714 NVMe   0000:82:00.0   8086   0a54   1     nvme     nvme0   nvme0n1
00:05:23.715 17:52:12 -- spdk/autotest.sh@130 -- # uname -s
00:05:23.715 17:52:12 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]]
00:05:23.715 17:52:12 -- spdk/autotest.sh@132 -- # nvme_namespace_revert
00:05:23.715 17:52:12 -- common/autotest_common.sh@1517 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:05:25.090 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci
00:05:25.090 [... 0000:00:04.6 through 0000:00:04.0 and 0000:80:04.7 through 0000:80:04.0 likewise: ioatdma -> vfio-pci ...]
00:05:26.029 0000:82:00.0 (8086 0a54): nvme -> vfio-pci
00:05:26.029 17:52:14 -- common/autotest_common.sh@1518 -- # sleep 1
00:05:26.968 17:52:15 -- common/autotest_common.sh@1519 -- # bdfs=()
00:05:26.968 17:52:15 -- common/autotest_common.sh@1519 -- # local bdfs
00:05:26.968 17:52:15 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs))
00:05:26.968 17:52:15 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs
00:05:26.968 17:52:15 -- common/autotest_common.sh@1499 -- # bdfs=()
00:05:26.968 17:52:15 -- common/autotest_common.sh@1499 -- # local bdfs
00:05:26.968 17:52:15 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:05:27.227 17:52:15 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh
00:05:27.227 17:52:15 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr'
00:05:27.227 17:52:15 -- common/autotest_common.sh@1501 -- # (( 1 == 0 ))
00:05:27.227 17:52:15 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:82:00.0
00:05:27.227 17:52:15 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:05:28.602 Waiting for block devices as requested
00:05:28.602 0000:82:00.0 (8086 0a54): vfio-pci -> nvme
00:05:28.602 [... 0000:00:04.7 through 0000:00:04.0 and 0000:80:04.7 through 0000:80:04.0: vfio-pci -> ioatdma ...]
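get_nvme_bdfs, traced above, derives the NVMe BDF list from gen_nvme.sh's JSON output; a minimal sketch of the same extraction (rootdir assumed to point at the spdk checkout):

    # Sketch of get_nvme_bdfs: gen_nvme.sh emits bdev JSON; jq pulls each traddr.
    get_nvme_bdfs() {
        local bdfs=()
        bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
        (( ${#bdfs[@]} != 0 )) || return 1
        printf '%s\n' "${bdfs[@]}"
    }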
common/autotest_common.sh@1493 -- # basename /sys/devices/pci0000:80/0000:80:02.0/0000:82:00.0/nvme/nvme0 00:05:29.639 17:52:18 -- common/autotest_common.sh@1493 -- # printf '%s\n' nvme0 00:05:29.639 17:52:18 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:05:29.639 17:52:18 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:05:29.639 17:52:18 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:05:29.639 17:52:18 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:29.639 17:52:18 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:29.639 17:52:18 -- common/autotest_common.sh@1531 -- # oacs=' 0xf' 00:05:29.639 17:52:18 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:29.639 17:52:18 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:29.639 17:52:18 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:29.639 17:52:18 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:29.639 17:52:18 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:29.639 17:52:18 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:29.639 17:52:18 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:29.639 17:52:18 -- common/autotest_common.sh@1543 -- # continue 00:05:29.639 17:52:18 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:29.639 17:52:18 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:29.639 17:52:18 -- common/autotest_common.sh@10 -- # set +x 00:05:29.898 17:52:18 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:29.898 17:52:18 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:29.898 17:52:18 -- common/autotest_common.sh@10 -- # set +x 00:05:29.898 17:52:18 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:31.275 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:31.275 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:31.275 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:31.275 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:31.275 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:31.275 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:31.275 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:31.275 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:31.275 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:31.275 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:31.275 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:31.275 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:31.275 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:31.275 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:31.275 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:31.275 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:32.214 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:05:32.474 17:52:21 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:32.474 17:52:21 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:32.474 17:52:21 -- common/autotest_common.sh@10 -- # set +x 00:05:32.474 17:52:21 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:32.474 17:52:21 -- common/autotest_common.sh@1577 -- # mapfile -t bdfs 00:05:32.474 17:52:21 -- common/autotest_common.sh@1577 -- # get_nvme_bdfs_by_id 0x0a54 00:05:32.474 17:52:21 -- common/autotest_common.sh@1563 -- # bdfs=() 00:05:32.474 17:52:21 -- common/autotest_common.sh@1563 -- # local bdfs 00:05:32.474 17:52:21 -- common/autotest_common.sh@1565 -- # get_nvme_bdfs 00:05:32.474 17:52:21 -- common/autotest_common.sh@1499 -- # bdfs=() 00:05:32.474 
17:52:21 -- common/autotest_common.sh@1499 -- # local bdfs 00:05:32.474 17:52:21 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:32.474 17:52:21 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:32.474 17:52:21 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:05:32.474 17:52:21 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:05:32.474 17:52:21 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:82:00.0 00:05:32.474 17:52:21 -- common/autotest_common.sh@1565 -- # for bdf in $(get_nvme_bdfs) 00:05:32.474 17:52:21 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:82:00.0/device 00:05:32.474 17:52:21 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:05:32.474 17:52:21 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:32.474 17:52:21 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:05:32.474 17:52:21 -- common/autotest_common.sh@1572 -- # printf '%s\n' 0000:82:00.0 00:05:32.474 17:52:21 -- common/autotest_common.sh@1578 -- # [[ -z 0000:82:00.0 ]] 00:05:32.474 17:52:21 -- common/autotest_common.sh@1583 -- # spdk_tgt_pid=3193639 00:05:32.474 17:52:21 -- common/autotest_common.sh@1582 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:32.474 17:52:21 -- common/autotest_common.sh@1584 -- # waitforlisten 3193639 00:05:32.474 17:52:21 -- common/autotest_common.sh@817 -- # '[' -z 3193639 ']' 00:05:32.474 17:52:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:32.474 17:52:21 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:32.474 17:52:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:32.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:32.474 17:52:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:32.474 17:52:21 -- common/autotest_common.sh@10 -- # set +x 00:05:32.474 [2024-04-15 17:52:21.338037] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
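For reference, the bdf list assembled above comes straight from gen_nvme.sh's JSON config; run by hand, the same pipeline reduces to two commands (paths as used in this workspace):

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr'
  # -> 0000:82:00.0 on this node
  cat /sys/bus/pci/devices/0000:82:00.0/device
  # -> 0x0a54, the device ID that get_nvme_bdfs_by_id matches against
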
00:05:32.474 [2024-04-15 17:52:21.338160] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3193639 ] 00:05:32.474 EAL: No free 2048 kB hugepages reported on node 1 00:05:32.474 [2024-04-15 17:52:21.412046] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.474 [2024-04-15 17:52:21.507750] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.993 17:52:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:32.993 17:52:21 -- common/autotest_common.sh@850 -- # return 0 00:05:32.993 17:52:21 -- common/autotest_common.sh@1586 -- # bdf_id=0 00:05:32.993 17:52:21 -- common/autotest_common.sh@1587 -- # for bdf in "${bdfs[@]}" 00:05:32.993 17:52:21 -- common/autotest_common.sh@1588 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:82:00.0 00:05:36.283 nvme0n1 00:05:36.283 17:52:24 -- common/autotest_common.sh@1590 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:36.283 [2024-04-15 17:52:25.200586] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:05:36.283 [2024-04-15 17:52:25.200640] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:05:36.283 request: 00:05:36.283 { 00:05:36.283 "nvme_ctrlr_name": "nvme0", 00:05:36.283 "password": "test", 00:05:36.283 "method": "bdev_nvme_opal_revert", 00:05:36.283 "req_id": 1 00:05:36.283 } 00:05:36.283 Got JSON-RPC error response 00:05:36.283 response: 00:05:36.283 { 00:05:36.283 "code": -32603, 00:05:36.283 "message": "Internal error" 00:05:36.283 } 00:05:36.283 17:52:25 -- common/autotest_common.sh@1590 -- # true 00:05:36.283 17:52:25 -- common/autotest_common.sh@1591 -- # (( ++bdf_id )) 00:05:36.283 17:52:25 -- common/autotest_common.sh@1594 -- # killprocess 3193639 00:05:36.283 17:52:25 -- common/autotest_common.sh@936 -- # '[' -z 3193639 ']' 00:05:36.283 17:52:25 -- common/autotest_common.sh@940 -- # kill -0 3193639 00:05:36.283 17:52:25 -- common/autotest_common.sh@941 -- # uname 00:05:36.283 17:52:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:36.283 17:52:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3193639 00:05:36.545 17:52:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:36.545 17:52:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:36.545 17:52:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3193639' 00:05:36.545 killing process with pid 3193639 00:05:36.545 17:52:25 -- common/autotest_common.sh@955 -- # kill 3193639 00:05:36.545 17:52:25 -- common/autotest_common.sh@960 -- # wait 3193639 00:05:36.545 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152
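The failing revert above can be replayed by hand against a running spdk_tgt with the same two RPCs the helper issued (default /var/tmp/spdk.sock socket assumed):

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:82:00.0
  ./scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test
  # fails on this drive with JSON-RPC -32603: the admin SP session cannot be started (Opal error 18)
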
00:05:38.452 17:52:26 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:38.452 17:52:26 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:38.452 17:52:26 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:38.452 17:52:26 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:38.452 17:52:26 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:38.452 17:52:26 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:38.452 17:52:26 -- common/autotest_common.sh@10 -- # set +x 00:05:38.452 17:52:26 -- spdk/autotest.sh@164 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:38.452 17:52:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:38.452 17:52:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:38.452 17:52:26 -- common/autotest_common.sh@10 -- # set +x 00:05:38.452 ************************************ 00:05:38.452 START TEST env 00:05:38.452 ************************************ 00:05:38.452 17:52:27 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:38.452 * Looking for test storage... 
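Every suite below is launched through run_test, which produces the START/END banners and the real/user/sys summaries seen throughout this log; a simplified stand-in (illustrative only -- the shipped helper in autotest_common.sh does more bookkeeping):

  run_test() {
    local name=$1; shift
    echo '************************************'
    echo "START TEST $name"
    echo '************************************'
    time "$@"; local rc=$?       # the time builtin emits the real/user/sys lines
    echo '************************************'
    echo "END TEST $name"
    echo '************************************'
    return $rc
  }
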
00:05:38.452 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:38.452 17:52:27 -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:38.452 17:52:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:38.452 17:52:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:38.452 17:52:27 -- common/autotest_common.sh@10 -- # set +x 00:05:38.452 ************************************ 00:05:38.452 START TEST env_memory 00:05:38.452 ************************************ 00:05:38.452 17:52:27 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:38.452 00:05:38.452 00:05:38.452 CUnit - A unit testing framework for C - Version 2.1-3 00:05:38.452 http://cunit.sourceforge.net/ 00:05:38.452 00:05:38.452 00:05:38.452 Suite: memory 00:05:38.452 Test: alloc and free memory map ...[2024-04-15 17:52:27.354953] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:38.452 passed 00:05:38.452 Test: mem map translation ...[2024-04-15 17:52:27.380015] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:38.452 [2024-04-15 17:52:27.380042] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:38.452 [2024-04-15 17:52:27.380100] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:38.452 [2024-04-15 17:52:27.380118] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:38.712 passed 00:05:38.712 Test: mem map registration ...[2024-04-15 17:52:27.432468] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:38.712 [2024-04-15 17:52:27.432493] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:38.712 passed 00:05:38.712 Test: mem map adjacent registrations ...passed 00:05:38.712 00:05:38.712 Run Summary: Type Total Ran Passed Failed Inactive 00:05:38.712 suites 1 1 n/a 0 0 00:05:38.712 tests 4 4 4 0 0 00:05:38.712 asserts 152 152 152 0 n/a 00:05:38.712 00:05:38.712 Elapsed time = 0.174 seconds 00:05:38.712 00:05:38.712 real 0m0.182s 00:05:38.712 user 0m0.172s 00:05:38.712 sys 0m0.009s 00:05:38.712 17:52:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:38.712 17:52:27 -- common/autotest_common.sh@10 -- # set +x 00:05:38.712 ************************************ 00:05:38.712 END TEST env_memory 00:05:38.712 ************************************ 00:05:38.712 17:52:27 -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:38.712 17:52:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:38.712 17:52:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:38.712 17:52:27 -- common/autotest_common.sh@10 -- # set +x 
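The *ERROR* lines inside env_memory above are expected output: the suite feeds deliberately invalid parameters to the mem-map API, which tracks memory at 2 MiB granularity, so any vaddr or len that is not a 2 MiB multiple is rejected. The len=1234 case fails the simplest such check:

  (( 1234 % 2097152 == 0 )) || echo 'rejected: len is not a multiple of the 2 MiB page size'
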
00:05:38.712 ************************************ 00:05:38.712 START TEST env_vtophys 00:05:38.712 ************************************ 00:05:38.712 17:52:27 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:38.712 EAL: lib.eal log level changed from notice to debug 00:05:38.712 EAL: Detected lcore 0 as core 0 on socket 0 00:05:38.712 EAL: Detected lcore 1 as core 1 on socket 0 00:05:38.712 EAL: Detected lcore 2 as core 2 on socket 0 00:05:38.712 EAL: Detected lcore 3 as core 3 on socket 0 00:05:38.712 EAL: Detected lcore 4 as core 4 on socket 0 00:05:38.712 EAL: Detected lcore 5 as core 5 on socket 0 00:05:38.712 EAL: Detected lcore 6 as core 8 on socket 0 00:05:38.712 EAL: Detected lcore 7 as core 9 on socket 0 00:05:38.712 EAL: Detected lcore 8 as core 10 on socket 0 00:05:38.712 EAL: Detected lcore 9 as core 11 on socket 0 00:05:38.712 EAL: Detected lcore 10 as core 12 on socket 0 00:05:38.712 EAL: Detected lcore 11 as core 13 on socket 0 00:05:38.712 EAL: Detected lcore 12 as core 0 on socket 1 00:05:38.712 EAL: Detected lcore 13 as core 1 on socket 1 00:05:38.712 EAL: Detected lcore 14 as core 2 on socket 1 00:05:38.712 EAL: Detected lcore 15 as core 3 on socket 1 00:05:38.712 EAL: Detected lcore 16 as core 4 on socket 1 00:05:38.712 EAL: Detected lcore 17 as core 5 on socket 1 00:05:38.712 EAL: Detected lcore 18 as core 8 on socket 1 00:05:38.712 EAL: Detected lcore 19 as core 9 on socket 1 00:05:38.712 EAL: Detected lcore 20 as core 10 on socket 1 00:05:38.712 EAL: Detected lcore 21 as core 11 on socket 1 00:05:38.712 EAL: Detected lcore 22 as core 12 on socket 1 00:05:38.712 EAL: Detected lcore 23 as core 13 on socket 1 00:05:38.712 EAL: Detected lcore 24 as core 0 on socket 0 00:05:38.712 EAL: Detected lcore 25 as core 1 on socket 0 00:05:38.712 EAL: Detected lcore 26 as core 2 on socket 0 00:05:38.712 EAL: Detected lcore 27 as core 3 on socket 0 00:05:38.712 EAL: Detected lcore 28 as core 4 on socket 0 00:05:38.712 EAL: Detected lcore 29 as core 5 on socket 0 00:05:38.712 EAL: Detected lcore 30 as core 8 on socket 0 00:05:38.712 EAL: Detected lcore 31 as core 9 on socket 0 00:05:38.712 EAL: Detected lcore 32 as core 10 on socket 0 00:05:38.712 EAL: Detected lcore 33 as core 11 on socket 0 00:05:38.712 EAL: Detected lcore 34 as core 12 on socket 0 00:05:38.712 EAL: Detected lcore 35 as core 13 on socket 0 00:05:38.712 EAL: Detected lcore 36 as core 0 on socket 1 00:05:38.712 EAL: Detected lcore 37 as core 1 on socket 1 00:05:38.712 EAL: Detected lcore 38 as core 2 on socket 1 00:05:38.712 EAL: Detected lcore 39 as core 3 on socket 1 00:05:38.712 EAL: Detected lcore 40 as core 4 on socket 1 00:05:38.712 EAL: Detected lcore 41 as core 5 on socket 1 00:05:38.712 EAL: Detected lcore 42 as core 8 on socket 1 00:05:38.712 EAL: Detected lcore 43 as core 9 on socket 1 00:05:38.712 EAL: Detected lcore 44 as core 10 on socket 1 00:05:38.712 EAL: Detected lcore 45 as core 11 on socket 1 00:05:38.712 EAL: Detected lcore 46 as core 12 on socket 1 00:05:38.712 EAL: Detected lcore 47 as core 13 on socket 1 00:05:38.712 EAL: Maximum logical cores by configuration: 128 00:05:38.712 EAL: Detected CPU lcores: 48 00:05:38.712 EAL: Detected NUMA nodes: 2 00:05:38.712 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:05:38.712 EAL: Detected shared linkage of DPDK 00:05:38.712 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:05:38.712 EAL: open shared lib 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:05:38.712 EAL: Registered [vdev] bus. 00:05:38.712 EAL: bus.vdev log level changed from disabled to notice 00:05:38.712 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:05:38.712 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:05:38.712 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:38.712 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:38.712 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:05:38.712 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:05:38.712 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:05:38.712 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:05:38.712 EAL: No shared files mode enabled, IPC will be disabled 00:05:38.973 EAL: No shared files mode enabled, IPC is disabled 00:05:38.973 EAL: Bus pci wants IOVA as 'DC' 00:05:38.973 EAL: Bus vdev wants IOVA as 'DC' 00:05:38.973 EAL: Buses did not request a specific IOVA mode. 00:05:38.973 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:38.973 EAL: Selected IOVA mode 'VA' 00:05:38.973 EAL: No free 2048 kB hugepages reported on node 1 00:05:38.973 EAL: Probing VFIO support... 00:05:38.973 EAL: IOMMU type 1 (Type 1) is supported 00:05:38.973 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:38.973 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:38.973 EAL: VFIO support initialized 00:05:38.973 EAL: Ask a virtual area of 0x2e000 bytes 00:05:38.973 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:38.973 EAL: Setting up physically contiguous memory... 
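EAL settles on IOVA mode 'VA' above because the box has a usable IOMMU; outside the test, that precondition can be checked from sysfs (a quick sanity probe, not part of the suite):

  ls /sys/kernel/iommu_groups | wc -l
  # non-zero when the kernel IOMMU is active, which is what vfio-pci and IOVA-as-VA require
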
00:05:38.973 EAL: Setting maximum number of open files to 524288 00:05:38.973 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:38.973 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:38.973 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:38.973 EAL: Ask a virtual area of 0x61000 bytes 00:05:38.973 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:38.973 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:38.973 EAL: Ask a virtual area of 0x400000000 bytes 00:05:38.973 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:38.973 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:38.973 EAL: Ask a virtual area of 0x61000 bytes 00:05:38.973 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:38.973 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:38.973 EAL: Ask a virtual area of 0x400000000 bytes 00:05:38.973 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:38.973 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:38.973 EAL: Ask a virtual area of 0x61000 bytes 00:05:38.973 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:38.973 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:38.973 EAL: Ask a virtual area of 0x400000000 bytes 00:05:38.973 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:38.973 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:38.973 EAL: Ask a virtual area of 0x61000 bytes 00:05:38.973 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:38.973 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:38.973 EAL: Ask a virtual area of 0x400000000 bytes 00:05:38.973 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:38.973 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:38.973 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:38.973 EAL: Ask a virtual area of 0x61000 bytes 00:05:38.973 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:38.973 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:38.973 EAL: Ask a virtual area of 0x400000000 bytes 00:05:38.973 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:38.973 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:38.973 EAL: Ask a virtual area of 0x61000 bytes 00:05:38.973 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:38.973 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:38.973 EAL: Ask a virtual area of 0x400000000 bytes 00:05:38.973 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:38.973 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:38.973 EAL: Ask a virtual area of 0x61000 bytes 00:05:38.973 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:38.973 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:38.973 EAL: Ask a virtual area of 0x400000000 bytes 00:05:38.973 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:38.973 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:38.973 EAL: Ask a virtual area of 0x61000 bytes 00:05:38.973 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:38.973 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:38.973 EAL: Ask a virtual area of 0x400000000 bytes 00:05:38.973 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:05:38.973 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:38.973 EAL: Hugepages will be freed exactly as allocated. 00:05:38.973 EAL: No shared files mode enabled, IPC is disabled 00:05:38.973 EAL: No shared files mode enabled, IPC is disabled 00:05:38.973 EAL: TSC frequency is ~2700000 KHz 00:05:38.973 EAL: Main lcore 0 is ready (tid=7f7a579c3a00;cpuset=[0]) 00:05:38.973 EAL: Trying to obtain current memory policy. 00:05:38.973 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:38.973 EAL: Restoring previous memory policy: 0 00:05:38.973 EAL: request: mp_malloc_sync 00:05:38.973 EAL: No shared files mode enabled, IPC is disabled 00:05:38.973 EAL: Heap on socket 0 was expanded by 2MB 00:05:38.973 EAL: PCI device 0000:0e:00.0 on NUMA socket 0 00:05:38.973 EAL: probe driver: 8086:1583 net_i40e 00:05:38.973 EAL: Not managed by a supported kernel driver, skipped 00:05:38.973 EAL: PCI device 0000:0e:00.1 on NUMA socket 0 00:05:38.973 EAL: probe driver: 8086:1583 net_i40e 00:05:38.973 EAL: Not managed by a supported kernel driver, skipped 00:05:38.973 EAL: No shared files mode enabled, IPC is disabled 00:05:38.974 EAL: No shared files mode enabled, IPC is disabled 00:05:38.974 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:38.974 EAL: Mem event callback 'spdk:(nil)' registered 00:05:38.974 00:05:38.974 00:05:38.974 CUnit - A unit testing framework for C - Version 2.1-3 00:05:38.974 http://cunit.sourceforge.net/ 00:05:38.974 00:05:38.974 00:05:38.974 Suite: components_suite 00:05:38.974 Test: vtophys_malloc_test ...passed 00:05:38.974 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:38.974 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:38.974 EAL: Restoring previous memory policy: 4 00:05:38.974 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.974 EAL: request: mp_malloc_sync 00:05:38.974 EAL: No shared files mode enabled, IPC is disabled 00:05:38.974 EAL: Heap on socket 0 was expanded by 4MB 00:05:38.974 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.974 EAL: request: mp_malloc_sync 00:05:38.974 EAL: No shared files mode enabled, IPC is disabled 00:05:38.974 EAL: Heap on socket 0 was shrunk by 4MB 00:05:38.974 EAL: Trying to obtain current memory policy. 00:05:38.974 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:38.974 EAL: Restoring previous memory policy: 4 00:05:38.974 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.974 EAL: request: mp_malloc_sync 00:05:38.974 EAL: No shared files mode enabled, IPC is disabled 00:05:38.974 EAL: Heap on socket 0 was expanded by 6MB 00:05:38.974 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.974 EAL: request: mp_malloc_sync 00:05:38.974 EAL: No shared files mode enabled, IPC is disabled 00:05:38.974 EAL: Heap on socket 0 was shrunk by 6MB 00:05:38.974 EAL: Trying to obtain current memory policy. 00:05:38.974 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:38.974 EAL: Restoring previous memory policy: 4 00:05:38.974 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.974 EAL: request: mp_malloc_sync 00:05:38.974 EAL: No shared files mode enabled, IPC is disabled 00:05:38.974 EAL: Heap on socket 0 was expanded by 10MB 00:05:38.974 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.974 EAL: request: mp_malloc_sync 00:05:38.974 EAL: No shared files mode enabled, IPC is disabled 00:05:38.974 EAL: Heap on socket 0 was shrunk by 10MB 00:05:38.974 EAL: Trying to obtain current memory policy. 
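The 0x400000000-byte reservations in the memseg setup above are the list geometry multiplied out: each of the 4 lists per socket holds n_segs:8192 segments of hugepage_sz:2097152 bytes:

  printf '0x%x\n' $(( 8192 * 2097152 ))
  # -> 0x400000000 (16 GiB) of virtual address space reserved per memseg list
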
00:05:38.974 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:38.974 EAL: Restoring previous memory policy: 4 00:05:38.974 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.974 EAL: request: mp_malloc_sync 00:05:38.974 EAL: No shared files mode enabled, IPC is disabled 00:05:38.974 EAL: Heap on socket 0 was expanded by 18MB 00:05:38.974 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.974 EAL: request: mp_malloc_sync 00:05:38.974 EAL: No shared files mode enabled, IPC is disabled 00:05:38.974 EAL: Heap on socket 0 was shrunk by 18MB 00:05:38.974 EAL: Trying to obtain current memory policy. 00:05:38.974 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:38.974 EAL: Restoring previous memory policy: 4 00:05:38.974 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.974 EAL: request: mp_malloc_sync 00:05:38.974 EAL: No shared files mode enabled, IPC is disabled 00:05:38.974 EAL: Heap on socket 0 was expanded by 34MB 00:05:38.974 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.974 EAL: request: mp_malloc_sync 00:05:38.974 EAL: No shared files mode enabled, IPC is disabled 00:05:38.974 EAL: Heap on socket 0 was shrunk by 34MB 00:05:38.974 EAL: Trying to obtain current memory policy. 00:05:38.974 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:38.974 EAL: Restoring previous memory policy: 4 00:05:38.974 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.974 EAL: request: mp_malloc_sync 00:05:38.974 EAL: No shared files mode enabled, IPC is disabled 00:05:38.974 EAL: Heap on socket 0 was expanded by 66MB 00:05:38.974 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.974 EAL: request: mp_malloc_sync 00:05:38.974 EAL: No shared files mode enabled, IPC is disabled 00:05:38.974 EAL: Heap on socket 0 was shrunk by 66MB 00:05:38.974 EAL: Trying to obtain current memory policy. 00:05:38.974 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:38.974 EAL: Restoring previous memory policy: 4 00:05:38.974 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.974 EAL: request: mp_malloc_sync 00:05:38.974 EAL: No shared files mode enabled, IPC is disabled 00:05:38.974 EAL: Heap on socket 0 was expanded by 130MB 00:05:38.974 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.974 EAL: request: mp_malloc_sync 00:05:38.974 EAL: No shared files mode enabled, IPC is disabled 00:05:38.974 EAL: Heap on socket 0 was shrunk by 130MB 00:05:38.974 EAL: Trying to obtain current memory policy. 00:05:38.974 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:39.234 EAL: Restoring previous memory policy: 4 00:05:39.234 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.234 EAL: request: mp_malloc_sync 00:05:39.234 EAL: No shared files mode enabled, IPC is disabled 00:05:39.234 EAL: Heap on socket 0 was expanded by 258MB 00:05:39.234 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.234 EAL: request: mp_malloc_sync 00:05:39.234 EAL: No shared files mode enabled, IPC is disabled 00:05:39.234 EAL: Heap on socket 0 was shrunk by 258MB 00:05:39.234 EAL: Trying to obtain current memory policy. 
00:05:39.234 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:39.494 EAL: Restoring previous memory policy: 4 00:05:39.494 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.494 EAL: request: mp_malloc_sync 00:05:39.494 EAL: No shared files mode enabled, IPC is disabled 00:05:39.494 EAL: Heap on socket 0 was expanded by 514MB 00:05:39.494 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.494 EAL: request: mp_malloc_sync 00:05:39.494 EAL: No shared files mode enabled, IPC is disabled 00:05:39.494 EAL: Heap on socket 0 was shrunk by 514MB 00:05:39.494 EAL: Trying to obtain current memory policy. 00:05:39.494 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:40.062 EAL: Restoring previous memory policy: 4 00:05:40.062 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.062 EAL: request: mp_malloc_sync 00:05:40.062 EAL: No shared files mode enabled, IPC is disabled 00:05:40.062 EAL: Heap on socket 0 was expanded by 1026MB 00:05:40.062 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.322 EAL: request: mp_malloc_sync 00:05:40.322 EAL: No shared files mode enabled, IPC is disabled 00:05:40.322 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:40.322 passed 00:05:40.322 00:05:40.322 Run Summary: Type Total Ran Passed Failed Inactive 00:05:40.322 suites 1 1 n/a 0 0 00:05:40.322 tests 2 2 2 0 0 00:05:40.322 asserts 497 497 497 0 n/a 00:05:40.322 00:05:40.322 Elapsed time = 1.421 seconds 00:05:40.322 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.322 EAL: request: mp_malloc_sync 00:05:40.322 EAL: No shared files mode enabled, IPC is disabled 00:05:40.322 EAL: Heap on socket 0 was shrunk by 2MB 00:05:40.322 EAL: No shared files mode enabled, IPC is disabled 00:05:40.322 EAL: No shared files mode enabled, IPC is disabled 00:05:40.322 EAL: No shared files mode enabled, IPC is disabled 00:05:40.322 00:05:40.322 real 0m1.544s 00:05:40.322 user 0m0.881s 00:05:40.322 sys 0m0.625s 00:05:40.322 17:52:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:40.322 17:52:29 -- common/autotest_common.sh@10 -- # set +x 00:05:40.322 ************************************ 00:05:40.322 END TEST env_vtophys 00:05:40.322 ************************************ 00:05:40.322 17:52:29 -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:40.322 17:52:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:40.322 17:52:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:40.323 17:52:29 -- common/autotest_common.sh@10 -- # set +x 00:05:40.583 ************************************ 00:05:40.583 START TEST env_pci 00:05:40.583 ************************************ 00:05:40.583 17:52:29 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:40.583 00:05:40.583 00:05:40.583 CUnit - A unit testing framework for C - Version 2.1-3 00:05:40.583 http://cunit.sourceforge.net/ 00:05:40.583 00:05:40.583 00:05:40.583 Suite: pci 00:05:40.583 Test: pci_hook ...[2024-04-15 17:52:29.303594] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3194678 has claimed it 00:05:40.583 EAL: Cannot find device (10000:00:01.0) 00:05:40.583 EAL: Failed to attach device on primary process 00:05:40.583 passed 00:05:40.583 00:05:40.583 Run Summary: Type Total Ran Passed Failed Inactive 00:05:40.583 suites 1 1 n/a 0 0 00:05:40.583 tests 1 1 1 0 0 
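The allocation sizes env_vtophys stepped through above are not arbitrary: after the initial 2MB expansion they follow a 2^k+2 MB progression, easy to reproduce:

  for k in $(seq 1 10); do printf '%dMB ' $(( (1 << k) + 2 )); done; echo
  # 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB
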
00:05:40.583 asserts 25 25 25 0 n/a 00:05:40.583 00:05:40.583 Elapsed time = 0.022 seconds 00:05:40.583 00:05:40.583 real 0m0.033s 00:05:40.583 user 0m0.010s 00:05:40.583 sys 0m0.023s 00:05:40.583 17:52:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:40.583 17:52:29 -- common/autotest_common.sh@10 -- # set +x 00:05:40.583 ************************************ 00:05:40.583 END TEST env_pci 00:05:40.583 ************************************ 00:05:40.583 17:52:29 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:40.583 17:52:29 -- env/env.sh@15 -- # uname 00:05:40.583 17:52:29 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:40.583 17:52:29 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:40.583 17:52:29 -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:40.583 17:52:29 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:05:40.583 17:52:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:40.583 17:52:29 -- common/autotest_common.sh@10 -- # set +x 00:05:40.583 ************************************ 00:05:40.583 START TEST env_dpdk_post_init 00:05:40.583 ************************************ 00:05:40.583 17:52:29 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:40.583 EAL: Detected CPU lcores: 48 00:05:40.583 EAL: Detected NUMA nodes: 2 00:05:40.583 EAL: Detected shared linkage of DPDK 00:05:40.583 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:40.583 EAL: Selected IOVA mode 'VA' 00:05:40.583 EAL: No free 2048 kB hugepages reported on node 1 00:05:40.583 EAL: VFIO support initialized 00:05:40.583 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:40.843 EAL: Using IOMMU type 1 (Type 1) 00:05:40.843 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:05:40.843 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:05:40.843 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:05:40.843 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:05:40.843 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:05:40.843 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:05:40.843 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:05:40.843 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:05:40.843 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:05:40.844 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:05:40.844 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:05:40.844 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:05:40.844 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:05:40.844 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:05:40.844 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:05:40.844 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:05:41.811 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:82:00.0 (socket 1) 00:05:45.119 EAL: Releasing PCI mapped resource for 0000:82:00.0 00:05:45.119 EAL: 
Calling pci_unmap_resource for 0000:82:00.0 at 0x202001040000 00:05:45.119 Starting DPDK initialization... 00:05:45.120 Starting SPDK post initialization... 00:05:45.120 SPDK NVMe probe 00:05:45.120 Attaching to 0000:82:00.0 00:05:45.120 Attached to 0000:82:00.0 00:05:45.120 Cleaning up... 00:05:45.120 00:05:45.120 real 0m4.421s 00:05:45.120 user 0m3.248s 00:05:45.120 sys 0m0.226s 00:05:45.120 17:52:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:45.120 17:52:33 -- common/autotest_common.sh@10 -- # set +x 00:05:45.120 ************************************ 00:05:45.120 END TEST env_dpdk_post_init 00:05:45.120 ************************************ 00:05:45.120 17:52:33 -- env/env.sh@26 -- # uname 00:05:45.120 17:52:33 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:45.120 17:52:33 -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:45.120 17:52:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:45.120 17:52:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:45.120 17:52:33 -- common/autotest_common.sh@10 -- # set +x 00:05:45.120 ************************************ 00:05:45.120 START TEST env_mem_callbacks 00:05:45.120 ************************************ 00:05:45.120 17:52:34 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:45.379 EAL: Detected CPU lcores: 48 00:05:45.379 EAL: Detected NUMA nodes: 2 00:05:45.379 EAL: Detected shared linkage of DPDK 00:05:45.379 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:45.379 EAL: Selected IOVA mode 'VA' 00:05:45.379 EAL: No free 2048 kB hugepages reported on node 1 00:05:45.379 EAL: VFIO support initialized 00:05:45.379 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:45.379 00:05:45.379 00:05:45.379 CUnit - A unit testing framework for C - Version 2.1-3 00:05:45.379 http://cunit.sourceforge.net/ 00:05:45.379 00:05:45.379 00:05:45.379 Suite: memory 00:05:45.379 Test: test ... 
00:05:45.379 register 0x200000200000 2097152 00:05:45.379 malloc 3145728 00:05:45.379 register 0x200000400000 4194304 00:05:45.379 buf 0x200000500000 len 3145728 PASSED 00:05:45.379 malloc 64 00:05:45.379 buf 0x2000004fff40 len 64 PASSED 00:05:45.379 malloc 4194304 00:05:45.379 register 0x200000800000 6291456 00:05:45.379 buf 0x200000a00000 len 4194304 PASSED 00:05:45.379 free 0x200000500000 3145728 00:05:45.379 free 0x2000004fff40 64 00:05:45.379 unregister 0x200000400000 4194304 PASSED 00:05:45.379 free 0x200000a00000 4194304 00:05:45.379 unregister 0x200000800000 6291456 PASSED 00:05:45.379 malloc 8388608 00:05:45.379 register 0x200000400000 10485760 00:05:45.379 buf 0x200000600000 len 8388608 PASSED 00:05:45.379 free 0x200000600000 8388608 00:05:45.379 unregister 0x200000400000 10485760 PASSED 00:05:45.379 passed 00:05:45.379 00:05:45.379 Run Summary: Type Total Ran Passed Failed Inactive 00:05:45.379 suites 1 1 n/a 0 0 00:05:45.379 tests 1 1 1 0 0 00:05:45.379 asserts 15 15 15 0 n/a 00:05:45.379 00:05:45.379 Elapsed time = 0.006 seconds 00:05:45.379 00:05:45.379 real 0m0.097s 00:05:45.379 user 0m0.023s 00:05:45.379 sys 0m0.073s 00:05:45.379 17:52:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:45.379 17:52:34 -- common/autotest_common.sh@10 -- # set +x 00:05:45.379 ************************************ 00:05:45.379 END TEST env_mem_callbacks 00:05:45.379 ************************************ 00:05:45.379 00:05:45.379 real 0m7.075s 00:05:45.379 user 0m4.647s 00:05:45.379 sys 0m1.400s 00:05:45.379 17:52:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:45.379 17:52:34 -- common/autotest_common.sh@10 -- # set +x 00:05:45.379 ************************************ 00:05:45.379 END TEST env 00:05:45.379 ************************************ 00:05:45.379 17:52:34 -- spdk/autotest.sh@165 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:45.379 17:52:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:45.379 17:52:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:45.379 17:52:34 -- common/autotest_common.sh@10 -- # set +x 00:05:45.379 ************************************ 00:05:45.379 START TEST rpc 00:05:45.379 ************************************ 00:05:45.379 17:52:34 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:45.639 * Looking for test storage... 00:05:45.639 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:45.639 17:52:34 -- rpc/rpc.sh@65 -- # spdk_pid=3195355 00:05:45.639 17:52:34 -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:45.639 17:52:34 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:45.639 17:52:34 -- rpc/rpc.sh@67 -- # waitforlisten 3195355 00:05:45.639 17:52:34 -- common/autotest_common.sh@817 -- # '[' -z 3195355 ']' 00:05:45.639 17:52:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.639 17:52:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:45.639 17:52:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
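waitforlisten above blocks until the freshly started spdk_tgt answers on its RPC socket; the same readiness check can be issued manually (a sketch, default socket path assumed):

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods > /dev/null && echo 'spdk_tgt is listening'
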
00:05:45.639 17:52:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:45.639 17:52:34 -- common/autotest_common.sh@10 -- # set +x 00:05:45.639 [2024-04-15 17:52:34.450000] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:05:45.639 [2024-04-15 17:52:34.450133] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3195355 ] 00:05:45.639 EAL: No free 2048 kB hugepages reported on node 1 00:05:45.639 [2024-04-15 17:52:34.528275] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.899 [2024-04-15 17:52:34.624616] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:45.899 [2024-04-15 17:52:34.624691] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3195355' to capture a snapshot of events at runtime. 00:05:45.899 [2024-04-15 17:52:34.624708] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:45.899 [2024-04-15 17:52:34.624722] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:45.899 [2024-04-15 17:52:34.624735] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3195355 for offline analysis/debug. 00:05:45.899 [2024-04-15 17:52:34.624769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.159 17:52:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:46.159 17:52:34 -- common/autotest_common.sh@850 -- # return 0 00:05:46.159 17:52:34 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:46.159 17:52:34 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:46.159 17:52:34 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:46.159 17:52:34 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:46.159 17:52:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:46.159 17:52:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:46.159 17:52:34 -- common/autotest_common.sh@10 -- # set +x 00:05:46.159 ************************************ 00:05:46.159 START TEST rpc_integrity 00:05:46.159 ************************************ 00:05:46.159 17:52:35 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:05:46.159 17:52:35 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:46.159 17:52:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:46.159 17:52:35 -- common/autotest_common.sh@10 -- # set +x 00:05:46.159 17:52:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:46.159 17:52:35 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:46.159 17:52:35 -- rpc/rpc.sh@13 -- # jq length 00:05:46.159 17:52:35 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:46.159 17:52:35 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:46.418 17:52:35 -- common/autotest_common.sh@549 -- # xtrace_disable 
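The rpc_integrity subtest starting here is a create/claim/teardown round-trip against the spdk_tgt just launched on /var/tmp/spdk.sock. Reproduced by hand from the spdk checkout it would look roughly like this, a sketch using the same RPCs the test issues (rpc_cmd is a thin wrapper over scripts/rpc.py):

  scripts/rpc.py bdev_malloc_create 8 512                # returns the name Malloc0
  scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
  scripts/rpc.py bdev_get_bdevs | jq length              # 2: Malloc0 (now claimed) plus Passthru0
  scripts/rpc.py bdev_passthru_delete Passthru0
  scripts/rpc.py bdev_malloc_delete Malloc0
  scripts/rpc.py bdev_get_bdevs | jq length              # back to 0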
00:05:46.418 17:52:35 -- common/autotest_common.sh@10 -- # set +x 00:05:46.418 17:52:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:46.418 17:52:35 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:46.418 17:52:35 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:46.418 17:52:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:46.418 17:52:35 -- common/autotest_common.sh@10 -- # set +x 00:05:46.418 17:52:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:46.418 17:52:35 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:46.418 { 00:05:46.418 "name": "Malloc0", 00:05:46.418 "aliases": [ 00:05:46.418 "37aa6ff4-5d86-4d17-887c-b8bbaf751d7a" 00:05:46.418 ], 00:05:46.418 "product_name": "Malloc disk", 00:05:46.418 "block_size": 512, 00:05:46.418 "num_blocks": 16384, 00:05:46.418 "uuid": "37aa6ff4-5d86-4d17-887c-b8bbaf751d7a", 00:05:46.418 "assigned_rate_limits": { 00:05:46.418 "rw_ios_per_sec": 0, 00:05:46.418 "rw_mbytes_per_sec": 0, 00:05:46.418 "r_mbytes_per_sec": 0, 00:05:46.418 "w_mbytes_per_sec": 0 00:05:46.418 }, 00:05:46.418 "claimed": false, 00:05:46.418 "zoned": false, 00:05:46.418 "supported_io_types": { 00:05:46.418 "read": true, 00:05:46.418 "write": true, 00:05:46.418 "unmap": true, 00:05:46.418 "write_zeroes": true, 00:05:46.418 "flush": true, 00:05:46.418 "reset": true, 00:05:46.418 "compare": false, 00:05:46.418 "compare_and_write": false, 00:05:46.418 "abort": true, 00:05:46.418 "nvme_admin": false, 00:05:46.418 "nvme_io": false 00:05:46.418 }, 00:05:46.418 "memory_domains": [ 00:05:46.418 { 00:05:46.418 "dma_device_id": "system", 00:05:46.418 "dma_device_type": 1 00:05:46.418 }, 00:05:46.418 { 00:05:46.418 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:46.418 "dma_device_type": 2 00:05:46.418 } 00:05:46.418 ], 00:05:46.418 "driver_specific": {} 00:05:46.418 } 00:05:46.418 ]' 00:05:46.418 17:52:35 -- rpc/rpc.sh@17 -- # jq length 00:05:46.418 17:52:35 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:46.418 17:52:35 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:46.418 17:52:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:46.418 17:52:35 -- common/autotest_common.sh@10 -- # set +x 00:05:46.418 [2024-04-15 17:52:35.179681] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:46.418 [2024-04-15 17:52:35.179731] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:46.418 [2024-04-15 17:52:35.179757] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xaa88d0 00:05:46.418 [2024-04-15 17:52:35.179773] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:46.418 [2024-04-15 17:52:35.181214] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:46.418 [2024-04-15 17:52:35.181242] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:46.418 Passthru0 00:05:46.418 17:52:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:46.418 17:52:35 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:46.418 17:52:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:46.418 17:52:35 -- common/autotest_common.sh@10 -- # set +x 00:05:46.418 17:52:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:46.418 17:52:35 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:46.418 { 00:05:46.418 "name": "Malloc0", 00:05:46.418 "aliases": [ 00:05:46.418 "37aa6ff4-5d86-4d17-887c-b8bbaf751d7a" 00:05:46.418 ], 00:05:46.418 "product_name": "Malloc disk", 00:05:46.418 "block_size": 512, 
00:05:46.418 "num_blocks": 16384, 00:05:46.418 "uuid": "37aa6ff4-5d86-4d17-887c-b8bbaf751d7a", 00:05:46.418 "assigned_rate_limits": { 00:05:46.418 "rw_ios_per_sec": 0, 00:05:46.418 "rw_mbytes_per_sec": 0, 00:05:46.418 "r_mbytes_per_sec": 0, 00:05:46.418 "w_mbytes_per_sec": 0 00:05:46.418 }, 00:05:46.418 "claimed": true, 00:05:46.418 "claim_type": "exclusive_write", 00:05:46.418 "zoned": false, 00:05:46.418 "supported_io_types": { 00:05:46.418 "read": true, 00:05:46.418 "write": true, 00:05:46.418 "unmap": true, 00:05:46.418 "write_zeroes": true, 00:05:46.418 "flush": true, 00:05:46.418 "reset": true, 00:05:46.418 "compare": false, 00:05:46.418 "compare_and_write": false, 00:05:46.418 "abort": true, 00:05:46.418 "nvme_admin": false, 00:05:46.418 "nvme_io": false 00:05:46.418 }, 00:05:46.418 "memory_domains": [ 00:05:46.418 { 00:05:46.418 "dma_device_id": "system", 00:05:46.419 "dma_device_type": 1 00:05:46.419 }, 00:05:46.419 { 00:05:46.419 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:46.419 "dma_device_type": 2 00:05:46.419 } 00:05:46.419 ], 00:05:46.419 "driver_specific": {} 00:05:46.419 }, 00:05:46.419 { 00:05:46.419 "name": "Passthru0", 00:05:46.419 "aliases": [ 00:05:46.419 "630edbd2-9164-5bb3-b5c8-b05fda4981ef" 00:05:46.419 ], 00:05:46.419 "product_name": "passthru", 00:05:46.419 "block_size": 512, 00:05:46.419 "num_blocks": 16384, 00:05:46.419 "uuid": "630edbd2-9164-5bb3-b5c8-b05fda4981ef", 00:05:46.419 "assigned_rate_limits": { 00:05:46.419 "rw_ios_per_sec": 0, 00:05:46.419 "rw_mbytes_per_sec": 0, 00:05:46.419 "r_mbytes_per_sec": 0, 00:05:46.419 "w_mbytes_per_sec": 0 00:05:46.419 }, 00:05:46.419 "claimed": false, 00:05:46.419 "zoned": false, 00:05:46.419 "supported_io_types": { 00:05:46.419 "read": true, 00:05:46.419 "write": true, 00:05:46.419 "unmap": true, 00:05:46.419 "write_zeroes": true, 00:05:46.419 "flush": true, 00:05:46.419 "reset": true, 00:05:46.419 "compare": false, 00:05:46.419 "compare_and_write": false, 00:05:46.419 "abort": true, 00:05:46.419 "nvme_admin": false, 00:05:46.419 "nvme_io": false 00:05:46.419 }, 00:05:46.419 "memory_domains": [ 00:05:46.419 { 00:05:46.419 "dma_device_id": "system", 00:05:46.419 "dma_device_type": 1 00:05:46.419 }, 00:05:46.419 { 00:05:46.419 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:46.419 "dma_device_type": 2 00:05:46.419 } 00:05:46.419 ], 00:05:46.419 "driver_specific": { 00:05:46.419 "passthru": { 00:05:46.419 "name": "Passthru0", 00:05:46.419 "base_bdev_name": "Malloc0" 00:05:46.419 } 00:05:46.419 } 00:05:46.419 } 00:05:46.419 ]' 00:05:46.419 17:52:35 -- rpc/rpc.sh@21 -- # jq length 00:05:46.419 17:52:35 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:46.419 17:52:35 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:46.419 17:52:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:46.419 17:52:35 -- common/autotest_common.sh@10 -- # set +x 00:05:46.419 17:52:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:46.419 17:52:35 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:46.419 17:52:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:46.419 17:52:35 -- common/autotest_common.sh@10 -- # set +x 00:05:46.419 17:52:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:46.419 17:52:35 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:46.419 17:52:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:46.419 17:52:35 -- common/autotest_common.sh@10 -- # set +x 00:05:46.419 17:52:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:46.419 17:52:35 -- 
rpc/rpc.sh@25 -- # bdevs='[]' 00:05:46.419 17:52:35 -- rpc/rpc.sh@26 -- # jq length 00:05:46.419 17:52:35 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:46.419 00:05:46.419 real 0m0.284s 00:05:46.419 user 0m0.200s 00:05:46.419 sys 0m0.024s 00:05:46.419 17:52:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:46.419 17:52:35 -- common/autotest_common.sh@10 -- # set +x 00:05:46.419 ************************************ 00:05:46.419 END TEST rpc_integrity 00:05:46.419 ************************************ 00:05:46.419 17:52:35 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:46.419 17:52:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:46.419 17:52:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:46.419 17:52:35 -- common/autotest_common.sh@10 -- # set +x 00:05:46.679 ************************************ 00:05:46.679 START TEST rpc_plugins 00:05:46.679 ************************************ 00:05:46.679 17:52:35 -- common/autotest_common.sh@1111 -- # rpc_plugins 00:05:46.679 17:52:35 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:46.679 17:52:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:46.679 17:52:35 -- common/autotest_common.sh@10 -- # set +x 00:05:46.679 17:52:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:46.679 17:52:35 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:46.679 17:52:35 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:46.679 17:52:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:46.679 17:52:35 -- common/autotest_common.sh@10 -- # set +x 00:05:46.679 17:52:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:46.679 17:52:35 -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:46.679 { 00:05:46.679 "name": "Malloc1", 00:05:46.679 "aliases": [ 00:05:46.679 "6d3880c6-5287-4095-826f-8563db3086fa" 00:05:46.679 ], 00:05:46.679 "product_name": "Malloc disk", 00:05:46.679 "block_size": 4096, 00:05:46.679 "num_blocks": 256, 00:05:46.679 "uuid": "6d3880c6-5287-4095-826f-8563db3086fa", 00:05:46.679 "assigned_rate_limits": { 00:05:46.679 "rw_ios_per_sec": 0, 00:05:46.679 "rw_mbytes_per_sec": 0, 00:05:46.679 "r_mbytes_per_sec": 0, 00:05:46.679 "w_mbytes_per_sec": 0 00:05:46.679 }, 00:05:46.679 "claimed": false, 00:05:46.679 "zoned": false, 00:05:46.679 "supported_io_types": { 00:05:46.679 "read": true, 00:05:46.679 "write": true, 00:05:46.679 "unmap": true, 00:05:46.679 "write_zeroes": true, 00:05:46.679 "flush": true, 00:05:46.679 "reset": true, 00:05:46.679 "compare": false, 00:05:46.679 "compare_and_write": false, 00:05:46.679 "abort": true, 00:05:46.679 "nvme_admin": false, 00:05:46.679 "nvme_io": false 00:05:46.679 }, 00:05:46.679 "memory_domains": [ 00:05:46.679 { 00:05:46.679 "dma_device_id": "system", 00:05:46.679 "dma_device_type": 1 00:05:46.679 }, 00:05:46.679 { 00:05:46.679 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:46.679 "dma_device_type": 2 00:05:46.679 } 00:05:46.679 ], 00:05:46.679 "driver_specific": {} 00:05:46.679 } 00:05:46.679 ]' 00:05:46.679 17:52:35 -- rpc/rpc.sh@32 -- # jq length 00:05:46.679 17:52:35 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:46.679 17:52:35 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:46.679 17:52:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:46.679 17:52:35 -- common/autotest_common.sh@10 -- # set +x 00:05:46.679 17:52:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:46.679 17:52:35 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:46.679 17:52:35 -- common/autotest_common.sh@549 
-- # xtrace_disable 00:05:46.679 17:52:35 -- common/autotest_common.sh@10 -- # set +x 00:05:46.679 17:52:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:46.679 17:52:35 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:46.679 17:52:35 -- rpc/rpc.sh@36 -- # jq length 00:05:46.680 17:52:35 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:46.680 00:05:46.680 real 0m0.160s 00:05:46.680 user 0m0.118s 00:05:46.680 sys 0m0.012s 00:05:46.680 17:52:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:46.680 17:52:35 -- common/autotest_common.sh@10 -- # set +x 00:05:46.680 ************************************ 00:05:46.680 END TEST rpc_plugins 00:05:46.680 ************************************ 00:05:46.940 17:52:35 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:46.940 17:52:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:46.940 17:52:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:46.940 17:52:35 -- common/autotest_common.sh@10 -- # set +x 00:05:46.940 ************************************ 00:05:46.940 START TEST rpc_trace_cmd_test 00:05:46.940 ************************************ 00:05:46.940 17:52:35 -- common/autotest_common.sh@1111 -- # rpc_trace_cmd_test 00:05:46.940 17:52:35 -- rpc/rpc.sh@40 -- # local info 00:05:46.940 17:52:35 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:46.940 17:52:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:46.940 17:52:35 -- common/autotest_common.sh@10 -- # set +x 00:05:46.940 17:52:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:46.940 17:52:35 -- rpc/rpc.sh@42 -- # info='{ 00:05:46.940 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3195355", 00:05:46.940 "tpoint_group_mask": "0x8", 00:05:46.940 "iscsi_conn": { 00:05:46.940 "mask": "0x2", 00:05:46.940 "tpoint_mask": "0x0" 00:05:46.940 }, 00:05:46.940 "scsi": { 00:05:46.940 "mask": "0x4", 00:05:46.940 "tpoint_mask": "0x0" 00:05:46.940 }, 00:05:46.940 "bdev": { 00:05:46.940 "mask": "0x8", 00:05:46.940 "tpoint_mask": "0xffffffffffffffff" 00:05:46.940 }, 00:05:46.940 "nvmf_rdma": { 00:05:46.940 "mask": "0x10", 00:05:46.940 "tpoint_mask": "0x0" 00:05:46.940 }, 00:05:46.940 "nvmf_tcp": { 00:05:46.940 "mask": "0x20", 00:05:46.940 "tpoint_mask": "0x0" 00:05:46.940 }, 00:05:46.940 "ftl": { 00:05:46.940 "mask": "0x40", 00:05:46.940 "tpoint_mask": "0x0" 00:05:46.940 }, 00:05:46.940 "blobfs": { 00:05:46.940 "mask": "0x80", 00:05:46.940 "tpoint_mask": "0x0" 00:05:46.940 }, 00:05:46.940 "dsa": { 00:05:46.940 "mask": "0x200", 00:05:46.940 "tpoint_mask": "0x0" 00:05:46.940 }, 00:05:46.940 "thread": { 00:05:46.940 "mask": "0x400", 00:05:46.940 "tpoint_mask": "0x0" 00:05:46.940 }, 00:05:46.940 "nvme_pcie": { 00:05:46.940 "mask": "0x800", 00:05:46.940 "tpoint_mask": "0x0" 00:05:46.940 }, 00:05:46.940 "iaa": { 00:05:46.940 "mask": "0x1000", 00:05:46.940 "tpoint_mask": "0x0" 00:05:46.940 }, 00:05:46.940 "nvme_tcp": { 00:05:46.940 "mask": "0x2000", 00:05:46.940 "tpoint_mask": "0x0" 00:05:46.940 }, 00:05:46.940 "bdev_nvme": { 00:05:46.940 "mask": "0x4000", 00:05:46.940 "tpoint_mask": "0x0" 00:05:46.940 }, 00:05:46.940 "sock": { 00:05:46.940 "mask": "0x8000", 00:05:46.940 "tpoint_mask": "0x0" 00:05:46.940 } 00:05:46.940 }' 00:05:46.940 17:52:35 -- rpc/rpc.sh@43 -- # jq length 00:05:46.940 17:52:35 -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:46.940 17:52:35 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:46.940 17:52:35 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:46.940 17:52:35 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 
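What the jq probes here and just below assert: the target was started with '-e bdev', so trace_get_info must report tpoint_group_mask 0x8 (bit 3, the bdev group), a fully-set bdev tpoint_mask, and the shm path that spdk_trace reads. The same information can be pulled interactively; a sketch, where the snapshot command is the one the target itself suggested at startup (binary path assumed from this build tree):

  scripts/rpc.py trace_get_info | jq -r .tpoint_group_mask   # expect 0x8
  build/bin/spdk_trace -s spdk_tgt -p 3195355                # snapshot live tracepoints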
00:05:47.199 17:52:35 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:47.199 17:52:35 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:47.199 17:52:35 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:47.199 17:52:35 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:47.199 17:52:36 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:47.199 00:05:47.199 real 0m0.277s 00:05:47.199 user 0m0.247s 00:05:47.199 sys 0m0.023s 00:05:47.199 17:52:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:47.199 17:52:36 -- common/autotest_common.sh@10 -- # set +x 00:05:47.199 ************************************ 00:05:47.199 END TEST rpc_trace_cmd_test 00:05:47.199 ************************************ 00:05:47.199 17:52:36 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:47.199 17:52:36 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:47.199 17:52:36 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:47.199 17:52:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:47.199 17:52:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:47.199 17:52:36 -- common/autotest_common.sh@10 -- # set +x 00:05:47.462 ************************************ 00:05:47.462 START TEST rpc_daemon_integrity 00:05:47.462 ************************************ 00:05:47.462 17:52:36 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:05:47.462 17:52:36 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:47.462 17:52:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:47.462 17:52:36 -- common/autotest_common.sh@10 -- # set +x 00:05:47.462 17:52:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:47.462 17:52:36 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:47.462 17:52:36 -- rpc/rpc.sh@13 -- # jq length 00:05:47.462 17:52:36 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:47.462 17:52:36 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:47.462 17:52:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:47.462 17:52:36 -- common/autotest_common.sh@10 -- # set +x 00:05:47.462 17:52:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:47.462 17:52:36 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:47.462 17:52:36 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:47.462 17:52:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:47.462 17:52:36 -- common/autotest_common.sh@10 -- # set +x 00:05:47.462 17:52:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:47.462 17:52:36 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:47.462 { 00:05:47.462 "name": "Malloc2", 00:05:47.462 "aliases": [ 00:05:47.462 "687d0ecc-e071-4c95-910b-c46cd367be17" 00:05:47.462 ], 00:05:47.462 "product_name": "Malloc disk", 00:05:47.462 "block_size": 512, 00:05:47.462 "num_blocks": 16384, 00:05:47.462 "uuid": "687d0ecc-e071-4c95-910b-c46cd367be17", 00:05:47.462 "assigned_rate_limits": { 00:05:47.462 "rw_ios_per_sec": 0, 00:05:47.462 "rw_mbytes_per_sec": 0, 00:05:47.462 "r_mbytes_per_sec": 0, 00:05:47.462 "w_mbytes_per_sec": 0 00:05:47.462 }, 00:05:47.462 "claimed": false, 00:05:47.462 "zoned": false, 00:05:47.462 "supported_io_types": { 00:05:47.462 "read": true, 00:05:47.462 "write": true, 00:05:47.462 "unmap": true, 00:05:47.462 "write_zeroes": true, 00:05:47.462 "flush": true, 00:05:47.462 "reset": true, 00:05:47.462 "compare": false, 00:05:47.462 "compare_and_write": false, 00:05:47.462 "abort": true, 00:05:47.462 "nvme_admin": false, 00:05:47.462 "nvme_io": false 00:05:47.462 }, 00:05:47.462 "memory_domains": [ 00:05:47.462 { 00:05:47.462 "dma_device_id": "system", 00:05:47.462 
"dma_device_type": 1 00:05:47.462 }, 00:05:47.462 { 00:05:47.462 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:47.462 "dma_device_type": 2 00:05:47.462 } 00:05:47.462 ], 00:05:47.462 "driver_specific": {} 00:05:47.462 } 00:05:47.462 ]' 00:05:47.462 17:52:36 -- rpc/rpc.sh@17 -- # jq length 00:05:47.462 17:52:36 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:47.462 17:52:36 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:47.462 17:52:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:47.462 17:52:36 -- common/autotest_common.sh@10 -- # set +x 00:05:47.462 [2024-04-15 17:52:36.339734] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:47.462 [2024-04-15 17:52:36.339782] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:47.462 [2024-04-15 17:52:36.339814] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x8fa230 00:05:47.462 [2024-04-15 17:52:36.339830] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:47.462 [2024-04-15 17:52:36.341159] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:47.462 [2024-04-15 17:52:36.341190] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:47.462 Passthru0 00:05:47.462 17:52:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:47.462 17:52:36 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:47.462 17:52:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:47.462 17:52:36 -- common/autotest_common.sh@10 -- # set +x 00:05:47.462 17:52:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:47.462 17:52:36 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:47.462 { 00:05:47.462 "name": "Malloc2", 00:05:47.462 "aliases": [ 00:05:47.462 "687d0ecc-e071-4c95-910b-c46cd367be17" 00:05:47.462 ], 00:05:47.462 "product_name": "Malloc disk", 00:05:47.462 "block_size": 512, 00:05:47.462 "num_blocks": 16384, 00:05:47.462 "uuid": "687d0ecc-e071-4c95-910b-c46cd367be17", 00:05:47.462 "assigned_rate_limits": { 00:05:47.462 "rw_ios_per_sec": 0, 00:05:47.462 "rw_mbytes_per_sec": 0, 00:05:47.462 "r_mbytes_per_sec": 0, 00:05:47.462 "w_mbytes_per_sec": 0 00:05:47.462 }, 00:05:47.462 "claimed": true, 00:05:47.462 "claim_type": "exclusive_write", 00:05:47.462 "zoned": false, 00:05:47.462 "supported_io_types": { 00:05:47.462 "read": true, 00:05:47.462 "write": true, 00:05:47.462 "unmap": true, 00:05:47.462 "write_zeroes": true, 00:05:47.462 "flush": true, 00:05:47.462 "reset": true, 00:05:47.462 "compare": false, 00:05:47.462 "compare_and_write": false, 00:05:47.462 "abort": true, 00:05:47.462 "nvme_admin": false, 00:05:47.462 "nvme_io": false 00:05:47.462 }, 00:05:47.462 "memory_domains": [ 00:05:47.462 { 00:05:47.462 "dma_device_id": "system", 00:05:47.462 "dma_device_type": 1 00:05:47.462 }, 00:05:47.462 { 00:05:47.462 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:47.462 "dma_device_type": 2 00:05:47.462 } 00:05:47.462 ], 00:05:47.462 "driver_specific": {} 00:05:47.462 }, 00:05:47.462 { 00:05:47.462 "name": "Passthru0", 00:05:47.462 "aliases": [ 00:05:47.462 "71b4550e-3261-5623-a09a-59d4c30400d4" 00:05:47.462 ], 00:05:47.462 "product_name": "passthru", 00:05:47.462 "block_size": 512, 00:05:47.462 "num_blocks": 16384, 00:05:47.462 "uuid": "71b4550e-3261-5623-a09a-59d4c30400d4", 00:05:47.462 "assigned_rate_limits": { 00:05:47.462 "rw_ios_per_sec": 0, 00:05:47.462 "rw_mbytes_per_sec": 0, 00:05:47.462 "r_mbytes_per_sec": 0, 00:05:47.462 
"w_mbytes_per_sec": 0 00:05:47.462 }, 00:05:47.462 "claimed": false, 00:05:47.462 "zoned": false, 00:05:47.462 "supported_io_types": { 00:05:47.462 "read": true, 00:05:47.462 "write": true, 00:05:47.462 "unmap": true, 00:05:47.462 "write_zeroes": true, 00:05:47.462 "flush": true, 00:05:47.462 "reset": true, 00:05:47.462 "compare": false, 00:05:47.462 "compare_and_write": false, 00:05:47.462 "abort": true, 00:05:47.462 "nvme_admin": false, 00:05:47.462 "nvme_io": false 00:05:47.462 }, 00:05:47.462 "memory_domains": [ 00:05:47.462 { 00:05:47.462 "dma_device_id": "system", 00:05:47.462 "dma_device_type": 1 00:05:47.462 }, 00:05:47.462 { 00:05:47.462 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:47.462 "dma_device_type": 2 00:05:47.462 } 00:05:47.462 ], 00:05:47.463 "driver_specific": { 00:05:47.463 "passthru": { 00:05:47.463 "name": "Passthru0", 00:05:47.463 "base_bdev_name": "Malloc2" 00:05:47.463 } 00:05:47.463 } 00:05:47.463 } 00:05:47.463 ]' 00:05:47.463 17:52:36 -- rpc/rpc.sh@21 -- # jq length 00:05:47.463 17:52:36 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:47.463 17:52:36 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:47.463 17:52:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:47.463 17:52:36 -- common/autotest_common.sh@10 -- # set +x 00:05:47.463 17:52:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:47.463 17:52:36 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:47.463 17:52:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:47.463 17:52:36 -- common/autotest_common.sh@10 -- # set +x 00:05:47.463 17:52:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:47.463 17:52:36 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:47.463 17:52:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:47.463 17:52:36 -- common/autotest_common.sh@10 -- # set +x 00:05:47.723 17:52:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:47.723 17:52:36 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:47.723 17:52:36 -- rpc/rpc.sh@26 -- # jq length 00:05:47.723 17:52:36 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:47.723 00:05:47.723 real 0m0.236s 00:05:47.723 user 0m0.158s 00:05:47.723 sys 0m0.022s 00:05:47.723 17:52:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:47.723 17:52:36 -- common/autotest_common.sh@10 -- # set +x 00:05:47.723 ************************************ 00:05:47.723 END TEST rpc_daemon_integrity 00:05:47.723 ************************************ 00:05:47.723 17:52:36 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:47.723 17:52:36 -- rpc/rpc.sh@84 -- # killprocess 3195355 00:05:47.723 17:52:36 -- common/autotest_common.sh@936 -- # '[' -z 3195355 ']' 00:05:47.723 17:52:36 -- common/autotest_common.sh@940 -- # kill -0 3195355 00:05:47.723 17:52:36 -- common/autotest_common.sh@941 -- # uname 00:05:47.723 17:52:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:47.723 17:52:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3195355 00:05:47.723 17:52:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:47.723 17:52:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:47.723 17:52:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3195355' 00:05:47.723 killing process with pid 3195355 00:05:47.723 17:52:36 -- common/autotest_common.sh@955 -- # kill 3195355 00:05:47.723 17:52:36 -- common/autotest_common.sh@960 -- # wait 3195355 00:05:48.292 00:05:48.292 real 0m2.625s 00:05:48.292 user 0m3.466s 
00:05:48.292 sys 0m0.871s 00:05:48.292 17:52:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:48.292 17:52:36 -- common/autotest_common.sh@10 -- # set +x 00:05:48.292 ************************************ 00:05:48.292 END TEST rpc 00:05:48.292 ************************************ 00:05:48.292 17:52:36 -- spdk/autotest.sh@166 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:48.292 17:52:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:48.292 17:52:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:48.292 17:52:36 -- common/autotest_common.sh@10 -- # set +x 00:05:48.292 ************************************ 00:05:48.292 START TEST skip_rpc 00:05:48.292 ************************************ 00:05:48.292 17:52:37 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:48.292 * Looking for test storage... 00:05:48.292 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:48.292 17:52:37 -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:48.292 17:52:37 -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:48.292 17:52:37 -- rpc/skip_rpc.sh@60 -- # run_test skip_rpc test_skip_rpc 00:05:48.292 17:52:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:48.292 17:52:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:48.292 17:52:37 -- common/autotest_common.sh@10 -- # set +x 00:05:48.292 ************************************ 00:05:48.292 START TEST skip_rpc 00:05:48.292 ************************************ 00:05:48.292 17:52:37 -- common/autotest_common.sh@1111 -- # test_skip_rpc 00:05:48.292 17:52:37 -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3195965 00:05:48.292 17:52:37 -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:48.292 17:52:37 -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:48.292 17:52:37 -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:48.551 [2024-04-15 17:52:37.291748] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
00:05:48.551 [2024-04-15 17:52:37.291834] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3195965 ] 00:05:48.551 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.551 [2024-04-15 17:52:37.360030] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.551 [2024-04-15 17:52:37.453730] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.551 [2024-04-15 17:52:37.453817] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:05:53.822 17:52:42 -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:53.822 17:52:42 -- common/autotest_common.sh@638 -- # local es=0 00:05:53.822 17:52:42 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:53.822 17:52:42 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:05:53.822 17:52:42 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:53.822 17:52:42 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:05:53.822 17:52:42 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:53.822 17:52:42 -- common/autotest_common.sh@641 -- # rpc_cmd spdk_get_version 00:05:53.822 17:52:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:53.822 17:52:42 -- common/autotest_common.sh@10 -- # set +x 00:05:53.822 17:52:42 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:05:53.822 17:52:42 -- common/autotest_common.sh@641 -- # es=1 00:05:53.822 17:52:42 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:53.822 17:52:42 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:53.822 17:52:42 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:53.822 17:52:42 -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:53.822 17:52:42 -- rpc/skip_rpc.sh@23 -- # killprocess 3195965 00:05:53.822 17:52:42 -- common/autotest_common.sh@936 -- # '[' -z 3195965 ']' 00:05:53.822 17:52:42 -- common/autotest_common.sh@940 -- # kill -0 3195965 00:05:53.822 17:52:42 -- common/autotest_common.sh@941 -- # uname 00:05:53.822 17:52:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:53.822 17:52:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3195965 00:05:53.822 17:52:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:53.822 17:52:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:53.822 17:52:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3195965' 00:05:53.822 killing process with pid 3195965 00:05:53.822 17:52:42 -- common/autotest_common.sh@955 -- # kill 3195965 00:05:53.822 17:52:42 -- common/autotest_common.sh@960 -- # wait 3195965 00:05:53.822 00:05:53.822 real 0m5.462s 00:05:53.822 user 0m5.145s 00:05:53.822 sys 0m0.334s 00:05:53.822 17:52:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:53.822 17:52:42 -- common/autotest_common.sh@10 -- # set +x 00:05:53.822 ************************************ 00:05:53.822 END TEST skip_rpc 00:05:53.822 ************************************ 00:05:53.822 17:52:42 -- rpc/skip_rpc.sh@61 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:53.822 17:52:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:53.822 17:52:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:53.822 17:52:42 -- common/autotest_common.sh@10 -- # set +x 00:05:54.081 
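The skip_rpc pass condition above is a negative one: with --no-rpc-server nothing listens on /var/tmp/spdk.sock, so spdk_get_version has to fail, and the NOT wrapper turns that failure (es=1) into a test pass. Condensed into a standalone sketch of the same flow:

  build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  pid=$!
  sleep 5
  scripts/rpc.py spdk_get_version && echo 'unexpected: RPC answered' || echo 'failed as expected'
  kill $pid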
************************************ 00:05:54.081 START TEST skip_rpc_with_json 00:05:54.081 ************************************ 00:05:54.081 17:52:42 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_json 00:05:54.081 17:52:42 -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:54.081 17:52:42 -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3196662 00:05:54.081 17:52:42 -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:54.081 17:52:42 -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:54.081 17:52:42 -- rpc/skip_rpc.sh@31 -- # waitforlisten 3196662 00:05:54.081 17:52:42 -- common/autotest_common.sh@817 -- # '[' -z 3196662 ']' 00:05:54.081 17:52:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.081 17:52:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:54.081 17:52:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:54.081 17:52:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:54.081 17:52:42 -- common/autotest_common.sh@10 -- # set +x 00:05:54.081 [2024-04-15 17:52:42.949167] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:05:54.081 [2024-04-15 17:52:42.949276] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3196662 ] 00:05:54.081 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.081 [2024-04-15 17:52:43.025534] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.339 [2024-04-15 17:52:43.120362] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.597 17:52:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:54.597 17:52:43 -- common/autotest_common.sh@850 -- # return 0 00:05:54.597 17:52:43 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:54.597 17:52:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:54.597 17:52:43 -- common/autotest_common.sh@10 -- # set +x 00:05:54.597 [2024-04-15 17:52:43.416095] nvmf_rpc.c:2500:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:54.597 request: 00:05:54.597 { 00:05:54.597 "trtype": "tcp", 00:05:54.597 "method": "nvmf_get_transports", 00:05:54.597 "req_id": 1 00:05:54.597 } 00:05:54.597 Got JSON-RPC error response 00:05:54.597 response: 00:05:54.597 { 00:05:54.597 "code": -19, 00:05:54.597 "message": "No such device" 00:05:54.597 } 00:05:54.597 17:52:43 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:05:54.597 17:52:43 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:54.597 17:52:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:54.597 17:52:43 -- common/autotest_common.sh@10 -- # set +x 00:05:54.597 [2024-04-15 17:52:43.428216] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:54.597 17:52:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:54.597 17:52:43 -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:54.597 17:52:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:54.597 17:52:43 -- common/autotest_common.sh@10 -- # set +x 00:05:54.857 17:52:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
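The sequence above is probe-then-create: nvmf_get_transports returns JSON-RPC error -19 (No such device) while no TCP transport exists, the test then creates one, and save_config serializes the full runtime state into the config.json dumped below. The round-trip under test, as a sketch with paths shortened from this workspace:

  scripts/rpc.py nvmf_create_transport -t tcp
  scripts/rpc.py save_config > test/rpc/config.json
  # second half of the test: restart from the saved state with no RPC server at all
  build/bin/spdk_tgt --no-rpc-server -m 0x1 --json test/rpc/config.json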
00:05:54.857 17:52:43 -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:54.857 { 00:05:54.857 "subsystems": [ 00:05:54.857 { 00:05:54.857 "subsystem": "vfio_user_target", 00:05:54.857 "config": null 00:05:54.857 }, 00:05:54.857 { 00:05:54.857 "subsystem": "keyring", 00:05:54.857 "config": [] 00:05:54.857 }, 00:05:54.857 { 00:05:54.857 "subsystem": "iobuf", 00:05:54.857 "config": [ 00:05:54.857 { 00:05:54.857 "method": "iobuf_set_options", 00:05:54.857 "params": { 00:05:54.857 "small_pool_count": 8192, 00:05:54.857 "large_pool_count": 1024, 00:05:54.857 "small_bufsize": 8192, 00:05:54.857 "large_bufsize": 135168 00:05:54.857 } 00:05:54.857 } 00:05:54.857 ] 00:05:54.857 }, 00:05:54.857 { 00:05:54.857 "subsystem": "sock", 00:05:54.857 "config": [ 00:05:54.857 { 00:05:54.857 "method": "sock_impl_set_options", 00:05:54.857 "params": { 00:05:54.857 "impl_name": "posix", 00:05:54.857 "recv_buf_size": 2097152, 00:05:54.857 "send_buf_size": 2097152, 00:05:54.857 "enable_recv_pipe": true, 00:05:54.857 "enable_quickack": false, 00:05:54.857 "enable_placement_id": 0, 00:05:54.857 "enable_zerocopy_send_server": true, 00:05:54.857 "enable_zerocopy_send_client": false, 00:05:54.857 "zerocopy_threshold": 0, 00:05:54.857 "tls_version": 0, 00:05:54.857 "enable_ktls": false 00:05:54.857 } 00:05:54.857 }, 00:05:54.857 { 00:05:54.857 "method": "sock_impl_set_options", 00:05:54.857 "params": { 00:05:54.857 "impl_name": "ssl", 00:05:54.857 "recv_buf_size": 4096, 00:05:54.857 "send_buf_size": 4096, 00:05:54.857 "enable_recv_pipe": true, 00:05:54.857 "enable_quickack": false, 00:05:54.857 "enable_placement_id": 0, 00:05:54.857 "enable_zerocopy_send_server": true, 00:05:54.857 "enable_zerocopy_send_client": false, 00:05:54.857 "zerocopy_threshold": 0, 00:05:54.857 "tls_version": 0, 00:05:54.857 "enable_ktls": false 00:05:54.857 } 00:05:54.857 } 00:05:54.857 ] 00:05:54.857 }, 00:05:54.857 { 00:05:54.857 "subsystem": "vmd", 00:05:54.857 "config": [] 00:05:54.857 }, 00:05:54.857 { 00:05:54.857 "subsystem": "accel", 00:05:54.857 "config": [ 00:05:54.857 { 00:05:54.857 "method": "accel_set_options", 00:05:54.857 "params": { 00:05:54.857 "small_cache_size": 128, 00:05:54.857 "large_cache_size": 16, 00:05:54.857 "task_count": 2048, 00:05:54.857 "sequence_count": 2048, 00:05:54.857 "buf_count": 2048 00:05:54.857 } 00:05:54.857 } 00:05:54.857 ] 00:05:54.857 }, 00:05:54.857 { 00:05:54.857 "subsystem": "bdev", 00:05:54.857 "config": [ 00:05:54.857 { 00:05:54.857 "method": "bdev_set_options", 00:05:54.857 "params": { 00:05:54.857 "bdev_io_pool_size": 65535, 00:05:54.857 "bdev_io_cache_size": 256, 00:05:54.857 "bdev_auto_examine": true, 00:05:54.857 "iobuf_small_cache_size": 128, 00:05:54.857 "iobuf_large_cache_size": 16 00:05:54.857 } 00:05:54.857 }, 00:05:54.857 { 00:05:54.857 "method": "bdev_raid_set_options", 00:05:54.857 "params": { 00:05:54.857 "process_window_size_kb": 1024 00:05:54.857 } 00:05:54.857 }, 00:05:54.857 { 00:05:54.857 "method": "bdev_iscsi_set_options", 00:05:54.857 "params": { 00:05:54.857 "timeout_sec": 30 00:05:54.857 } 00:05:54.857 }, 00:05:54.857 { 00:05:54.857 "method": "bdev_nvme_set_options", 00:05:54.857 "params": { 00:05:54.857 "action_on_timeout": "none", 00:05:54.857 "timeout_us": 0, 00:05:54.857 "timeout_admin_us": 0, 00:05:54.857 "keep_alive_timeout_ms": 10000, 00:05:54.857 "arbitration_burst": 0, 00:05:54.857 "low_priority_weight": 0, 00:05:54.857 "medium_priority_weight": 0, 00:05:54.857 "high_priority_weight": 0, 00:05:54.857 
"nvme_adminq_poll_period_us": 10000, 00:05:54.857 "nvme_ioq_poll_period_us": 0, 00:05:54.857 "io_queue_requests": 0, 00:05:54.857 "delay_cmd_submit": true, 00:05:54.857 "transport_retry_count": 4, 00:05:54.857 "bdev_retry_count": 3, 00:05:54.857 "transport_ack_timeout": 0, 00:05:54.857 "ctrlr_loss_timeout_sec": 0, 00:05:54.857 "reconnect_delay_sec": 0, 00:05:54.857 "fast_io_fail_timeout_sec": 0, 00:05:54.857 "disable_auto_failback": false, 00:05:54.857 "generate_uuids": false, 00:05:54.857 "transport_tos": 0, 00:05:54.857 "nvme_error_stat": false, 00:05:54.857 "rdma_srq_size": 0, 00:05:54.857 "io_path_stat": false, 00:05:54.857 "allow_accel_sequence": false, 00:05:54.857 "rdma_max_cq_size": 0, 00:05:54.857 "rdma_cm_event_timeout_ms": 0, 00:05:54.857 "dhchap_digests": [ 00:05:54.857 "sha256", 00:05:54.857 "sha384", 00:05:54.857 "sha512" 00:05:54.857 ], 00:05:54.857 "dhchap_dhgroups": [ 00:05:54.857 "null", 00:05:54.857 "ffdhe2048", 00:05:54.857 "ffdhe3072", 00:05:54.857 "ffdhe4096", 00:05:54.857 "ffdhe6144", 00:05:54.857 "ffdhe8192" 00:05:54.857 ] 00:05:54.857 } 00:05:54.857 }, 00:05:54.857 { 00:05:54.857 "method": "bdev_nvme_set_hotplug", 00:05:54.857 "params": { 00:05:54.857 "period_us": 100000, 00:05:54.857 "enable": false 00:05:54.857 } 00:05:54.857 }, 00:05:54.857 { 00:05:54.857 "method": "bdev_wait_for_examine" 00:05:54.857 } 00:05:54.857 ] 00:05:54.857 }, 00:05:54.857 { 00:05:54.857 "subsystem": "scsi", 00:05:54.857 "config": null 00:05:54.857 }, 00:05:54.857 { 00:05:54.857 "subsystem": "scheduler", 00:05:54.857 "config": [ 00:05:54.857 { 00:05:54.857 "method": "framework_set_scheduler", 00:05:54.857 "params": { 00:05:54.857 "name": "static" 00:05:54.857 } 00:05:54.857 } 00:05:54.857 ] 00:05:54.857 }, 00:05:54.857 { 00:05:54.857 "subsystem": "vhost_scsi", 00:05:54.857 "config": [] 00:05:54.857 }, 00:05:54.857 { 00:05:54.857 "subsystem": "vhost_blk", 00:05:54.857 "config": [] 00:05:54.857 }, 00:05:54.857 { 00:05:54.857 "subsystem": "ublk", 00:05:54.857 "config": [] 00:05:54.857 }, 00:05:54.857 { 00:05:54.857 "subsystem": "nbd", 00:05:54.857 "config": [] 00:05:54.857 }, 00:05:54.857 { 00:05:54.857 "subsystem": "nvmf", 00:05:54.857 "config": [ 00:05:54.857 { 00:05:54.857 "method": "nvmf_set_config", 00:05:54.857 "params": { 00:05:54.857 "discovery_filter": "match_any", 00:05:54.857 "admin_cmd_passthru": { 00:05:54.857 "identify_ctrlr": false 00:05:54.857 } 00:05:54.857 } 00:05:54.857 }, 00:05:54.857 { 00:05:54.857 "method": "nvmf_set_max_subsystems", 00:05:54.857 "params": { 00:05:54.857 "max_subsystems": 1024 00:05:54.857 } 00:05:54.857 }, 00:05:54.857 { 00:05:54.857 "method": "nvmf_set_crdt", 00:05:54.857 "params": { 00:05:54.857 "crdt1": 0, 00:05:54.857 "crdt2": 0, 00:05:54.857 "crdt3": 0 00:05:54.857 } 00:05:54.857 }, 00:05:54.857 { 00:05:54.857 "method": "nvmf_create_transport", 00:05:54.857 "params": { 00:05:54.857 "trtype": "TCP", 00:05:54.857 "max_queue_depth": 128, 00:05:54.857 "max_io_qpairs_per_ctrlr": 127, 00:05:54.857 "in_capsule_data_size": 4096, 00:05:54.857 "max_io_size": 131072, 00:05:54.857 "io_unit_size": 131072, 00:05:54.857 "max_aq_depth": 128, 00:05:54.857 "num_shared_buffers": 511, 00:05:54.857 "buf_cache_size": 4294967295, 00:05:54.857 "dif_insert_or_strip": false, 00:05:54.857 "zcopy": false, 00:05:54.857 "c2h_success": true, 00:05:54.857 "sock_priority": 0, 00:05:54.857 "abort_timeout_sec": 1, 00:05:54.857 "ack_timeout": 0 00:05:54.857 } 00:05:54.857 } 00:05:54.857 ] 00:05:54.857 }, 00:05:54.857 { 00:05:54.857 "subsystem": "iscsi", 00:05:54.857 "config": [ 
00:05:54.857 { 00:05:54.857 "method": "iscsi_set_options", 00:05:54.857 "params": { 00:05:54.858 "node_base": "iqn.2016-06.io.spdk", 00:05:54.858 "max_sessions": 128, 00:05:54.858 "max_connections_per_session": 2, 00:05:54.858 "max_queue_depth": 64, 00:05:54.858 "default_time2wait": 2, 00:05:54.858 "default_time2retain": 20, 00:05:54.858 "first_burst_length": 8192, 00:05:54.858 "immediate_data": true, 00:05:54.858 "allow_duplicated_isid": false, 00:05:54.858 "error_recovery_level": 0, 00:05:54.858 "nop_timeout": 60, 00:05:54.858 "nop_in_interval": 30, 00:05:54.858 "disable_chap": false, 00:05:54.858 "require_chap": false, 00:05:54.858 "mutual_chap": false, 00:05:54.858 "chap_group": 0, 00:05:54.858 "max_large_datain_per_connection": 64, 00:05:54.858 "max_r2t_per_connection": 4, 00:05:54.858 "pdu_pool_size": 36864, 00:05:54.858 "immediate_data_pool_size": 16384, 00:05:54.858 "data_out_pool_size": 2048 00:05:54.858 } 00:05:54.858 } 00:05:54.858 ] 00:05:54.858 } 00:05:54.858 ] 00:05:54.858 } 00:05:54.858 17:52:43 -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:54.858 17:52:43 -- rpc/skip_rpc.sh@40 -- # killprocess 3196662 00:05:54.858 17:52:43 -- common/autotest_common.sh@936 -- # '[' -z 3196662 ']' 00:05:54.858 17:52:43 -- common/autotest_common.sh@940 -- # kill -0 3196662 00:05:54.858 17:52:43 -- common/autotest_common.sh@941 -- # uname 00:05:54.858 17:52:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:54.858 17:52:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3196662 00:05:54.858 17:52:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:54.858 17:52:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:54.858 17:52:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3196662' 00:05:54.858 killing process with pid 3196662 00:05:54.858 17:52:43 -- common/autotest_common.sh@955 -- # kill 3196662 00:05:54.858 17:52:43 -- common/autotest_common.sh@960 -- # wait 3196662 00:05:55.116 17:52:44 -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3196802 00:05:55.116 17:52:44 -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:55.116 17:52:44 -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:00.385 17:52:49 -- rpc/skip_rpc.sh@50 -- # killprocess 3196802 00:06:00.385 17:52:49 -- common/autotest_common.sh@936 -- # '[' -z 3196802 ']' 00:06:00.385 17:52:49 -- common/autotest_common.sh@940 -- # kill -0 3196802 00:06:00.385 17:52:49 -- common/autotest_common.sh@941 -- # uname 00:06:00.385 17:52:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:00.385 17:52:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3196802 00:06:00.385 17:52:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:00.385 17:52:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:00.385 17:52:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3196802' 00:06:00.385 killing process with pid 3196802 00:06:00.385 17:52:49 -- common/autotest_common.sh@955 -- # kill 3196802 00:06:00.385 17:52:49 -- common/autotest_common.sh@960 -- # wait 3196802 00:06:00.643 17:52:49 -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:00.643 17:52:49 -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:00.643 
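The point of relaunching with --no-rpc-server --json above: if the saved JSON is complete, the target must recreate the TCP transport purely from the file, which is what the grep just below verifies against the captured log. In essence:

  # pass condition, assuming stdout was captured to the LOG_PATH set earlier
  grep -q 'TCP Transport Init' test/rpc/log.txt && echo 'config restored from JSON'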
00:06:00.643 real 0m6.610s 00:06:00.643 user 0m6.387s 00:06:00.643 sys 0m0.784s 00:06:00.643 17:52:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:00.643 17:52:49 -- common/autotest_common.sh@10 -- # set +x 00:06:00.643 ************************************ 00:06:00.643 END TEST skip_rpc_with_json 00:06:00.643 ************************************ 00:06:00.643 17:52:49 -- rpc/skip_rpc.sh@62 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:00.643 17:52:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:00.643 17:52:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:00.643 17:52:49 -- common/autotest_common.sh@10 -- # set +x 00:06:00.901 ************************************ 00:06:00.901 START TEST skip_rpc_with_delay 00:06:00.901 ************************************ 00:06:00.901 17:52:49 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_delay 00:06:00.901 17:52:49 -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:00.901 17:52:49 -- common/autotest_common.sh@638 -- # local es=0 00:06:00.901 17:52:49 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:00.901 17:52:49 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:00.901 17:52:49 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:00.901 17:52:49 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:00.901 17:52:49 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:00.901 17:52:49 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:00.901 17:52:49 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:00.901 17:52:49 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:00.901 17:52:49 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:00.901 17:52:49 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:00.901 [2024-04-15 17:52:49.689539] app.c: 751:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:06:00.901 [2024-04-15 17:52:49.689681] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:06:00.901 17:52:49 -- common/autotest_common.sh@641 -- # es=1 00:06:00.901 17:52:49 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:00.901 17:52:49 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:00.901 17:52:49 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:00.901 00:06:00.901 real 0m0.085s 00:06:00.901 user 0m0.051s 00:06:00.901 sys 0m0.033s 00:06:00.901 17:52:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:00.901 17:52:49 -- common/autotest_common.sh@10 -- # set +x 00:06:00.901 ************************************ 00:06:00.901 END TEST skip_rpc_with_delay 00:06:00.901 ************************************ 00:06:00.901 17:52:49 -- rpc/skip_rpc.sh@64 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:00.901 00:06:00.901 real 0m12.640s 00:06:00.901 user 0m11.756s 00:06:00.901 sys 0m1.434s 00:06:00.901 17:52:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:00.901 17:52:49 -- common/autotest_common.sh@10 -- # set +x 00:06:00.901 ************************************ 00:06:00.901 END TEST skip_rpc 00:06:00.901 ************************************ 00:06:00.901 17:52:49 -- spdk/autotest.sh@167 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:00.901 17:52:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:00.901 17:52:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:00.901 17:52:49 -- common/autotest_common.sh@10 -- # set +x 00:06:01.161 ************************************ 00:06:01.161 START TEST rpc_client 00:06:01.161 ************************************ 00:06:01.161 17:52:49 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:01.161 * Looking for test storage... 
00:06:01.161 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:06:01.161 17:52:49 -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:06:01.161 OK 00:06:01.161 17:52:49 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:01.161 00:06:01.161 real 0m0.083s 00:06:01.161 user 0m0.032s 00:06:01.161 sys 0m0.057s 00:06:01.161 17:52:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:01.161 17:52:49 -- common/autotest_common.sh@10 -- # set +x 00:06:01.161 ************************************ 00:06:01.161 END TEST rpc_client 00:06:01.161 ************************************ 00:06:01.161 17:52:50 -- spdk/autotest.sh@168 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:01.161 17:52:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:01.161 17:52:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:01.161 17:52:50 -- common/autotest_common.sh@10 -- # set +x 00:06:01.420 ************************************ 00:06:01.420 START TEST json_config 00:06:01.420 ************************************ 00:06:01.420 17:52:50 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:01.420 17:52:50 -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:01.420 17:52:50 -- nvmf/common.sh@7 -- # uname -s 00:06:01.420 17:52:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:01.420 17:52:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:01.420 17:52:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:01.420 17:52:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:01.420 17:52:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:01.420 17:52:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:01.420 17:52:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:01.420 17:52:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:01.420 17:52:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:01.420 17:52:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:01.420 17:52:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:06:01.420 17:52:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:06:01.420 17:52:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:01.420 17:52:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:01.420 17:52:50 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:01.420 17:52:50 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:01.420 17:52:50 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:01.420 17:52:50 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:01.420 17:52:50 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:01.420 17:52:50 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:01.420 17:52:50 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.420 17:52:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.420 17:52:50 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.420 17:52:50 -- paths/export.sh@5 -- # export PATH 00:06:01.420 17:52:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.420 17:52:50 -- nvmf/common.sh@47 -- # : 0 00:06:01.420 17:52:50 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:01.420 17:52:50 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:01.420 17:52:50 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:01.420 17:52:50 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:01.420 17:52:50 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:01.420 17:52:50 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:01.420 17:52:50 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:01.420 17:52:50 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:01.420 17:52:50 -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:01.420 17:52:50 -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:01.420 17:52:50 -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:01.420 17:52:50 -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:01.420 17:52:50 -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:01.420 17:52:50 -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:01.420 17:52:50 -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:01.420 17:52:50 -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:01.420 17:52:50 -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:01.420 17:52:50 -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:01.420 17:52:50 -- 
json_config/json_config.sh@33 -- # declare -A app_params 00:06:01.420 17:52:50 -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:06:01.421 17:52:50 -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:01.421 17:52:50 -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:01.421 17:52:50 -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:01.421 17:52:50 -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:06:01.421 INFO: JSON configuration test init 00:06:01.421 17:52:50 -- json_config/json_config.sh@357 -- # json_config_test_init 00:06:01.421 17:52:50 -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:06:01.421 17:52:50 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:01.421 17:52:50 -- common/autotest_common.sh@10 -- # set +x 00:06:01.421 17:52:50 -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:06:01.421 17:52:50 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:01.421 17:52:50 -- common/autotest_common.sh@10 -- # set +x 00:06:01.421 17:52:50 -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:06:01.421 17:52:50 -- json_config/common.sh@9 -- # local app=target 00:06:01.421 17:52:50 -- json_config/common.sh@10 -- # shift 00:06:01.421 17:52:50 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:01.421 17:52:50 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:01.421 17:52:50 -- json_config/common.sh@15 -- # local app_extra_params= 00:06:01.421 17:52:50 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:01.421 17:52:50 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:01.421 17:52:50 -- json_config/common.sh@22 -- # app_pid["$app"]=3197628 00:06:01.421 17:52:50 -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:01.421 17:52:50 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:01.421 Waiting for target to run... 00:06:01.421 17:52:50 -- json_config/common.sh@25 -- # waitforlisten 3197628 /var/tmp/spdk_tgt.sock 00:06:01.421 17:52:50 -- common/autotest_common.sh@817 -- # '[' -z 3197628 ']' 00:06:01.421 17:52:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:01.421 17:52:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:01.421 17:52:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:01.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:01.421 17:52:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:01.421 17:52:50 -- common/autotest_common.sh@10 -- # set +x 00:06:01.421 [2024-04-15 17:52:50.262182] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
00:06:01.421 [2024-04-15 17:52:50.262289] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3197628 ] 00:06:01.421 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.034 [2024-04-15 17:52:50.839242] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.034 [2024-04-15 17:52:50.919308] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.601 17:52:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:02.601 17:52:51 -- common/autotest_common.sh@850 -- # return 0 00:06:02.601 17:52:51 -- json_config/common.sh@26 -- # echo '' 00:06:02.601 00:06:02.601 17:52:51 -- json_config/json_config.sh@269 -- # create_accel_config 00:06:02.601 17:52:51 -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:06:02.601 17:52:51 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:02.601 17:52:51 -- common/autotest_common.sh@10 -- # set +x 00:06:02.601 17:52:51 -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:06:02.601 17:52:51 -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:06:02.601 17:52:51 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:02.601 17:52:51 -- common/autotest_common.sh@10 -- # set +x 00:06:02.601 17:52:51 -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:02.601 17:52:51 -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:06:02.601 17:52:51 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:05.893 17:52:54 -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:06:05.893 17:52:54 -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:05.893 17:52:54 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:05.893 17:52:54 -- common/autotest_common.sh@10 -- # set +x 00:06:05.893 17:52:54 -- json_config/json_config.sh@45 -- # local ret=0 00:06:05.893 17:52:54 -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:05.893 17:52:54 -- json_config/json_config.sh@46 -- # local enabled_types 00:06:05.893 17:52:54 -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:06:05.893 17:52:54 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:05.893 17:52:54 -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:06:05.893 17:52:54 -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:06:05.893 17:52:54 -- json_config/json_config.sh@48 -- # local get_types 00:06:05.893 17:52:54 -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:06:05.893 17:52:54 -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:06:05.893 17:52:54 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:05.893 17:52:54 -- common/autotest_common.sh@10 -- # set +x 00:06:05.893 17:52:54 -- json_config/json_config.sh@55 -- # return 0 00:06:05.893 17:52:54 -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:06:05.893 17:52:54 -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:06:05.893 17:52:54 -- 
json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:06:05.893 17:52:54 -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:06:05.893 17:52:54 -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:06:05.893 17:52:54 -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:06:05.893 17:52:54 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:05.893 17:52:54 -- common/autotest_common.sh@10 -- # set +x 00:06:05.893 17:52:54 -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:05.893 17:52:54 -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:06:05.893 17:52:54 -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:06:05.893 17:52:54 -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:05.893 17:52:54 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:06.463 MallocForNvmf0 00:06:06.463 17:52:55 -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:06.463 17:52:55 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:07.031 MallocForNvmf1 00:06:07.031 17:52:55 -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:07.031 17:52:55 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:07.601 [2024-04-15 17:52:56.280823] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:07.601 17:52:56 -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:07.601 17:52:56 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:08.170 17:52:56 -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:08.170 17:52:56 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:08.430 17:52:57 -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:08.430 17:52:57 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:08.999 17:52:57 -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:08.999 17:52:57 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:09.569 [2024-04-15 17:52:58.359353] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:09.569 17:52:58 -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:06:09.569 17:52:58 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:09.569 
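Note: the NVMf subsystem configuration traced above is built entirely over the spdk_tgt RPC socket. Condensed into a standalone sketch using the same rpc.py calls and arguments as the trace (bdev_malloc_create takes total size in MB and block size in bytes as positionals):

    rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    $rpc bdev_malloc_create 8 512 --name MallocForNvmf0        # 8 MB bdev, 512 B blocks
    $rpc bdev_malloc_create 4 1024 --name MallocForNvmf1       # 4 MB bdev, 1024 B blocks
    $rpc nvmf_create_transport -t tcp -u 8192 -c 0             # TCP transport, flags as traced
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420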
17:52:58 -- common/autotest_common.sh@10 -- # set +x 00:06:09.569 17:52:58 -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:06:09.569 17:52:58 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:09.569 17:52:58 -- common/autotest_common.sh@10 -- # set +x 00:06:09.569 17:52:58 -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:06:09.569 17:52:58 -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:09.569 17:52:58 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:10.138 MallocBdevForConfigChangeCheck 00:06:10.138 17:52:59 -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:06:10.138 17:52:59 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:10.138 17:52:59 -- common/autotest_common.sh@10 -- # set +x 00:06:10.138 17:52:59 -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:06:10.138 17:52:59 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:10.705 17:52:59 -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:06:10.705 INFO: shutting down applications... 00:06:10.705 17:52:59 -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:06:10.705 17:52:59 -- json_config/json_config.sh@368 -- # json_config_clear target 00:06:10.705 17:52:59 -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:06:10.705 17:52:59 -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:12.611 Calling clear_iscsi_subsystem 00:06:12.611 Calling clear_nvmf_subsystem 00:06:12.611 Calling clear_nbd_subsystem 00:06:12.611 Calling clear_ublk_subsystem 00:06:12.611 Calling clear_vhost_blk_subsystem 00:06:12.611 Calling clear_vhost_scsi_subsystem 00:06:12.611 Calling clear_bdev_subsystem 00:06:12.611 17:53:01 -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:06:12.611 17:53:01 -- json_config/json_config.sh@343 -- # count=100 00:06:12.611 17:53:01 -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:06:12.611 17:53:01 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:12.611 17:53:01 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:12.611 17:53:01 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:12.872 17:53:01 -- json_config/json_config.sh@345 -- # break 00:06:12.872 17:53:01 -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:06:12.872 17:53:01 -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:06:12.872 17:53:01 -- json_config/common.sh@31 -- # local app=target 00:06:12.872 17:53:01 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:12.872 17:53:01 -- json_config/common.sh@35 -- # [[ -n 3197628 ]] 00:06:12.872 17:53:01 -- json_config/common.sh@38 -- # kill -SIGINT 3197628 00:06:12.872 17:53:01 -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:12.872 17:53:01 -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:06:12.872 17:53:01 -- json_config/common.sh@41 -- # kill -0 3197628 00:06:12.872 17:53:01 -- json_config/common.sh@45 -- # sleep 0.5 00:06:13.442 17:53:02 -- json_config/common.sh@40 -- # (( i++ )) 00:06:13.442 17:53:02 -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:13.442 17:53:02 -- json_config/common.sh@41 -- # kill -0 3197628 00:06:13.442 17:53:02 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:13.442 17:53:02 -- json_config/common.sh@43 -- # break 00:06:13.442 17:53:02 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:13.442 17:53:02 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:13.442 SPDK target shutdown done 00:06:13.442 17:53:02 -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:06:13.442 INFO: relaunching applications... 00:06:13.442 17:53:02 -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:13.442 17:53:02 -- json_config/common.sh@9 -- # local app=target 00:06:13.442 17:53:02 -- json_config/common.sh@10 -- # shift 00:06:13.442 17:53:02 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:13.442 17:53:02 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:13.442 17:53:02 -- json_config/common.sh@15 -- # local app_extra_params= 00:06:13.442 17:53:02 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:13.442 17:53:02 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:13.442 17:53:02 -- json_config/common.sh@22 -- # app_pid["$app"]=3199208 00:06:13.442 17:53:02 -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:13.442 17:53:02 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:13.442 Waiting for target to run... 00:06:13.442 17:53:02 -- json_config/common.sh@25 -- # waitforlisten 3199208 /var/tmp/spdk_tgt.sock 00:06:13.442 17:53:02 -- common/autotest_common.sh@817 -- # '[' -z 3199208 ']' 00:06:13.442 17:53:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:13.442 17:53:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:13.442 17:53:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:13.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:13.442 17:53:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:13.442 17:53:02 -- common/autotest_common.sh@10 -- # set +x 00:06:13.442 [2024-04-15 17:53:02.277637] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
00:06:13.442 [2024-04-15 17:53:02.277758] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3199208 ] 00:06:13.442 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.012 [2024-04-15 17:53:02.912417] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.271 [2024-04-15 17:53:02.991115] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.565 [2024-04-15 17:53:06.019931] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:17.565 [2024-04-15 17:53:06.052471] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:17.565 17:53:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:17.565 17:53:06 -- common/autotest_common.sh@850 -- # return 0 00:06:17.565 17:53:06 -- json_config/common.sh@26 -- # echo '' 00:06:17.565 00:06:17.566 17:53:06 -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:06:17.566 17:53:06 -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:17.566 INFO: Checking if target configuration is the same... 00:06:17.566 17:53:06 -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:17.566 17:53:06 -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:06:17.566 17:53:06 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:17.566 + '[' 2 -ne 2 ']' 00:06:17.566 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:17.566 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:17.566 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:17.566 +++ basename /dev/fd/62 00:06:17.566 ++ mktemp /tmp/62.XXX 00:06:17.566 + tmp_file_1=/tmp/62.7eY 00:06:17.566 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:17.566 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:17.566 + tmp_file_2=/tmp/spdk_tgt_config.json.Jiw 00:06:17.566 + ret=0 00:06:17.566 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:17.824 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:17.824 + diff -u /tmp/62.7eY /tmp/spdk_tgt_config.json.Jiw 00:06:17.824 + echo 'INFO: JSON config files are the same' 00:06:17.824 INFO: JSON config files are the same 00:06:17.824 + rm /tmp/62.7eY /tmp/spdk_tgt_config.json.Jiw 00:06:17.824 + exit 0 00:06:17.825 17:53:06 -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:06:17.825 17:53:06 -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:17.825 INFO: changing configuration and checking if this can be detected... 
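Note on the comparison just traced: json_diff.sh sorts both configurations with config_filter.py before diffing, so JSON key order cannot produce a false mismatch. A condensed sketch of that flow, assuming the filter reads stdin and writes stdout as json_diff.sh wires it:

    rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py
    $rpc save_config | $filter -method sort > /tmp/live.json        # running target
    $filter -method sort < spdk_tgt_config.json > /tmp/file.json    # config on disk
    diff -u /tmp/live.json /tmp/file.json && echo 'INFO: JSON config files are the same'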
00:06:17.825 17:53:06 -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:17.825 17:53:06 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:18.083 17:53:07 -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:18.083 17:53:07 -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:06:18.083 17:53:07 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:18.083 + '[' 2 -ne 2 ']' 00:06:18.083 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:18.083 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:18.083 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:18.083 +++ basename /dev/fd/62 00:06:18.083 ++ mktemp /tmp/62.XXX 00:06:18.083 + tmp_file_1=/tmp/62.7BP 00:06:18.083 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:18.083 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:18.083 + tmp_file_2=/tmp/spdk_tgt_config.json.6eq 00:06:18.083 + ret=0 00:06:18.083 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:18.653 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:18.653 + diff -u /tmp/62.7BP /tmp/spdk_tgt_config.json.6eq 00:06:18.653 + ret=1 00:06:18.653 + echo '=== Start of file: /tmp/62.7BP ===' 00:06:18.653 + cat /tmp/62.7BP 00:06:18.653 + echo '=== End of file: /tmp/62.7BP ===' 00:06:18.653 + echo '' 00:06:18.653 + echo '=== Start of file: /tmp/spdk_tgt_config.json.6eq ===' 00:06:18.653 + cat /tmp/spdk_tgt_config.json.6eq 00:06:18.653 + echo '=== End of file: /tmp/spdk_tgt_config.json.6eq ===' 00:06:18.653 + echo '' 00:06:18.653 + rm /tmp/62.7BP /tmp/spdk_tgt_config.json.6eq 00:06:18.653 + exit 1 00:06:18.653 17:53:07 -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:06:18.653 INFO: configuration change detected. 
00:06:18.653 17:53:07 -- json_config/json_config.sh@394 -- # json_config_test_fini 00:06:18.653 17:53:07 -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:06:18.653 17:53:07 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:18.653 17:53:07 -- common/autotest_common.sh@10 -- # set +x 00:06:18.653 17:53:07 -- json_config/json_config.sh@307 -- # local ret=0 00:06:18.653 17:53:07 -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:06:18.653 17:53:07 -- json_config/json_config.sh@317 -- # [[ -n 3199208 ]] 00:06:18.653 17:53:07 -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:06:18.653 17:53:07 -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:06:18.653 17:53:07 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:18.653 17:53:07 -- common/autotest_common.sh@10 -- # set +x 00:06:18.653 17:53:07 -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:06:18.653 17:53:07 -- json_config/json_config.sh@193 -- # uname -s 00:06:18.653 17:53:07 -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:06:18.653 17:53:07 -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:06:18.653 17:53:07 -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:06:18.653 17:53:07 -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:06:18.653 17:53:07 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:18.653 17:53:07 -- common/autotest_common.sh@10 -- # set +x 00:06:18.653 17:53:07 -- json_config/json_config.sh@323 -- # killprocess 3199208 00:06:18.653 17:53:07 -- common/autotest_common.sh@936 -- # '[' -z 3199208 ']' 00:06:18.653 17:53:07 -- common/autotest_common.sh@940 -- # kill -0 3199208 00:06:18.653 17:53:07 -- common/autotest_common.sh@941 -- # uname 00:06:18.653 17:53:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:18.653 17:53:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3199208 00:06:18.653 17:53:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:18.653 17:53:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:18.653 17:53:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3199208' 00:06:18.653 killing process with pid 3199208 00:06:18.653 17:53:07 -- common/autotest_common.sh@955 -- # kill 3199208 00:06:18.653 17:53:07 -- common/autotest_common.sh@960 -- # wait 3199208 00:06:20.583 17:53:09 -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:20.583 17:53:09 -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:06:20.583 17:53:09 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:20.583 17:53:09 -- common/autotest_common.sh@10 -- # set +x 00:06:20.583 17:53:09 -- json_config/json_config.sh@328 -- # return 0 00:06:20.583 17:53:09 -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:06:20.583 INFO: Success 00:06:20.583 00:06:20.583 real 0m19.109s 00:06:20.583 user 0m23.282s 00:06:20.583 sys 0m2.779s 00:06:20.583 17:53:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:20.583 17:53:09 -- common/autotest_common.sh@10 -- # set +x 00:06:20.583 ************************************ 00:06:20.583 END TEST json_config 00:06:20.583 ************************************ 00:06:20.583 17:53:09 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:20.583 17:53:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:20.583 17:53:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:20.583 17:53:09 -- common/autotest_common.sh@10 -- # set +x 00:06:20.583 ************************************ 00:06:20.583 START TEST json_config_extra_key 00:06:20.583 ************************************ 00:06:20.583 17:53:09 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:20.583 17:53:09 -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:20.583 17:53:09 -- nvmf/common.sh@7 -- # uname -s 00:06:20.583 17:53:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:20.583 17:53:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:20.583 17:53:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:20.583 17:53:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:20.583 17:53:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:20.583 17:53:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:20.583 17:53:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:20.583 17:53:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:20.583 17:53:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:20.583 17:53:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:20.583 17:53:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:06:20.583 17:53:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:06:20.583 17:53:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:20.583 17:53:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:20.584 17:53:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:20.584 17:53:09 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:20.584 17:53:09 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:20.584 17:53:09 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:20.584 17:53:09 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:20.584 17:53:09 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:20.584 17:53:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.584 17:53:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.584 17:53:09 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.584 17:53:09 -- paths/export.sh@5 -- # export PATH 00:06:20.584 17:53:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.584 17:53:09 -- nvmf/common.sh@47 -- # : 0 00:06:20.584 17:53:09 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:20.584 17:53:09 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:20.584 17:53:09 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:20.584 17:53:09 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:20.584 17:53:09 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:20.584 17:53:09 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:20.584 17:53:09 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:20.584 17:53:09 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:20.584 17:53:09 -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:20.584 17:53:09 -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:20.584 17:53:09 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:20.584 17:53:09 -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:20.584 17:53:09 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:20.584 17:53:09 -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:20.584 17:53:09 -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:20.584 17:53:09 -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:20.584 17:53:09 -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:20.584 17:53:09 -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:20.584 17:53:09 -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:20.584 INFO: launching applications... 
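Here the target boots straight from a canned JSON config rather than --wait-for-rpc. The launch traced below, reflowed onto multiple lines:

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt \
        -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json &
    app_pid=$!    # recorded by json_config/common.sh as app_pid["target"]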
00:06:20.584 17:53:09 -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:20.584 17:53:09 -- json_config/common.sh@9 -- # local app=target 00:06:20.584 17:53:09 -- json_config/common.sh@10 -- # shift 00:06:20.584 17:53:09 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:20.584 17:53:09 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:20.584 17:53:09 -- json_config/common.sh@15 -- # local app_extra_params= 00:06:20.584 17:53:09 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:20.584 17:53:09 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:20.584 17:53:09 -- json_config/common.sh@22 -- # app_pid["$app"]=3200132 00:06:20.584 17:53:09 -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:20.584 17:53:09 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:20.584 Waiting for target to run... 00:06:20.584 17:53:09 -- json_config/common.sh@25 -- # waitforlisten 3200132 /var/tmp/spdk_tgt.sock 00:06:20.584 17:53:09 -- common/autotest_common.sh@817 -- # '[' -z 3200132 ']' 00:06:20.584 17:53:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:20.584 17:53:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:20.584 17:53:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:20.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:20.584 17:53:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:20.584 17:53:09 -- common/autotest_common.sh@10 -- # set +x 00:06:20.584 [2024-04-15 17:53:09.479881] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:06:20.584 [2024-04-15 17:53:09.479978] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3200132 ] 00:06:20.584 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.152 [2024-04-15 17:53:09.853406] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.152 [2024-04-15 17:53:09.917325] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.721 17:53:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:21.721 17:53:10 -- common/autotest_common.sh@850 -- # return 0 00:06:21.721 17:53:10 -- json_config/common.sh@26 -- # echo '' 00:06:21.721 00:06:21.721 17:53:10 -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:21.721 INFO: shutting down applications... 
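The shutdown traced next is json_config_test_shutdown_app, the same SIGINT-then-poll loop used for pids 3197628 and 3199208 above: up to 30 probes at 0.5 s intervals. A minimal sketch of the traced path:

    kill -SIGINT "$app_pid"
    for ((i = 0; i < 30; i++)); do
        if ! kill -0 "$app_pid" 2>/dev/null; then
            echo 'SPDK target shutdown done'    # process exited on SIGINT
            break
        fi
        sleep 0.5
    done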
00:06:21.721 17:53:10 -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:21.721 17:53:10 -- json_config/common.sh@31 -- # local app=target 00:06:21.721 17:53:10 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:21.721 17:53:10 -- json_config/common.sh@35 -- # [[ -n 3200132 ]] 00:06:21.721 17:53:10 -- json_config/common.sh@38 -- # kill -SIGINT 3200132 00:06:21.721 17:53:10 -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:21.721 17:53:10 -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:21.721 17:53:10 -- json_config/common.sh@41 -- # kill -0 3200132 00:06:21.721 17:53:10 -- json_config/common.sh@45 -- # sleep 0.5 00:06:22.288 17:53:11 -- json_config/common.sh@40 -- # (( i++ )) 00:06:22.288 17:53:11 -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:22.288 17:53:11 -- json_config/common.sh@41 -- # kill -0 3200132 00:06:22.288 17:53:11 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:22.288 17:53:11 -- json_config/common.sh@43 -- # break 00:06:22.288 17:53:11 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:22.288 17:53:11 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:22.288 SPDK target shutdown done 00:06:22.288 17:53:11 -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:22.288 Success 00:06:22.288 00:06:22.288 real 0m1.668s 00:06:22.288 user 0m1.719s 00:06:22.288 sys 0m0.466s 00:06:22.288 17:53:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:22.288 17:53:11 -- common/autotest_common.sh@10 -- # set +x 00:06:22.288 ************************************ 00:06:22.288 END TEST json_config_extra_key 00:06:22.288 ************************************ 00:06:22.288 17:53:11 -- spdk/autotest.sh@170 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:22.288 17:53:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:22.288 17:53:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:22.288 17:53:11 -- common/autotest_common.sh@10 -- # set +x 00:06:22.288 ************************************ 00:06:22.288 START TEST alias_rpc 00:06:22.288 ************************************ 00:06:22.288 17:53:11 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:22.546 * Looking for test storage... 00:06:22.546 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:06:22.546 17:53:11 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:22.546 17:53:11 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3200453 00:06:22.546 17:53:11 -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:22.546 17:53:11 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3200453 00:06:22.546 17:53:11 -- common/autotest_common.sh@817 -- # '[' -z 3200453 ']' 00:06:22.546 17:53:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.546 17:53:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:22.546 17:53:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
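waitforlisten (autotest_common.sh, max_retries=100 in the trace) is what bounds that wait. A sketch of the pattern, assuming rpc_get_methods as the readiness probe; the real helper's probe may differ:

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1    # target died before listening
            # succeeds only once the RPC server accepts connections
            if scripts/rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &> /dev/null; then
                return 0
            fi
            sleep 0.1
        done
        return 1
    }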
00:06:22.546 17:53:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:22.546 17:53:11 -- common/autotest_common.sh@10 -- # set +x 00:06:22.546 [2024-04-15 17:53:11.303957] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:06:22.546 [2024-04-15 17:53:11.304052] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3200453 ] 00:06:22.546 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.546 [2024-04-15 17:53:11.374922] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.546 [2024-04-15 17:53:11.467191] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.806 17:53:11 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:22.806 17:53:11 -- common/autotest_common.sh@850 -- # return 0 00:06:22.806 17:53:11 -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:23.375 17:53:12 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3200453 00:06:23.375 17:53:12 -- common/autotest_common.sh@936 -- # '[' -z 3200453 ']' 00:06:23.375 17:53:12 -- common/autotest_common.sh@940 -- # kill -0 3200453 00:06:23.375 17:53:12 -- common/autotest_common.sh@941 -- # uname 00:06:23.375 17:53:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:23.375 17:53:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3200453 00:06:23.375 17:53:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:23.375 17:53:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:23.375 17:53:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3200453' 00:06:23.375 killing process with pid 3200453 00:06:23.375 17:53:12 -- common/autotest_common.sh@955 -- # kill 3200453 00:06:23.375 17:53:12 -- common/autotest_common.sh@960 -- # wait 3200453 00:06:23.634 00:06:23.634 real 0m1.316s 00:06:23.634 user 0m1.459s 00:06:23.634 sys 0m0.447s 00:06:23.634 17:53:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:23.634 17:53:12 -- common/autotest_common.sh@10 -- # set +x 00:06:23.634 ************************************ 00:06:23.634 END TEST alias_rpc 00:06:23.634 ************************************ 00:06:23.634 17:53:12 -- spdk/autotest.sh@172 -- # [[ 0 -eq 0 ]] 00:06:23.634 17:53:12 -- spdk/autotest.sh@173 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:23.634 17:53:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:23.634 17:53:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:23.634 17:53:12 -- common/autotest_common.sh@10 -- # set +x 00:06:23.894 ************************************ 00:06:23.894 START TEST spdkcli_tcp 00:06:23.894 ************************************ 00:06:23.894 17:53:12 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:23.894 * Looking for test storage... 
00:06:23.894 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:06:23.894 17:53:12 -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:06:23.894 17:53:12 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:23.894 17:53:12 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:06:23.894 17:53:12 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:23.894 17:53:12 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:23.894 17:53:12 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:23.894 17:53:12 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:23.894 17:53:12 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:23.894 17:53:12 -- common/autotest_common.sh@10 -- # set +x 00:06:23.894 17:53:12 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3200647 00:06:23.894 17:53:12 -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:23.894 17:53:12 -- spdkcli/tcp.sh@27 -- # waitforlisten 3200647 00:06:23.894 17:53:12 -- common/autotest_common.sh@817 -- # '[' -z 3200647 ']' 00:06:23.894 17:53:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.894 17:53:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:23.894 17:53:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:23.894 17:53:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:23.894 17:53:12 -- common/autotest_common.sh@10 -- # set +x 00:06:23.894 [2024-04-15 17:53:12.791668] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
00:06:23.894 [2024-04-15 17:53:12.791856] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3200647 ] 00:06:24.153 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.153 [2024-04-15 17:53:12.900755] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:24.153 [2024-04-15 17:53:12.999128] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:24.153 [2024-04-15 17:53:12.999133] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.413 17:53:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:24.413 17:53:13 -- common/autotest_common.sh@850 -- # return 0 00:06:24.413 17:53:13 -- spdkcli/tcp.sh@31 -- # socat_pid=3200781 00:06:24.413 17:53:13 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:24.413 17:53:13 -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:24.673 [ 00:06:24.673 "bdev_malloc_delete", 00:06:24.673 "bdev_malloc_create", 00:06:24.673 "bdev_null_resize", 00:06:24.673 "bdev_null_delete", 00:06:24.673 "bdev_null_create", 00:06:24.673 "bdev_nvme_cuse_unregister", 00:06:24.673 "bdev_nvme_cuse_register", 00:06:24.673 "bdev_opal_new_user", 00:06:24.673 "bdev_opal_set_lock_state", 00:06:24.673 "bdev_opal_delete", 00:06:24.673 "bdev_opal_get_info", 00:06:24.673 "bdev_opal_create", 00:06:24.673 "bdev_nvme_opal_revert", 00:06:24.673 "bdev_nvme_opal_init", 00:06:24.673 "bdev_nvme_send_cmd", 00:06:24.673 "bdev_nvme_get_path_iostat", 00:06:24.673 "bdev_nvme_get_mdns_discovery_info", 00:06:24.673 "bdev_nvme_stop_mdns_discovery", 00:06:24.673 "bdev_nvme_start_mdns_discovery", 00:06:24.673 "bdev_nvme_set_multipath_policy", 00:06:24.673 "bdev_nvme_set_preferred_path", 00:06:24.673 "bdev_nvme_get_io_paths", 00:06:24.673 "bdev_nvme_remove_error_injection", 00:06:24.673 "bdev_nvme_add_error_injection", 00:06:24.673 "bdev_nvme_get_discovery_info", 00:06:24.673 "bdev_nvme_stop_discovery", 00:06:24.673 "bdev_nvme_start_discovery", 00:06:24.673 "bdev_nvme_get_controller_health_info", 00:06:24.673 "bdev_nvme_disable_controller", 00:06:24.673 "bdev_nvme_enable_controller", 00:06:24.673 "bdev_nvme_reset_controller", 00:06:24.673 "bdev_nvme_get_transport_statistics", 00:06:24.673 "bdev_nvme_apply_firmware", 00:06:24.673 "bdev_nvme_detach_controller", 00:06:24.673 "bdev_nvme_get_controllers", 00:06:24.673 "bdev_nvme_attach_controller", 00:06:24.673 "bdev_nvme_set_hotplug", 00:06:24.673 "bdev_nvme_set_options", 00:06:24.673 "bdev_passthru_delete", 00:06:24.673 "bdev_passthru_create", 00:06:24.673 "bdev_lvol_grow_lvstore", 00:06:24.673 "bdev_lvol_get_lvols", 00:06:24.673 "bdev_lvol_get_lvstores", 00:06:24.673 "bdev_lvol_delete", 00:06:24.673 "bdev_lvol_set_read_only", 00:06:24.673 "bdev_lvol_resize", 00:06:24.673 "bdev_lvol_decouple_parent", 00:06:24.673 "bdev_lvol_inflate", 00:06:24.673 "bdev_lvol_rename", 00:06:24.673 "bdev_lvol_clone_bdev", 00:06:24.673 "bdev_lvol_clone", 00:06:24.673 "bdev_lvol_snapshot", 00:06:24.673 "bdev_lvol_create", 00:06:24.673 "bdev_lvol_delete_lvstore", 00:06:24.673 "bdev_lvol_rename_lvstore", 00:06:24.673 "bdev_lvol_create_lvstore", 00:06:24.673 "bdev_raid_set_options", 00:06:24.673 "bdev_raid_remove_base_bdev", 00:06:24.673 "bdev_raid_add_base_bdev", 00:06:24.673 "bdev_raid_delete", 00:06:24.673 "bdev_raid_create", 
00:06:24.673 "bdev_raid_get_bdevs", 00:06:24.673 "bdev_error_inject_error", 00:06:24.673 "bdev_error_delete", 00:06:24.673 "bdev_error_create", 00:06:24.673 "bdev_split_delete", 00:06:24.673 "bdev_split_create", 00:06:24.673 "bdev_delay_delete", 00:06:24.673 "bdev_delay_create", 00:06:24.673 "bdev_delay_update_latency", 00:06:24.673 "bdev_zone_block_delete", 00:06:24.673 "bdev_zone_block_create", 00:06:24.673 "blobfs_create", 00:06:24.673 "blobfs_detect", 00:06:24.673 "blobfs_set_cache_size", 00:06:24.673 "bdev_aio_delete", 00:06:24.673 "bdev_aio_rescan", 00:06:24.673 "bdev_aio_create", 00:06:24.673 "bdev_ftl_set_property", 00:06:24.673 "bdev_ftl_get_properties", 00:06:24.673 "bdev_ftl_get_stats", 00:06:24.673 "bdev_ftl_unmap", 00:06:24.673 "bdev_ftl_unload", 00:06:24.673 "bdev_ftl_delete", 00:06:24.673 "bdev_ftl_load", 00:06:24.673 "bdev_ftl_create", 00:06:24.673 "bdev_virtio_attach_controller", 00:06:24.673 "bdev_virtio_scsi_get_devices", 00:06:24.673 "bdev_virtio_detach_controller", 00:06:24.673 "bdev_virtio_blk_set_hotplug", 00:06:24.673 "bdev_iscsi_delete", 00:06:24.673 "bdev_iscsi_create", 00:06:24.673 "bdev_iscsi_set_options", 00:06:24.673 "accel_error_inject_error", 00:06:24.673 "ioat_scan_accel_module", 00:06:24.673 "dsa_scan_accel_module", 00:06:24.673 "iaa_scan_accel_module", 00:06:24.673 "vfu_virtio_create_scsi_endpoint", 00:06:24.673 "vfu_virtio_scsi_remove_target", 00:06:24.673 "vfu_virtio_scsi_add_target", 00:06:24.673 "vfu_virtio_create_blk_endpoint", 00:06:24.673 "vfu_virtio_delete_endpoint", 00:06:24.673 "keyring_file_remove_key", 00:06:24.673 "keyring_file_add_key", 00:06:24.673 "iscsi_set_options", 00:06:24.673 "iscsi_get_auth_groups", 00:06:24.673 "iscsi_auth_group_remove_secret", 00:06:24.673 "iscsi_auth_group_add_secret", 00:06:24.673 "iscsi_delete_auth_group", 00:06:24.673 "iscsi_create_auth_group", 00:06:24.673 "iscsi_set_discovery_auth", 00:06:24.673 "iscsi_get_options", 00:06:24.673 "iscsi_target_node_request_logout", 00:06:24.673 "iscsi_target_node_set_redirect", 00:06:24.673 "iscsi_target_node_set_auth", 00:06:24.673 "iscsi_target_node_add_lun", 00:06:24.673 "iscsi_get_stats", 00:06:24.673 "iscsi_get_connections", 00:06:24.673 "iscsi_portal_group_set_auth", 00:06:24.673 "iscsi_start_portal_group", 00:06:24.673 "iscsi_delete_portal_group", 00:06:24.673 "iscsi_create_portal_group", 00:06:24.673 "iscsi_get_portal_groups", 00:06:24.673 "iscsi_delete_target_node", 00:06:24.673 "iscsi_target_node_remove_pg_ig_maps", 00:06:24.673 "iscsi_target_node_add_pg_ig_maps", 00:06:24.673 "iscsi_create_target_node", 00:06:24.673 "iscsi_get_target_nodes", 00:06:24.673 "iscsi_delete_initiator_group", 00:06:24.673 "iscsi_initiator_group_remove_initiators", 00:06:24.673 "iscsi_initiator_group_add_initiators", 00:06:24.673 "iscsi_create_initiator_group", 00:06:24.673 "iscsi_get_initiator_groups", 00:06:24.673 "nvmf_set_crdt", 00:06:24.673 "nvmf_set_config", 00:06:24.674 "nvmf_set_max_subsystems", 00:06:24.674 "nvmf_subsystem_get_listeners", 00:06:24.674 "nvmf_subsystem_get_qpairs", 00:06:24.674 "nvmf_subsystem_get_controllers", 00:06:24.674 "nvmf_get_stats", 00:06:24.674 "nvmf_get_transports", 00:06:24.674 "nvmf_create_transport", 00:06:24.674 "nvmf_get_targets", 00:06:24.674 "nvmf_delete_target", 00:06:24.674 "nvmf_create_target", 00:06:24.674 "nvmf_subsystem_allow_any_host", 00:06:24.674 "nvmf_subsystem_remove_host", 00:06:24.674 "nvmf_subsystem_add_host", 00:06:24.674 "nvmf_ns_remove_host", 00:06:24.674 "nvmf_ns_add_host", 00:06:24.674 "nvmf_subsystem_remove_ns", 00:06:24.674 
"nvmf_subsystem_add_ns", 00:06:24.674 "nvmf_subsystem_listener_set_ana_state", 00:06:24.674 "nvmf_discovery_get_referrals", 00:06:24.674 "nvmf_discovery_remove_referral", 00:06:24.674 "nvmf_discovery_add_referral", 00:06:24.674 "nvmf_subsystem_remove_listener", 00:06:24.674 "nvmf_subsystem_add_listener", 00:06:24.674 "nvmf_delete_subsystem", 00:06:24.674 "nvmf_create_subsystem", 00:06:24.674 "nvmf_get_subsystems", 00:06:24.674 "env_dpdk_get_mem_stats", 00:06:24.674 "nbd_get_disks", 00:06:24.674 "nbd_stop_disk", 00:06:24.674 "nbd_start_disk", 00:06:24.674 "ublk_recover_disk", 00:06:24.674 "ublk_get_disks", 00:06:24.674 "ublk_stop_disk", 00:06:24.674 "ublk_start_disk", 00:06:24.674 "ublk_destroy_target", 00:06:24.674 "ublk_create_target", 00:06:24.674 "virtio_blk_create_transport", 00:06:24.674 "virtio_blk_get_transports", 00:06:24.674 "vhost_controller_set_coalescing", 00:06:24.674 "vhost_get_controllers", 00:06:24.674 "vhost_delete_controller", 00:06:24.674 "vhost_create_blk_controller", 00:06:24.674 "vhost_scsi_controller_remove_target", 00:06:24.674 "vhost_scsi_controller_add_target", 00:06:24.674 "vhost_start_scsi_controller", 00:06:24.674 "vhost_create_scsi_controller", 00:06:24.674 "thread_set_cpumask", 00:06:24.674 "framework_get_scheduler", 00:06:24.674 "framework_set_scheduler", 00:06:24.674 "framework_get_reactors", 00:06:24.674 "thread_get_io_channels", 00:06:24.674 "thread_get_pollers", 00:06:24.674 "thread_get_stats", 00:06:24.674 "framework_monitor_context_switch", 00:06:24.674 "spdk_kill_instance", 00:06:24.674 "log_enable_timestamps", 00:06:24.674 "log_get_flags", 00:06:24.674 "log_clear_flag", 00:06:24.674 "log_set_flag", 00:06:24.674 "log_get_level", 00:06:24.674 "log_set_level", 00:06:24.674 "log_get_print_level", 00:06:24.674 "log_set_print_level", 00:06:24.674 "framework_enable_cpumask_locks", 00:06:24.674 "framework_disable_cpumask_locks", 00:06:24.674 "framework_wait_init", 00:06:24.674 "framework_start_init", 00:06:24.674 "scsi_get_devices", 00:06:24.674 "bdev_get_histogram", 00:06:24.674 "bdev_enable_histogram", 00:06:24.674 "bdev_set_qos_limit", 00:06:24.674 "bdev_set_qd_sampling_period", 00:06:24.674 "bdev_get_bdevs", 00:06:24.674 "bdev_reset_iostat", 00:06:24.674 "bdev_get_iostat", 00:06:24.674 "bdev_examine", 00:06:24.674 "bdev_wait_for_examine", 00:06:24.674 "bdev_set_options", 00:06:24.674 "notify_get_notifications", 00:06:24.674 "notify_get_types", 00:06:24.674 "accel_get_stats", 00:06:24.674 "accel_set_options", 00:06:24.674 "accel_set_driver", 00:06:24.674 "accel_crypto_key_destroy", 00:06:24.674 "accel_crypto_keys_get", 00:06:24.674 "accel_crypto_key_create", 00:06:24.674 "accel_assign_opc", 00:06:24.674 "accel_get_module_info", 00:06:24.674 "accel_get_opc_assignments", 00:06:24.674 "vmd_rescan", 00:06:24.674 "vmd_remove_device", 00:06:24.674 "vmd_enable", 00:06:24.674 "sock_set_default_impl", 00:06:24.674 "sock_impl_set_options", 00:06:24.674 "sock_impl_get_options", 00:06:24.674 "iobuf_get_stats", 00:06:24.674 "iobuf_set_options", 00:06:24.674 "keyring_get_keys", 00:06:24.674 "framework_get_pci_devices", 00:06:24.674 "framework_get_config", 00:06:24.674 "framework_get_subsystems", 00:06:24.674 "vfu_tgt_set_base_path", 00:06:24.674 "trace_get_info", 00:06:24.674 "trace_get_tpoint_group_mask", 00:06:24.674 "trace_disable_tpoint_group", 00:06:24.674 "trace_enable_tpoint_group", 00:06:24.674 "trace_clear_tpoint_mask", 00:06:24.674 "trace_set_tpoint_mask", 00:06:24.674 "spdk_get_version", 00:06:24.674 "rpc_get_methods" 00:06:24.674 ] 00:06:24.674 17:53:13 -- 
spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:24.674 17:53:13 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:24.674 17:53:13 -- common/autotest_common.sh@10 -- # set +x 00:06:24.674 17:53:13 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:24.674 17:53:13 -- spdkcli/tcp.sh@38 -- # killprocess 3200647 00:06:24.674 17:53:13 -- common/autotest_common.sh@936 -- # '[' -z 3200647 ']' 00:06:24.674 17:53:13 -- common/autotest_common.sh@940 -- # kill -0 3200647 00:06:24.674 17:53:13 -- common/autotest_common.sh@941 -- # uname 00:06:24.674 17:53:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:24.674 17:53:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3200647 00:06:24.674 17:53:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:24.674 17:53:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:24.674 17:53:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3200647' 00:06:24.674 killing process with pid 3200647 00:06:24.674 17:53:13 -- common/autotest_common.sh@955 -- # kill 3200647 00:06:24.674 17:53:13 -- common/autotest_common.sh@960 -- # wait 3200647 00:06:25.243 00:06:25.243 real 0m1.427s 00:06:25.243 user 0m2.622s 00:06:25.243 sys 0m0.550s 00:06:25.243 17:53:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:25.243 17:53:14 -- common/autotest_common.sh@10 -- # set +x 00:06:25.243 ************************************ 00:06:25.243 END TEST spdkcli_tcp 00:06:25.243 ************************************ 00:06:25.243 17:53:14 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:25.243 17:53:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:25.243 17:53:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:25.243 17:53:14 -- common/autotest_common.sh@10 -- # set +x 00:06:25.501 ************************************ 00:06:25.501 START TEST dpdk_mem_utility 00:06:25.501 ************************************ 00:06:25.501 17:53:14 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:25.501 * Looking for test storage... 00:06:25.501 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:06:25.501 17:53:14 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:25.501 17:53:14 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3200981 00:06:25.501 17:53:14 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:25.501 17:53:14 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3200981 00:06:25.501 17:53:14 -- common/autotest_common.sh@817 -- # '[' -z 3200981 ']' 00:06:25.501 17:53:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:25.501 17:53:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:25.501 17:53:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:25.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
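For the spdkcli_tcp run that just ended, the TCP endpoint was not native: socat (pid 3200781 above) bridged port 9998 to the target's UNIX socket, and rpc.py talked TCP through it. Condensed from the trace:

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 &
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &    # the socat_pid in the trace
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods       # returns the method list above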
00:06:25.501 17:53:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:25.501 17:53:14 -- common/autotest_common.sh@10 -- # set +x 00:06:25.501 [2024-04-15 17:53:14.352979] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:06:25.501 [2024-04-15 17:53:14.353081] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3200981 ] 00:06:25.501 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.501 [2024-04-15 17:53:14.420907] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.760 [2024-04-15 17:53:14.513039] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.020 17:53:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:26.020 17:53:14 -- common/autotest_common.sh@850 -- # return 0 00:06:26.020 17:53:14 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:26.020 17:53:14 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:26.020 17:53:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:26.020 17:53:14 -- common/autotest_common.sh@10 -- # set +x 00:06:26.020 { 00:06:26.020 "filename": "/tmp/spdk_mem_dump.txt" 00:06:26.020 } 00:06:26.020 17:53:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:26.020 17:53:14 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:26.020 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:26.020 1 heaps totaling size 814.000000 MiB 00:06:26.020 size: 814.000000 MiB heap id: 0 00:06:26.020 end heaps---------- 00:06:26.020 8 mempools totaling size 598.116089 MiB 00:06:26.020 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:26.020 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:26.020 size: 84.521057 MiB name: bdev_io_3200981 00:06:26.020 size: 51.011292 MiB name: evtpool_3200981 00:06:26.021 size: 50.003479 MiB name: msgpool_3200981 00:06:26.021 size: 21.763794 MiB name: PDU_Pool 00:06:26.021 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:26.021 size: 0.026123 MiB name: Session_Pool 00:06:26.021 end mempools------- 00:06:26.021 6 memzones totaling size 4.142822 MiB 00:06:26.021 size: 1.000366 MiB name: RG_ring_0_3200981 00:06:26.021 size: 1.000366 MiB name: RG_ring_1_3200981 00:06:26.021 size: 1.000366 MiB name: RG_ring_4_3200981 00:06:26.021 size: 1.000366 MiB name: RG_ring_5_3200981 00:06:26.021 size: 0.125366 MiB name: RG_ring_2_3200981 00:06:26.021 size: 0.015991 MiB name: RG_ring_3_3200981 00:06:26.021 end memzones------- 00:06:26.021 17:53:14 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:26.021 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:06:26.021 list of free elements. 
size: 12.519348 MiB 00:06:26.021 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:26.021 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:26.021 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:26.021 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:26.021 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:26.021 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:26.021 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:26.021 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:26.021 element at address: 0x200000200000 with size: 0.841614 MiB 00:06:26.021 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:06:26.021 element at address: 0x20000b200000 with size: 0.490723 MiB 00:06:26.021 element at address: 0x200000800000 with size: 0.487793 MiB 00:06:26.021 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:26.021 element at address: 0x200027e00000 with size: 0.410034 MiB 00:06:26.021 element at address: 0x200003a00000 with size: 0.355530 MiB 00:06:26.021 list of standard malloc elements. size: 199.218079 MiB 00:06:26.021 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:26.021 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:26.021 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:26.021 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:26.021 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:26.021 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:26.021 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:26.021 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:26.021 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:26.021 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:06:26.021 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:06:26.021 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:06:26.021 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:26.021 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:26.021 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:26.021 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:26.021 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:26.021 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:26.021 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:26.021 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:26.021 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:26.021 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:26.021 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:26.021 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:26.021 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:26.021 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:26.021 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:26.021 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:26.021 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:26.021 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:26.021 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:26.021 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:26.021 element at address: 0x2000192efd00 with size: 0.000183 MiB 
00:06:26.021 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:26.021 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:26.021 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:26.021 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:06:26.021 element at address: 0x200027e69040 with size: 0.000183 MiB 00:06:26.021 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:06:26.021 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:26.021 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:26.021 list of memzone associated elements. size: 602.262573 MiB 00:06:26.021 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:26.021 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:26.021 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:26.021 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:26.021 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:26.021 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_3200981_0 00:06:26.021 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:26.021 associated memzone info: size: 48.002930 MiB name: MP_evtpool_3200981_0 00:06:26.021 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:26.021 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3200981_0 00:06:26.021 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:26.021 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:26.021 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:26.021 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:26.021 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:26.021 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_3200981 00:06:26.021 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:26.021 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3200981 00:06:26.021 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:26.021 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3200981 00:06:26.021 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:26.021 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:26.021 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:26.021 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:26.021 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:26.021 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:26.021 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:26.021 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:26.021 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:26.021 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3200981 00:06:26.021 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:26.021 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3200981 00:06:26.021 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:26.021 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3200981 00:06:26.021 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:26.021 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3200981 00:06:26.021 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:26.021 associated 
memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3200981 00:06:26.021 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:26.021 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:26.021 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:26.021 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:26.021 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:26.021 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:26.021 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:26.021 associated memzone info: size: 0.125366 MiB name: RG_ring_2_3200981 00:06:26.021 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:26.021 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:26.021 element at address: 0x200027e69100 with size: 0.023743 MiB 00:06:26.021 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:26.021 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:06:26.021 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3200981 00:06:26.021 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:06:26.021 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:26.022 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:06:26.022 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3200981 00:06:26.022 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:26.022 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3200981 00:06:26.022 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:06:26.022 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:26.022 17:53:14 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:26.022 17:53:14 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3200981 00:06:26.022 17:53:14 -- common/autotest_common.sh@936 -- # '[' -z 3200981 ']' 00:06:26.022 17:53:14 -- common/autotest_common.sh@940 -- # kill -0 3200981 00:06:26.022 17:53:14 -- common/autotest_common.sh@941 -- # uname 00:06:26.022 17:53:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:26.022 17:53:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3200981 00:06:26.022 17:53:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:26.022 17:53:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:26.022 17:53:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3200981' 00:06:26.022 killing process with pid 3200981 00:06:26.022 17:53:14 -- common/autotest_common.sh@955 -- # kill 3200981 00:06:26.022 17:53:14 -- common/autotest_common.sh@960 -- # wait 3200981 00:06:26.591 00:06:26.591 real 0m1.115s 00:06:26.591 user 0m1.112s 00:06:26.591 sys 0m0.439s 00:06:26.591 17:53:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:26.591 17:53:15 -- common/autotest_common.sh@10 -- # set +x 00:06:26.591 ************************************ 00:06:26.591 END TEST dpdk_mem_utility 00:06:26.591 ************************************ 00:06:26.591 17:53:15 -- spdk/autotest.sh@177 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:26.591 17:53:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:26.591 17:53:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:26.591 17:53:15 -- common/autotest_common.sh@10 -- # set +x 
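The dpdk_mem_utility run that just finished is a two-step dump-and-parse flow: env_dpdk_get_mem_stats makes the target write /tmp/spdk_mem_dump.txt, and scripts/dpdk_mem_info.py parses that dump, plain for the heap/mempool/memzone summary and with -m 0 for the per-element view of heap 0 seen above. A condensed sketch against a running target, using the invocations from this trace:

    ./scripts/rpc.py env_dpdk_get_mem_stats    # returns {"filename": "/tmp/spdk_mem_dump.txt"}
    ./scripts/dpdk_mem_info.py                 # heap, mempool, and memzone totals
    ./scripts/dpdk_mem_info.py -m 0            # free/malloc element breakdown of heap id 0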
00:06:26.591 ************************************ 00:06:26.591 START TEST event 00:06:26.591 ************************************ 00:06:26.591 17:53:15 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:26.850 * Looking for test storage... 00:06:26.850 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:26.850 17:53:15 -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:26.850 17:53:15 -- bdev/nbd_common.sh@6 -- # set -e 00:06:26.850 17:53:15 -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:26.850 17:53:15 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:26.850 17:53:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:26.850 17:53:15 -- common/autotest_common.sh@10 -- # set +x 00:06:26.850 ************************************ 00:06:26.850 START TEST event_perf 00:06:26.850 ************************************ 00:06:26.850 17:53:15 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:26.850 Running I/O for 1 seconds...[2024-04-15 17:53:15.710520] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:06:26.850 [2024-04-15 17:53:15.710588] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3201189 ] 00:06:26.850 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.109 [2024-04-15 17:53:15.805012] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:27.109 [2024-04-15 17:53:15.898524] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:27.109 [2024-04-15 17:53:15.898578] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:27.109 [2024-04-15 17:53:15.898629] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:27.109 [2024-04-15 17:53:15.898632] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.109 [2024-04-15 17:53:15.898850] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:06:28.045 Running I/O for 1 seconds... 00:06:28.045 lcore 0: 201916 00:06:28.045 lcore 1: 201916 00:06:28.045 lcore 2: 201915 00:06:28.045 lcore 3: 201915 00:06:28.045 done. 
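event_perf above was launched with -m 0xF -t 1: four reactors, one second of event traffic, and a per-lcore count of events processed. The four totals landing within one event of each other (~201,916) is the expected sign of even distribution across reactors. The measurement reduces to one command, with cpumask and duration as the only knobs:

    ./test/event/event_perf/event_perf -m 0xF -t 1    # 4 cores (mask 0xF), 1 second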
00:06:28.045 00:06:28.045 real 0m1.293s 00:06:28.045 user 0m4.179s 00:06:28.045 sys 0m0.108s 00:06:28.045 17:53:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:28.045 17:53:16 -- common/autotest_common.sh@10 -- # set +x 00:06:28.045 ************************************ 00:06:28.045 END TEST event_perf 00:06:28.045 ************************************ 00:06:28.304 17:53:17 -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:28.304 17:53:17 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:28.304 17:53:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:28.304 17:53:17 -- common/autotest_common.sh@10 -- # set +x 00:06:28.304 ************************************ 00:06:28.304 START TEST event_reactor 00:06:28.304 ************************************ 00:06:28.304 17:53:17 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:28.304 [2024-04-15 17:53:17.130033] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:06:28.304 [2024-04-15 17:53:17.130129] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3201357 ] 00:06:28.304 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.304 [2024-04-15 17:53:17.202652] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.564 [2024-04-15 17:53:17.297819] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.564 [2024-04-15 17:53:17.297924] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:06:29.550 test_start 00:06:29.550 oneshot 00:06:29.550 tick 100 00:06:29.550 tick 100 00:06:29.550 tick 250 00:06:29.550 tick 100 00:06:29.550 tick 100 00:06:29.550 tick 100 00:06:29.550 tick 250 00:06:29.550 tick 500 00:06:29.550 tick 100 00:06:29.550 tick 100 00:06:29.550 tick 250 00:06:29.550 tick 100 00:06:29.550 tick 100 00:06:29.550 test_end 00:06:29.550 00:06:29.550 real 0m1.265s 00:06:29.550 user 0m1.167s 00:06:29.550 sys 0m0.093s 00:06:29.550 17:53:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:29.550 17:53:18 -- common/autotest_common.sh@10 -- # set +x 00:06:29.550 ************************************ 00:06:29.550 END TEST event_reactor 00:06:29.550 ************************************ 00:06:29.550 17:53:18 -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:29.550 17:53:18 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:29.550 17:53:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:29.550 17:53:18 -- common/autotest_common.sh@10 -- # set +x 00:06:29.828 ************************************ 00:06:29.828 START TEST event_reactor_perf 00:06:29.828 ************************************ 00:06:29.828 17:53:18 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:29.828 [2024-04-15 17:53:18.553778] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
00:06:29.828 [2024-04-15 17:53:18.553928] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3201518 ] 00:06:29.828 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.828 [2024-04-15 17:53:18.657883] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.828 [2024-04-15 17:53:18.755043] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.828 [2024-04-15 17:53:18.755137] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:06:31.205 test_start 00:06:31.205 test_end 00:06:31.205 Performance: 350558 events per second 00:06:31.205 00:06:31.205 real 0m1.308s 00:06:31.205 user 0m1.183s 00:06:31.205 sys 0m0.120s 00:06:31.205 17:53:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:31.205 17:53:19 -- common/autotest_common.sh@10 -- # set +x 00:06:31.205 ************************************ 00:06:31.205 END TEST event_reactor_perf 00:06:31.205 ************************************ 00:06:31.205 17:53:19 -- event/event.sh@49 -- # uname -s 00:06:31.205 17:53:19 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:31.205 17:53:19 -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:31.205 17:53:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:31.205 17:53:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:31.205 17:53:19 -- common/autotest_common.sh@10 -- # set +x 00:06:31.205 ************************************ 00:06:31.205 START TEST event_scheduler 00:06:31.205 ************************************ 00:06:31.205 17:53:20 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:31.205 * Looking for test storage... 00:06:31.205 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:06:31.205 17:53:20 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:31.205 17:53:20 -- scheduler/scheduler.sh@35 -- # scheduler_pid=3201831 00:06:31.205 17:53:20 -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:31.205 17:53:20 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:31.205 17:53:20 -- scheduler/scheduler.sh@37 -- # waitforlisten 3201831 00:06:31.205 17:53:20 -- common/autotest_common.sh@817 -- # '[' -z 3201831 ']' 00:06:31.205 17:53:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.205 17:53:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:31.205 17:53:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:31.205 17:53:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:31.205 17:53:20 -- common/autotest_common.sh@10 -- # set +x 00:06:31.205 [2024-04-15 17:53:20.114081] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
00:06:31.205 [2024-04-15 17:53:20.114195] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3201831 ] 00:06:31.205 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.463 [2024-04-15 17:53:20.185686] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:31.463 [2024-04-15 17:53:20.285509] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.463 [2024-04-15 17:53:20.285561] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:31.463 [2024-04-15 17:53:20.285610] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:31.463 [2024-04-15 17:53:20.285614] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:31.722 17:53:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:31.722 17:53:20 -- common/autotest_common.sh@850 -- # return 0 00:06:31.722 17:53:20 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:31.722 17:53:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:31.722 17:53:20 -- common/autotest_common.sh@10 -- # set +x 00:06:31.722 POWER: Env isn't set yet! 00:06:31.722 POWER: Attempting to initialise ACPI cpufreq power management... 00:06:31.722 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_available_frequencies 00:06:31.722 POWER: Cannot get available frequencies of lcore 0 00:06:31.722 POWER: Attempting to initialise PSTAT power management... 00:06:31.722 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:06:31.722 POWER: Initialized successfully for lcore 0 power management 00:06:31.722 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:06:31.722 POWER: Initialized successfully for lcore 1 power management 00:06:31.722 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:06:31.722 POWER: Initialized successfully for lcore 2 power management 00:06:31.722 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:06:31.722 POWER: Initialized successfully for lcore 3 power management 00:06:31.722 17:53:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:31.722 17:53:20 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:31.722 17:53:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:31.722 17:53:20 -- common/autotest_common.sh@10 -- # set +x 00:06:31.722 [2024-04-15 17:53:20.579204] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
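The POWER lines above follow the selection of the dynamic scheduler: each lcore's cpufreq governor is switched to 'performance' during init. The ordering is visible in the trace: the app is started with --wait-for-rpc, the scheduler is chosen, then init completes. The same sequence as plain RPCs, assuming the default /var/tmp/spdk.sock socket:

    ./scripts/rpc.py framework_set_scheduler dynamic    # must land before init completes
    ./scripts/rpc.py framework_start_init
    ./scripts/rpc.py framework_get_reactors             # inspect resulting thread placement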
00:06:31.722 17:53:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:31.722 17:53:20 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:31.722 17:53:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:31.722 17:53:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:31.722 17:53:20 -- common/autotest_common.sh@10 -- # set +x 00:06:31.981 ************************************ 00:06:31.982 START TEST scheduler_create_thread 00:06:31.982 ************************************ 00:06:31.982 17:53:20 -- common/autotest_common.sh@1111 -- # scheduler_create_thread 00:06:31.982 17:53:20 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:31.982 17:53:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:31.982 17:53:20 -- common/autotest_common.sh@10 -- # set +x 00:06:31.982 2 00:06:31.982 17:53:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:31.982 17:53:20 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:31.982 17:53:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:31.982 17:53:20 -- common/autotest_common.sh@10 -- # set +x 00:06:31.982 3 00:06:31.982 17:53:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:31.982 17:53:20 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:31.982 17:53:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:31.982 17:53:20 -- common/autotest_common.sh@10 -- # set +x 00:06:31.982 4 00:06:31.982 17:53:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:31.982 17:53:20 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:31.982 17:53:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:31.982 17:53:20 -- common/autotest_common.sh@10 -- # set +x 00:06:31.982 5 00:06:31.982 17:53:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:31.982 17:53:20 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:31.982 17:53:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:31.982 17:53:20 -- common/autotest_common.sh@10 -- # set +x 00:06:31.982 6 00:06:31.982 17:53:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:31.982 17:53:20 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:31.982 17:53:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:31.982 17:53:20 -- common/autotest_common.sh@10 -- # set +x 00:06:31.982 7 00:06:31.982 17:53:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:31.982 17:53:20 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:31.982 17:53:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:31.982 17:53:20 -- common/autotest_common.sh@10 -- # set +x 00:06:31.982 8 00:06:31.982 17:53:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:31.982 17:53:20 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:31.982 17:53:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:31.982 17:53:20 -- common/autotest_common.sh@10 -- # set +x 00:06:31.982 9 00:06:31.982 
17:53:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:31.982 17:53:20 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:31.982 17:53:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:31.982 17:53:20 -- common/autotest_common.sh@10 -- # set +x 00:06:31.982 10 00:06:31.982 17:53:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:31.982 17:53:20 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:31.982 17:53:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:31.982 17:53:20 -- common/autotest_common.sh@10 -- # set +x 00:06:31.982 17:53:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:31.982 17:53:20 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:31.982 17:53:20 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:31.982 17:53:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:31.982 17:53:20 -- common/autotest_common.sh@10 -- # set +x 00:06:32.550 17:53:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:32.550 17:53:21 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:32.550 17:53:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:32.550 17:53:21 -- common/autotest_common.sh@10 -- # set +x 00:06:33.926 17:53:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:33.926 17:53:22 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:33.926 17:53:22 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:33.926 17:53:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:33.926 17:53:22 -- common/autotest_common.sh@10 -- # set +x 00:06:34.862 17:53:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:34.862 00:06:34.862 real 0m3.101s 00:06:34.862 user 0m0.012s 00:06:34.862 sys 0m0.002s 00:06:34.862 17:53:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:34.862 17:53:23 -- common/autotest_common.sh@10 -- # set +x 00:06:34.862 ************************************ 00:06:34.862 END TEST scheduler_create_thread 00:06:34.862 ************************************ 00:06:35.122 17:53:23 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:35.122 17:53:23 -- scheduler/scheduler.sh@46 -- # killprocess 3201831 00:06:35.122 17:53:23 -- common/autotest_common.sh@936 -- # '[' -z 3201831 ']' 00:06:35.122 17:53:23 -- common/autotest_common.sh@940 -- # kill -0 3201831 00:06:35.122 17:53:23 -- common/autotest_common.sh@941 -- # uname 00:06:35.122 17:53:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:35.122 17:53:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3201831 00:06:35.122 17:53:23 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:06:35.122 17:53:23 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:06:35.122 17:53:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3201831' 00:06:35.122 killing process with pid 3201831 00:06:35.122 17:53:23 -- common/autotest_common.sh@955 -- # kill 3201831 00:06:35.122 17:53:23 -- common/autotest_common.sh@960 -- # wait 3201831 00:06:35.381 [2024-04-15 17:53:24.195593] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
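scheduler_create_thread, which just passed, drives the test-only plugin RPCs seen in the trace: scheduler_thread_create with -n (name), -m (cpumask), and -a (apparently an active percentage, judging from the values 0, 30, 50, and 100 used), plus scheduler_thread_set_active and scheduler_thread_delete. A condensed replay of the calls above; thread ids 11 and 12 are simply the ids this run handed back:

    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50   # thread 11 to 50% active
    rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12          # drop the 'deleted' thread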
00:06:35.639 POWER: Power management governor of lcore 0 has been set to 'userspace' successfully 00:06:35.639 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:06:35.639 POWER: Power management governor of lcore 1 has been set to 'userspace' successfully 00:06:35.639 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:06:35.639 POWER: Power management governor of lcore 2 has been set to 'userspace' successfully 00:06:35.639 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:06:35.639 POWER: Power management governor of lcore 3 has been set to 'userspace' successfully 00:06:35.639 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:06:35.639 00:06:35.639 real 0m4.452s 00:06:35.639 user 0m7.654s 00:06:35.639 sys 0m0.465s 00:06:35.639 17:53:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:35.639 17:53:24 -- common/autotest_common.sh@10 -- # set +x 00:06:35.639 ************************************ 00:06:35.639 END TEST event_scheduler 00:06:35.639 ************************************ 00:06:35.639 17:53:24 -- event/event.sh@51 -- # modprobe -n nbd 00:06:35.639 17:53:24 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:35.639 17:53:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:35.639 17:53:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:35.639 17:53:24 -- common/autotest_common.sh@10 -- # set +x 00:06:35.639 ************************************ 00:06:35.639 START TEST app_repeat 00:06:35.639 ************************************ 00:06:35.639 17:53:24 -- common/autotest_common.sh@1111 -- # app_repeat_test 00:06:35.639 17:53:24 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:35.639 17:53:24 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:35.639 17:53:24 -- event/event.sh@13 -- # local nbd_list 00:06:35.639 17:53:24 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:35.640 17:53:24 -- event/event.sh@14 -- # local bdev_list 00:06:35.640 17:53:24 -- event/event.sh@15 -- # local repeat_times=4 00:06:35.640 17:53:24 -- event/event.sh@17 -- # modprobe nbd 00:06:35.640 17:53:24 -- event/event.sh@19 -- # repeat_pid=3202381 00:06:35.640 17:53:24 -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:35.640 17:53:24 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:35.640 17:53:24 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3202381' 00:06:35.640 Process app_repeat pid: 3202381 00:06:35.640 17:53:24 -- event/event.sh@23 -- # for i in {0..2} 00:06:35.640 17:53:24 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:35.640 spdk_app_start Round 0 00:06:35.640 17:53:24 -- event/event.sh@25 -- # waitforlisten 3202381 /var/tmp/spdk-nbd.sock 00:06:35.640 17:53:24 -- common/autotest_common.sh@817 -- # '[' -z 3202381 ']' 00:06:35.640 17:53:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:35.640 17:53:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:35.640 17:53:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:35.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:35.640 17:53:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:35.640 17:53:24 -- common/autotest_common.sh@10 -- # set +x 00:06:35.899 [2024-04-15 17:53:24.606786] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:06:35.899 [2024-04-15 17:53:24.606853] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3202381 ] 00:06:35.899 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.899 [2024-04-15 17:53:24.674567] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:35.899 [2024-04-15 17:53:24.768601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:35.899 [2024-04-15 17:53:24.768607] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.158 17:53:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:36.158 17:53:24 -- common/autotest_common.sh@850 -- # return 0 00:06:36.158 17:53:24 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:36.416 Malloc0 00:06:36.416 17:53:25 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:36.675 Malloc1 00:06:36.675 17:53:25 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:36.675 17:53:25 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:36.675 17:53:25 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:36.675 17:53:25 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:36.675 17:53:25 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:36.675 17:53:25 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:36.675 17:53:25 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:36.675 17:53:25 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:36.675 17:53:25 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:36.675 17:53:25 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:36.675 17:53:25 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:36.675 17:53:25 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:36.675 17:53:25 -- bdev/nbd_common.sh@12 -- # local i 00:06:36.675 17:53:25 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:36.675 17:53:25 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:36.675 17:53:25 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:37.242 /dev/nbd0 00:06:37.242 17:53:25 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:37.242 17:53:25 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:37.242 17:53:25 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:06:37.242 17:53:25 -- common/autotest_common.sh@855 -- # local i 00:06:37.242 17:53:25 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:06:37.242 17:53:25 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:06:37.242 17:53:25 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:06:37.242 17:53:25 -- 
common/autotest_common.sh@859 -- # break 00:06:37.242 17:53:25 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:06:37.242 17:53:25 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:06:37.242 17:53:25 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:37.242 1+0 records in 00:06:37.242 1+0 records out 00:06:37.242 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00024261 s, 16.9 MB/s 00:06:37.242 17:53:25 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:37.242 17:53:25 -- common/autotest_common.sh@872 -- # size=4096 00:06:37.242 17:53:25 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:37.242 17:53:25 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:06:37.242 17:53:25 -- common/autotest_common.sh@875 -- # return 0 00:06:37.242 17:53:25 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:37.242 17:53:25 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:37.242 17:53:25 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:37.810 /dev/nbd1 00:06:37.810 17:53:26 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:37.810 17:53:26 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:37.810 17:53:26 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:06:37.810 17:53:26 -- common/autotest_common.sh@855 -- # local i 00:06:37.810 17:53:26 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:06:37.810 17:53:26 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:06:37.810 17:53:26 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:06:37.810 17:53:26 -- common/autotest_common.sh@859 -- # break 00:06:37.810 17:53:26 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:06:37.810 17:53:26 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:06:37.810 17:53:26 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:37.810 1+0 records in 00:06:37.810 1+0 records out 00:06:37.810 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000198207 s, 20.7 MB/s 00:06:37.810 17:53:26 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:37.810 17:53:26 -- common/autotest_common.sh@872 -- # size=4096 00:06:37.810 17:53:26 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:37.810 17:53:26 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:06:37.810 17:53:26 -- common/autotest_common.sh@875 -- # return 0 00:06:37.810 17:53:26 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:37.810 17:53:26 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:37.810 17:53:26 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:37.810 17:53:26 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:37.810 17:53:26 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:38.068 17:53:27 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:38.068 { 00:06:38.068 "nbd_device": "/dev/nbd0", 00:06:38.068 "bdev_name": "Malloc0" 00:06:38.068 }, 00:06:38.068 { 00:06:38.068 "nbd_device": "/dev/nbd1", 
00:06:38.068 "bdev_name": "Malloc1" 00:06:38.068 } 00:06:38.068 ]' 00:06:38.068 17:53:27 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:38.068 { 00:06:38.069 "nbd_device": "/dev/nbd0", 00:06:38.069 "bdev_name": "Malloc0" 00:06:38.069 }, 00:06:38.069 { 00:06:38.069 "nbd_device": "/dev/nbd1", 00:06:38.069 "bdev_name": "Malloc1" 00:06:38.069 } 00:06:38.069 ]' 00:06:38.069 17:53:27 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:38.327 17:53:27 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:38.327 /dev/nbd1' 00:06:38.327 17:53:27 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:38.327 /dev/nbd1' 00:06:38.327 17:53:27 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:38.327 17:53:27 -- bdev/nbd_common.sh@65 -- # count=2 00:06:38.327 17:53:27 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:38.327 17:53:27 -- bdev/nbd_common.sh@95 -- # count=2 00:06:38.327 17:53:27 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:38.327 17:53:27 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:38.327 17:53:27 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:38.327 17:53:27 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:38.327 17:53:27 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:38.327 17:53:27 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:38.327 17:53:27 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:38.327 17:53:27 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:38.327 256+0 records in 00:06:38.327 256+0 records out 00:06:38.327 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00467312 s, 224 MB/s 00:06:38.327 17:53:27 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:38.327 17:53:27 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:38.327 256+0 records in 00:06:38.327 256+0 records out 00:06:38.327 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0236138 s, 44.4 MB/s 00:06:38.327 17:53:27 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:38.327 17:53:27 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:38.327 256+0 records in 00:06:38.327 256+0 records out 00:06:38.327 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0260323 s, 40.3 MB/s 00:06:38.327 17:53:27 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:38.327 17:53:27 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:38.327 17:53:27 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:38.327 17:53:27 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:38.327 17:53:27 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:38.327 17:53:27 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:38.327 17:53:27 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:38.327 17:53:27 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:38.327 17:53:27 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:38.327 17:53:27 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:38.327 17:53:27 -- bdev/nbd_common.sh@83 -- # cmp -b -n 
1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:38.327 17:53:27 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:38.327 17:53:27 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:38.327 17:53:27 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:38.327 17:53:27 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:38.327 17:53:27 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:38.327 17:53:27 -- bdev/nbd_common.sh@51 -- # local i 00:06:38.327 17:53:27 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:38.327 17:53:27 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:38.586 17:53:27 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:38.586 17:53:27 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:38.586 17:53:27 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:38.586 17:53:27 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:38.586 17:53:27 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:38.586 17:53:27 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:38.586 17:53:27 -- bdev/nbd_common.sh@41 -- # break 00:06:38.586 17:53:27 -- bdev/nbd_common.sh@45 -- # return 0 00:06:38.586 17:53:27 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:38.586 17:53:27 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:38.845 17:53:27 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:38.845 17:53:27 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:38.845 17:53:27 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:38.845 17:53:27 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:38.845 17:53:27 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:38.845 17:53:27 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:38.845 17:53:27 -- bdev/nbd_common.sh@41 -- # break 00:06:38.845 17:53:27 -- bdev/nbd_common.sh@45 -- # return 0 00:06:39.104 17:53:27 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:39.104 17:53:27 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:39.104 17:53:27 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:39.362 17:53:28 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:39.362 17:53:28 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:39.362 17:53:28 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:39.362 17:53:28 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:39.362 17:53:28 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:39.362 17:53:28 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:39.362 17:53:28 -- bdev/nbd_common.sh@65 -- # true 00:06:39.362 17:53:28 -- bdev/nbd_common.sh@65 -- # count=0 00:06:39.362 17:53:28 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:39.362 17:53:28 -- bdev/nbd_common.sh@104 -- # count=0 00:06:39.362 17:53:28 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:39.362 17:53:28 -- bdev/nbd_common.sh@109 -- # return 0 00:06:39.362 17:53:28 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:39.930 17:53:28 -- event/event.sh@35 -- # 
sleep 3 00:06:39.930 [2024-04-15 17:53:28.863495] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:40.188 [2024-04-15 17:53:28.954622] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.188 [2024-04-15 17:53:28.954625] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:40.188 [2024-04-15 17:53:29.017386] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:40.188 [2024-04-15 17:53:29.017465] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:42.718 17:53:31 -- event/event.sh@23 -- # for i in {0..2} 00:06:42.718 17:53:31 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:42.718 spdk_app_start Round 1 00:06:42.718 17:53:31 -- event/event.sh@25 -- # waitforlisten 3202381 /var/tmp/spdk-nbd.sock 00:06:42.718 17:53:31 -- common/autotest_common.sh@817 -- # '[' -z 3202381 ']' 00:06:42.718 17:53:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:42.718 17:53:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:42.719 17:53:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:42.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:42.719 17:53:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:42.719 17:53:31 -- common/autotest_common.sh@10 -- # set +x 00:06:43.285 17:53:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:43.285 17:53:32 -- common/autotest_common.sh@850 -- # return 0 00:06:43.285 17:53:32 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:43.851 Malloc0 00:06:43.851 17:53:32 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:44.110 Malloc1 00:06:44.110 17:53:32 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:44.110 17:53:32 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:44.110 17:53:32 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:44.110 17:53:32 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:44.110 17:53:32 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:44.110 17:53:32 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:44.110 17:53:32 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:44.110 17:53:32 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:44.110 17:53:32 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:44.110 17:53:32 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:44.110 17:53:32 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:44.110 17:53:32 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:44.110 17:53:32 -- bdev/nbd_common.sh@12 -- # local i 00:06:44.110 17:53:32 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:44.110 17:53:32 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:44.110 17:53:32 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:44.676 /dev/nbd0 00:06:44.676 17:53:33 -- 
bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:44.676 17:53:33 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:44.676 17:53:33 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:06:44.676 17:53:33 -- common/autotest_common.sh@855 -- # local i 00:06:44.676 17:53:33 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:06:44.676 17:53:33 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:06:44.676 17:53:33 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:06:44.676 17:53:33 -- common/autotest_common.sh@859 -- # break 00:06:44.676 17:53:33 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:06:44.676 17:53:33 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:06:44.676 17:53:33 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:44.676 1+0 records in 00:06:44.676 1+0 records out 00:06:44.676 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000169606 s, 24.2 MB/s 00:06:44.676 17:53:33 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:44.676 17:53:33 -- common/autotest_common.sh@872 -- # size=4096 00:06:44.676 17:53:33 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:44.676 17:53:33 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:06:44.676 17:53:33 -- common/autotest_common.sh@875 -- # return 0 00:06:44.676 17:53:33 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:44.676 17:53:33 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:44.676 17:53:33 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:44.934 /dev/nbd1 00:06:44.934 17:53:33 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:44.934 17:53:33 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:44.934 17:53:33 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:06:44.934 17:53:33 -- common/autotest_common.sh@855 -- # local i 00:06:44.934 17:53:33 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:06:44.934 17:53:33 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:06:44.934 17:53:33 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:06:44.934 17:53:33 -- common/autotest_common.sh@859 -- # break 00:06:44.934 17:53:33 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:06:44.934 17:53:33 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:06:44.934 17:53:33 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:44.934 1+0 records in 00:06:44.934 1+0 records out 00:06:44.934 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000205197 s, 20.0 MB/s 00:06:44.934 17:53:33 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:44.934 17:53:33 -- common/autotest_common.sh@872 -- # size=4096 00:06:44.934 17:53:33 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:44.934 17:53:33 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:06:44.934 17:53:33 -- common/autotest_common.sh@875 -- # return 0 00:06:44.934 17:53:33 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:44.934 17:53:33 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:44.934 17:53:33 -- bdev/nbd_common.sh@95 
-- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:44.934 17:53:33 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:44.934 17:53:33 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:45.192 17:53:34 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:45.192 { 00:06:45.192 "nbd_device": "/dev/nbd0", 00:06:45.193 "bdev_name": "Malloc0" 00:06:45.193 }, 00:06:45.193 { 00:06:45.193 "nbd_device": "/dev/nbd1", 00:06:45.193 "bdev_name": "Malloc1" 00:06:45.193 } 00:06:45.193 ]' 00:06:45.193 17:53:34 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:45.193 { 00:06:45.193 "nbd_device": "/dev/nbd0", 00:06:45.193 "bdev_name": "Malloc0" 00:06:45.193 }, 00:06:45.193 { 00:06:45.193 "nbd_device": "/dev/nbd1", 00:06:45.193 "bdev_name": "Malloc1" 00:06:45.193 } 00:06:45.193 ]' 00:06:45.193 17:53:34 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:45.193 17:53:34 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:45.193 /dev/nbd1' 00:06:45.193 17:53:34 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:45.193 /dev/nbd1' 00:06:45.193 17:53:34 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:45.193 17:53:34 -- bdev/nbd_common.sh@65 -- # count=2 00:06:45.193 17:53:34 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:45.193 17:53:34 -- bdev/nbd_common.sh@95 -- # count=2 00:06:45.193 17:53:34 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:45.193 17:53:34 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:45.193 17:53:34 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:45.193 17:53:34 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:45.193 17:53:34 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:45.193 17:53:34 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:45.193 17:53:34 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:45.193 17:53:34 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:45.450 256+0 records in 00:06:45.450 256+0 records out 00:06:45.450 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00774587 s, 135 MB/s 00:06:45.450 17:53:34 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:45.450 17:53:34 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:45.450 256+0 records in 00:06:45.450 256+0 records out 00:06:45.450 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0261738 s, 40.1 MB/s 00:06:45.450 17:53:34 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:45.450 17:53:34 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:45.450 256+0 records in 00:06:45.450 256+0 records out 00:06:45.450 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0272704 s, 38.5 MB/s 00:06:45.450 17:53:34 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:45.450 17:53:34 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:45.450 17:53:34 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:45.450 17:53:34 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:45.450 17:53:34 -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:45.450 17:53:34 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:45.450 17:53:34 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:45.450 17:53:34 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:45.450 17:53:34 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:45.450 17:53:34 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:45.450 17:53:34 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:45.450 17:53:34 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:45.450 17:53:34 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:45.450 17:53:34 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:45.450 17:53:34 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:45.450 17:53:34 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:45.450 17:53:34 -- bdev/nbd_common.sh@51 -- # local i 00:06:45.450 17:53:34 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:45.450 17:53:34 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:45.708 17:53:34 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:45.708 17:53:34 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:45.708 17:53:34 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:45.708 17:53:34 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:45.708 17:53:34 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:45.708 17:53:34 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:45.708 17:53:34 -- bdev/nbd_common.sh@41 -- # break 00:06:45.708 17:53:34 -- bdev/nbd_common.sh@45 -- # return 0 00:06:45.708 17:53:34 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:45.708 17:53:34 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:45.966 17:53:34 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:45.966 17:53:34 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:45.966 17:53:34 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:45.966 17:53:34 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:45.966 17:53:34 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:45.966 17:53:34 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:45.966 17:53:34 -- bdev/nbd_common.sh@41 -- # break 00:06:45.966 17:53:34 -- bdev/nbd_common.sh@45 -- # return 0 00:06:45.966 17:53:34 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:45.966 17:53:34 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:45.966 17:53:34 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:46.533 17:53:35 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:46.533 17:53:35 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:46.533 17:53:35 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:46.533 17:53:35 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:46.533 17:53:35 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:46.533 17:53:35 -- bdev/nbd_common.sh@65 -- # 
grep -c /dev/nbd 00:06:46.533 17:53:35 -- bdev/nbd_common.sh@65 -- # true 00:06:46.533 17:53:35 -- bdev/nbd_common.sh@65 -- # count=0 00:06:46.533 17:53:35 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:46.533 17:53:35 -- bdev/nbd_common.sh@104 -- # count=0 00:06:46.533 17:53:35 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:46.533 17:53:35 -- bdev/nbd_common.sh@109 -- # return 0 00:06:46.533 17:53:35 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:46.792 17:53:35 -- event/event.sh@35 -- # sleep 3 00:06:47.050 [2024-04-15 17:53:35.847679] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:47.050 [2024-04-15 17:53:35.938547] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:47.050 [2024-04-15 17:53:35.938551] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.050 [2024-04-15 17:53:36.001880] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:47.050 [2024-04-15 17:53:36.001959] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:50.337 17:53:38 -- event/event.sh@23 -- # for i in {0..2} 00:06:50.337 17:53:38 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:50.337 spdk_app_start Round 2 00:06:50.337 17:53:38 -- event/event.sh@25 -- # waitforlisten 3202381 /var/tmp/spdk-nbd.sock 00:06:50.337 17:53:38 -- common/autotest_common.sh@817 -- # '[' -z 3202381 ']' 00:06:50.337 17:53:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:50.337 17:53:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:50.337 17:53:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:50.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
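The waitfornbd probes traced through this section follow a fixed two-phase pattern: wait for the device node to show up in /proc/partitions, then prove it is readable by pulling one 4 KiB block off it. A minimal sketch reconstructed from the xtrace above (the sleep between retries, the failure returns, and the shortened temp-file path are assumptions; the real helper lives in autotest_common.sh):

```bash
waitfornbd() {
    local nbd_name=$1 i size
    # Phase 1: wait for the kernel to publish the device in /proc/partitions.
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1
    done
    # Phase 2: read one 4 KiB block with O_DIRECT and check it actually landed.
    for ((i = 1; i <= 20; i++)); do
        dd if=/dev/$nbd_name of=./nbdtest bs=4096 count=1 iflag=direct
        size=$(stat -c %s ./nbdtest)
        rm -f ./nbdtest
        [ "$size" != 0 ] && return 0
        sleep 0.1
    done
    return 1
}
```

In the trace it is invoked once per device right after nbd_start_disk, e.g. `waitfornbd nbd0`.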
00:06:50.337 17:53:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:50.337 17:53:38 -- common/autotest_common.sh@10 -- # set +x 00:06:50.337 17:53:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:50.337 17:53:38 -- common/autotest_common.sh@850 -- # return 0 00:06:50.337 17:53:38 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:50.625 Malloc0 00:06:50.625 17:53:39 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:50.884 Malloc1 00:06:50.884 17:53:39 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:50.884 17:53:39 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:50.884 17:53:39 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:50.884 17:53:39 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:50.884 17:53:39 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:50.884 17:53:39 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:50.884 17:53:39 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:50.884 17:53:39 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:50.884 17:53:39 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:50.884 17:53:39 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:50.884 17:53:39 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:50.884 17:53:39 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:50.884 17:53:39 -- bdev/nbd_common.sh@12 -- # local i 00:06:50.884 17:53:39 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:50.884 17:53:39 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:50.884 17:53:39 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:51.450 /dev/nbd0 00:06:51.450 17:53:40 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:51.450 17:53:40 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:51.450 17:53:40 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:06:51.450 17:53:40 -- common/autotest_common.sh@855 -- # local i 00:06:51.450 17:53:40 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:06:51.450 17:53:40 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:06:51.450 17:53:40 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:06:51.450 17:53:40 -- common/autotest_common.sh@859 -- # break 00:06:51.450 17:53:40 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:06:51.450 17:53:40 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:06:51.450 17:53:40 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:51.450 1+0 records in 00:06:51.450 1+0 records out 00:06:51.450 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000192719 s, 21.3 MB/s 00:06:51.450 17:53:40 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:51.450 17:53:40 -- common/autotest_common.sh@872 -- # size=4096 00:06:51.450 17:53:40 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:51.450 17:53:40 -- common/autotest_common.sh@874 -- # 
'[' 4096 '!=' 0 ']' 00:06:51.450 17:53:40 -- common/autotest_common.sh@875 -- # return 0 00:06:51.450 17:53:40 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:51.450 17:53:40 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:51.450 17:53:40 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:51.708 /dev/nbd1 00:06:51.708 17:53:40 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:51.708 17:53:40 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:51.708 17:53:40 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:06:51.708 17:53:40 -- common/autotest_common.sh@855 -- # local i 00:06:51.708 17:53:40 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:06:51.708 17:53:40 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:06:51.708 17:53:40 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:06:51.708 17:53:40 -- common/autotest_common.sh@859 -- # break 00:06:51.708 17:53:40 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:06:51.708 17:53:40 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:06:51.708 17:53:40 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:51.708 1+0 records in 00:06:51.708 1+0 records out 00:06:51.708 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000260598 s, 15.7 MB/s 00:06:51.708 17:53:40 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:51.709 17:53:40 -- common/autotest_common.sh@872 -- # size=4096 00:06:51.709 17:53:40 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:51.709 17:53:40 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:06:51.709 17:53:40 -- common/autotest_common.sh@875 -- # return 0 00:06:51.709 17:53:40 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:51.709 17:53:40 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:51.709 17:53:40 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:51.709 17:53:40 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.709 17:53:40 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:51.967 17:53:40 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:51.967 { 00:06:51.967 "nbd_device": "/dev/nbd0", 00:06:51.967 "bdev_name": "Malloc0" 00:06:51.967 }, 00:06:51.967 { 00:06:51.967 "nbd_device": "/dev/nbd1", 00:06:51.967 "bdev_name": "Malloc1" 00:06:51.967 } 00:06:51.967 ]' 00:06:51.967 17:53:40 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:51.967 { 00:06:51.967 "nbd_device": "/dev/nbd0", 00:06:51.967 "bdev_name": "Malloc0" 00:06:51.967 }, 00:06:51.967 { 00:06:51.967 "nbd_device": "/dev/nbd1", 00:06:51.967 "bdev_name": "Malloc1" 00:06:51.967 } 00:06:51.967 ]' 00:06:51.967 17:53:40 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:51.967 17:53:40 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:51.967 /dev/nbd1' 00:06:51.967 17:53:40 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:51.967 /dev/nbd1' 00:06:51.967 17:53:40 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:51.967 17:53:40 -- bdev/nbd_common.sh@65 -- # count=2 00:06:51.967 17:53:40 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:51.967 17:53:40 -- bdev/nbd_common.sh@95 -- # count=2 00:06:51.967 17:53:40 -- 
bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:51.967 17:53:40 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:51.967 17:53:40 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:51.967 17:53:40 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:51.967 17:53:40 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:51.967 17:53:40 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:51.967 17:53:40 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:51.967 17:53:40 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:51.967 256+0 records in 00:06:51.967 256+0 records out 00:06:51.967 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00516473 s, 203 MB/s 00:06:51.967 17:53:40 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:51.967 17:53:40 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:52.226 256+0 records in 00:06:52.226 256+0 records out 00:06:52.226 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0235677 s, 44.5 MB/s 00:06:52.226 17:53:40 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:52.226 17:53:40 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:52.226 256+0 records in 00:06:52.226 256+0 records out 00:06:52.226 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0271117 s, 38.7 MB/s 00:06:52.226 17:53:40 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:52.226 17:53:40 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:52.226 17:53:40 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:52.226 17:53:40 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:52.226 17:53:40 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:52.226 17:53:40 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:52.226 17:53:40 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:52.226 17:53:40 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:52.226 17:53:40 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:52.226 17:53:40 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:52.226 17:53:40 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:52.226 17:53:40 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:52.226 17:53:40 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:52.226 17:53:40 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:52.226 17:53:40 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:52.226 17:53:40 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:52.226 17:53:40 -- bdev/nbd_common.sh@51 -- # local i 00:06:52.226 17:53:40 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:52.226 17:53:40 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:52.794 17:53:41 
-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:52.794 17:53:41 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:52.794 17:53:41 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:52.794 17:53:41 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:52.794 17:53:41 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:52.794 17:53:41 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:52.794 17:53:41 -- bdev/nbd_common.sh@41 -- # break 00:06:52.794 17:53:41 -- bdev/nbd_common.sh@45 -- # return 0 00:06:52.794 17:53:41 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:52.794 17:53:41 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:53.052 17:53:41 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:53.052 17:53:41 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:53.052 17:53:41 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:53.052 17:53:41 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:53.052 17:53:41 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:53.052 17:53:41 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:53.052 17:53:41 -- bdev/nbd_common.sh@41 -- # break 00:06:53.052 17:53:41 -- bdev/nbd_common.sh@45 -- # return 0 00:06:53.052 17:53:41 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:53.052 17:53:41 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:53.052 17:53:41 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:53.618 17:53:42 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:53.618 17:53:42 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:53.618 17:53:42 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:53.618 17:53:42 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:53.618 17:53:42 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:53.618 17:53:42 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:53.618 17:53:42 -- bdev/nbd_common.sh@65 -- # true 00:06:53.618 17:53:42 -- bdev/nbd_common.sh@65 -- # count=0 00:06:53.618 17:53:42 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:53.618 17:53:42 -- bdev/nbd_common.sh@104 -- # count=0 00:06:53.618 17:53:42 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:53.618 17:53:42 -- bdev/nbd_common.sh@109 -- # return 0 00:06:53.618 17:53:42 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:53.878 17:53:42 -- event/event.sh@35 -- # sleep 3 00:06:54.137 [2024-04-15 17:53:42.944152] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:54.137 [2024-04-15 17:53:43.035615] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.137 [2024-04-15 17:53:43.035620] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:54.396 [2024-04-15 17:53:43.097341] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:54.396 [2024-04-15 17:53:43.097408] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
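Both repeat rounds above exercise the same write-then-verify helper from nbd_common.sh. Reconstructed from the dd/cmp commands in the xtrace (temp-file path shortened; the block sizes, counts, and flags match the trace), nbd_dd_data_verify is roughly:

```bash
nbd_dd_data_verify() {
    local nbd_list=($1) operation=$2
    local tmp_file=./nbdrandtest

    if [ "$operation" = write ]; then
        # Seed 1 MiB of random data, then copy it onto every attached device.
        dd if=/dev/urandom of=$tmp_file bs=4096 count=256
        for i in "${nbd_list[@]}"; do
            dd if=$tmp_file of=$i bs=4096 count=256 oflag=direct
        done
    elif [ "$operation" = verify ]; then
        # Byte-compare each device against the seed file, then clean up.
        for i in "${nbd_list[@]}"; do
            cmp -b -n 1M $tmp_file $i
        done
        rm $tmp_file
    fi
}
```

The write pass uses oflag=direct so the data is pushed through to the nbd backend rather than sitting in the page cache, which is what makes the later cmp a real end-to-end check.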
00:06:56.932 17:53:45 -- event/event.sh@38 -- # waitforlisten 3202381 /var/tmp/spdk-nbd.sock 00:06:56.932 17:53:45 -- common/autotest_common.sh@817 -- # '[' -z 3202381 ']' 00:06:56.932 17:53:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:56.932 17:53:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:56.932 17:53:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:56.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:56.932 17:53:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:56.932 17:53:45 -- common/autotest_common.sh@10 -- # set +x 00:06:57.191 17:53:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:57.191 17:53:45 -- common/autotest_common.sh@850 -- # return 0 00:06:57.191 17:53:45 -- event/event.sh@39 -- # killprocess 3202381 00:06:57.191 17:53:45 -- common/autotest_common.sh@936 -- # '[' -z 3202381 ']' 00:06:57.191 17:53:45 -- common/autotest_common.sh@940 -- # kill -0 3202381 00:06:57.191 17:53:45 -- common/autotest_common.sh@941 -- # uname 00:06:57.191 17:53:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:57.191 17:53:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3202381 00:06:57.191 17:53:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:57.191 17:53:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:57.191 17:53:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3202381' 00:06:57.191 killing process with pid 3202381 00:06:57.191 17:53:46 -- common/autotest_common.sh@955 -- # kill 3202381 00:06:57.191 17:53:46 -- common/autotest_common.sh@960 -- # wait 3202381 00:06:57.450 spdk_app_start is called in Round 0. 00:06:57.450 Shutdown signal received, stop current app iteration 00:06:57.450 Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 reinitialization... 00:06:57.450 spdk_app_start is called in Round 1. 00:06:57.450 Shutdown signal received, stop current app iteration 00:06:57.450 Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 reinitialization... 00:06:57.450 spdk_app_start is called in Round 2. 00:06:57.450 Shutdown signal received, stop current app iteration 00:06:57.450 Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 reinitialization... 00:06:57.450 spdk_app_start is called in Round 3. 
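The teardown just traced goes through the killprocess helper from autotest_common.sh: validate the pid, refuse to kill a sudo wrapper, then signal and reap. A sketch matching the xtrace (non-Linux branch and error messages omitted):

```bash
killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1
    kill -0 "$pid" || return 1               # already gone? nothing to do
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")   # 'reactor_0' here
    fi
    [ "$process_name" = sudo ] && return 1   # never kill the sudo wrapper itself
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                              # reap so the exit status is collected
}
```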
00:06:57.450 Shutdown signal received, stop current app iteration 00:06:57.450 17:53:46 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:57.450 17:53:46 -- event/event.sh@42 -- # return 0 00:06:57.450 00:06:57.450 real 0m21.662s 00:06:57.450 user 0m49.503s 00:06:57.450 sys 0m4.523s 00:06:57.450 17:53:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:57.450 17:53:46 -- common/autotest_common.sh@10 -- # set +x 00:06:57.450 ************************************ 00:06:57.450 END TEST app_repeat 00:06:57.450 ************************************ 00:06:57.450 17:53:46 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:57.450 17:53:46 -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:57.450 17:53:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:57.450 17:53:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:57.450 17:53:46 -- common/autotest_common.sh@10 -- # set +x 00:06:57.450 ************************************ 00:06:57.450 START TEST cpu_locks 00:06:57.450 ************************************ 00:06:57.450 17:53:46 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:57.708 * Looking for test storage... 00:06:57.708 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:57.708 17:53:46 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:57.708 17:53:46 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:57.708 17:53:46 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:57.708 17:53:46 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:57.708 17:53:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:57.708 17:53:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:57.708 17:53:46 -- common/autotest_common.sh@10 -- # set +x 00:06:57.708 ************************************ 00:06:57.708 START TEST default_locks 00:06:57.708 ************************************ 00:06:57.708 17:53:46 -- common/autotest_common.sh@1111 -- # default_locks 00:06:57.708 17:53:46 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3205192 00:06:57.708 17:53:46 -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:57.708 17:53:46 -- event/cpu_locks.sh@47 -- # waitforlisten 3205192 00:06:57.708 17:53:46 -- common/autotest_common.sh@817 -- # '[' -z 3205192 ']' 00:06:57.708 17:53:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.708 17:53:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:57.708 17:53:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:57.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:57.708 17:53:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:57.708 17:53:46 -- common/autotest_common.sh@10 -- # set +x 00:06:57.708 [2024-04-15 17:53:46.620101] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
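Every test in this log is driven by the run_test wrapper, visible here only through its START/END banners and the time output. Its shape is roughly the following; this is a guess from those artifacts, not the real helper body, which also does the argument validation seen in the '[' 2 -le 1 ']' checks above:

```bash
run_test() {
    local test_name=$1
    shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"    # produces the real/user/sys lines in this log
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
}
```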
00:06:57.708 [2024-04-15 17:53:46.620192] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3205192 ] 00:06:57.708 EAL: No free 2048 kB hugepages reported on node 1 00:06:57.967 [2024-04-15 17:53:46.689141] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.967 [2024-04-15 17:53:46.783618] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.226 17:53:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:58.226 17:53:47 -- common/autotest_common.sh@850 -- # return 0 00:06:58.226 17:53:47 -- event/cpu_locks.sh@49 -- # locks_exist 3205192 00:06:58.226 17:53:47 -- event/cpu_locks.sh@22 -- # lslocks -p 3205192 00:06:58.226 17:53:47 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:58.485 lslocks: write error 00:06:58.485 17:53:47 -- event/cpu_locks.sh@50 -- # killprocess 3205192 00:06:58.485 17:53:47 -- common/autotest_common.sh@936 -- # '[' -z 3205192 ']' 00:06:58.485 17:53:47 -- common/autotest_common.sh@940 -- # kill -0 3205192 00:06:58.485 17:53:47 -- common/autotest_common.sh@941 -- # uname 00:06:58.485 17:53:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:58.485 17:53:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3205192 00:06:58.485 17:53:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:58.743 17:53:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:58.743 17:53:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3205192' 00:06:58.743 killing process with pid 3205192 00:06:58.743 17:53:47 -- common/autotest_common.sh@955 -- # kill 3205192 00:06:58.743 17:53:47 -- common/autotest_common.sh@960 -- # wait 3205192 00:06:59.003 17:53:47 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3205192 00:06:59.003 17:53:47 -- common/autotest_common.sh@638 -- # local es=0 00:06:59.003 17:53:47 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 3205192 00:06:59.003 17:53:47 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:06:59.003 17:53:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:59.003 17:53:47 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:06:59.003 17:53:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:59.003 17:53:47 -- common/autotest_common.sh@641 -- # waitforlisten 3205192 00:06:59.003 17:53:47 -- common/autotest_common.sh@817 -- # '[' -z 3205192 ']' 00:06:59.003 17:53:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:59.003 17:53:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:59.003 17:53:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:59.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
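locks_exist, traced above, is essentially a one-liner. The stray 'lslocks: write error' in this log is expected, not a failure: grep -q exits as soon as it matches, so lslocks hits a closed pipe on its next write.

```bash
locks_exist() {
    # Assert the pid holds a lock on one of the per-core
    # /var/tmp/spdk_cpu_lock_* files created by the app framework.
    lslocks -p "$1" | grep -q spdk_cpu_lock
}
```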
00:06:59.003 17:53:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:59.003 17:53:47 -- common/autotest_common.sh@10 -- # set +x 00:06:59.003 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (3205192) - No such process 00:06:59.003 ERROR: process (pid: 3205192) is no longer running 00:06:59.003 17:53:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:59.003 17:53:47 -- common/autotest_common.sh@850 -- # return 1 00:06:59.003 17:53:47 -- common/autotest_common.sh@641 -- # es=1 00:06:59.003 17:53:47 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:59.003 17:53:47 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:59.003 17:53:47 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:59.003 17:53:47 -- event/cpu_locks.sh@54 -- # no_locks 00:06:59.003 17:53:47 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:59.003 17:53:47 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:59.003 17:53:47 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:59.003 00:06:59.003 real 0m1.299s 00:06:59.003 user 0m1.236s 00:06:59.003 sys 0m0.578s 00:06:59.003 17:53:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:59.003 17:53:47 -- common/autotest_common.sh@10 -- # set +x 00:06:59.003 ************************************ 00:06:59.003 END TEST default_locks 00:06:59.003 ************************************ 00:06:59.003 17:53:47 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:59.003 17:53:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:59.003 17:53:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:59.003 17:53:47 -- common/autotest_common.sh@10 -- # set +x 00:06:59.262 ************************************ 00:06:59.262 START TEST default_locks_via_rpc 00:06:59.262 ************************************ 00:06:59.262 17:53:48 -- common/autotest_common.sh@1111 -- # default_locks_via_rpc 00:06:59.262 17:53:48 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3205360 00:06:59.262 17:53:48 -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:59.262 17:53:48 -- event/cpu_locks.sh@63 -- # waitforlisten 3205360 00:06:59.262 17:53:48 -- common/autotest_common.sh@817 -- # '[' -z 3205360 ']' 00:06:59.262 17:53:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:59.262 17:53:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:59.262 17:53:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:59.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:59.262 17:53:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:59.262 17:53:48 -- common/autotest_common.sh@10 -- # set +x 00:06:59.262 [2024-04-15 17:53:48.069981] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
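default_locks finishes by asserting that waitforlisten on the dead pid fails, using the NOT wrapper traced in the es=/valid_exec_arg lines above. A trimmed sketch (the valid_exec_arg lookup and the es > 128 signal handling are real branches in the trace but simplified away here):

```bash
NOT() {
    local es=0
    "$@" || es=$?
    (( !es == 0 ))   # succeed only when the wrapped command failed
}

# Usage, as in the trace: NOT waitforlisten "$pid" /var/tmp/spdk.sock
```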
00:06:59.262 [2024-04-15 17:53:48.070085] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3205360 ] 00:06:59.262 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.262 [2024-04-15 17:53:48.139142] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.522 [2024-04-15 17:53:48.235683] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.780 17:53:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:59.780 17:53:48 -- common/autotest_common.sh@850 -- # return 0 00:06:59.781 17:53:48 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:59.781 17:53:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:59.781 17:53:48 -- common/autotest_common.sh@10 -- # set +x 00:06:59.781 17:53:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:59.781 17:53:48 -- event/cpu_locks.sh@67 -- # no_locks 00:06:59.781 17:53:48 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:59.781 17:53:48 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:59.781 17:53:48 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:59.781 17:53:48 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:59.781 17:53:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:59.781 17:53:48 -- common/autotest_common.sh@10 -- # set +x 00:06:59.781 17:53:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:59.781 17:53:48 -- event/cpu_locks.sh@71 -- # locks_exist 3205360 00:06:59.781 17:53:48 -- event/cpu_locks.sh@22 -- # lslocks -p 3205360 00:06:59.781 17:53:48 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:00.040 17:53:48 -- event/cpu_locks.sh@73 -- # killprocess 3205360 00:07:00.040 17:53:48 -- common/autotest_common.sh@936 -- # '[' -z 3205360 ']' 00:07:00.040 17:53:48 -- common/autotest_common.sh@940 -- # kill -0 3205360 00:07:00.040 17:53:48 -- common/autotest_common.sh@941 -- # uname 00:07:00.040 17:53:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:00.040 17:53:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3205360 00:07:00.040 17:53:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:00.040 17:53:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:00.040 17:53:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3205360' 00:07:00.040 killing process with pid 3205360 00:07:00.040 17:53:48 -- common/autotest_common.sh@955 -- # kill 3205360 00:07:00.040 17:53:48 -- common/autotest_common.sh@960 -- # wait 3205360 00:07:00.610 00:07:00.610 real 0m1.296s 00:07:00.610 user 0m1.233s 00:07:00.610 sys 0m0.577s 00:07:00.610 17:53:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:00.610 17:53:49 -- common/autotest_common.sh@10 -- # set +x 00:07:00.610 ************************************ 00:07:00.610 END TEST default_locks_via_rpc 00:07:00.610 ************************************ 00:07:00.610 17:53:49 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:00.610 17:53:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:00.610 17:53:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:00.610 17:53:49 -- common/autotest_common.sh@10 -- # set +x 00:07:00.610 ************************************ 00:07:00.610 START TEST non_locking_app_on_locked_coremask 
00:07:00.610 ************************************ 00:07:00.610 17:53:49 -- common/autotest_common.sh@1111 -- # non_locking_app_on_locked_coremask 00:07:00.610 17:53:49 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3205536 00:07:00.610 17:53:49 -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:00.610 17:53:49 -- event/cpu_locks.sh@81 -- # waitforlisten 3205536 /var/tmp/spdk.sock 00:07:00.610 17:53:49 -- common/autotest_common.sh@817 -- # '[' -z 3205536 ']' 00:07:00.610 17:53:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:00.610 17:53:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:00.610 17:53:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:00.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:00.610 17:53:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:00.610 17:53:49 -- common/autotest_common.sh@10 -- # set +x 00:07:00.610 [2024-04-15 17:53:49.493523] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:07:00.610 [2024-04-15 17:53:49.493619] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3205536 ] 00:07:00.610 EAL: No free 2048 kB hugepages reported on node 1 00:07:00.610 [2024-04-15 17:53:49.564174] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.870 [2024-04-15 17:53:49.658905] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.129 17:53:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:01.129 17:53:49 -- common/autotest_common.sh@850 -- # return 0 00:07:01.129 17:53:49 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3205586 00:07:01.129 17:53:49 -- event/cpu_locks.sh@85 -- # waitforlisten 3205586 /var/tmp/spdk2.sock 00:07:01.129 17:53:49 -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:01.129 17:53:49 -- common/autotest_common.sh@817 -- # '[' -z 3205586 ']' 00:07:01.129 17:53:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:01.129 17:53:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:01.129 17:53:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:01.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:01.129 17:53:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:01.129 17:53:49 -- common/autotest_common.sh@10 -- # set +x 00:07:01.129 [2024-04-15 17:53:49.984283] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:07:01.129 [2024-04-15 17:53:49.984389] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3205586 ] 00:07:01.129 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.387 [2024-04-15 17:53:50.094403] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
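Pieced together from the spdk_tgt invocations traced here, the non_locking_app_on_locked_coremask setup looks like the sketch below (launch-in-background syntax is an assumption and binary paths are shortened; the locks_exist check follows just below in the trace):

```bash
# First target claims the core-0 lock file.
spdk_tgt -m 0x1 & pid=$!
waitforlisten "$pid" /var/tmp/spdk.sock
# Second target opts out of cpumask locks, so sharing core 0 is allowed.
spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock & pid2=$!
waitforlisten "$pid2" /var/tmp/spdk2.sock
locks_exist "$pid"   # core 0's lock still belongs to the first instance
```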
00:07:01.387 [2024-04-15 17:53:50.094453] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.387 [2024-04-15 17:53:50.278314] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.324 17:53:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:02.324 17:53:51 -- common/autotest_common.sh@850 -- # return 0 00:07:02.324 17:53:51 -- event/cpu_locks.sh@87 -- # locks_exist 3205536 00:07:02.324 17:53:51 -- event/cpu_locks.sh@22 -- # lslocks -p 3205536 00:07:02.324 17:53:51 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:02.890 lslocks: write error 00:07:02.890 17:53:51 -- event/cpu_locks.sh@89 -- # killprocess 3205536 00:07:02.890 17:53:51 -- common/autotest_common.sh@936 -- # '[' -z 3205536 ']' 00:07:02.890 17:53:51 -- common/autotest_common.sh@940 -- # kill -0 3205536 00:07:02.890 17:53:51 -- common/autotest_common.sh@941 -- # uname 00:07:02.890 17:53:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:02.890 17:53:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3205536 00:07:02.890 17:53:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:02.890 17:53:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:02.890 17:53:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3205536' 00:07:02.890 killing process with pid 3205536 00:07:02.890 17:53:51 -- common/autotest_common.sh@955 -- # kill 3205536 00:07:02.890 17:53:51 -- common/autotest_common.sh@960 -- # wait 3205536 00:07:03.828 17:53:52 -- event/cpu_locks.sh@90 -- # killprocess 3205586 00:07:03.828 17:53:52 -- common/autotest_common.sh@936 -- # '[' -z 3205586 ']' 00:07:03.828 17:53:52 -- common/autotest_common.sh@940 -- # kill -0 3205586 00:07:03.828 17:53:52 -- common/autotest_common.sh@941 -- # uname 00:07:03.828 17:53:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:03.828 17:53:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3205586 00:07:03.829 17:53:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:03.829 17:53:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:03.829 17:53:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3205586' 00:07:03.829 killing process with pid 3205586 00:07:03.829 17:53:52 -- common/autotest_common.sh@955 -- # kill 3205586 00:07:03.829 17:53:52 -- common/autotest_common.sh@960 -- # wait 3205586 00:07:04.087 00:07:04.087 real 0m3.574s 00:07:04.087 user 0m4.163s 00:07:04.087 sys 0m1.219s 00:07:04.087 17:53:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:04.087 17:53:53 -- common/autotest_common.sh@10 -- # set +x 00:07:04.087 ************************************ 00:07:04.087 END TEST non_locking_app_on_locked_coremask 00:07:04.087 ************************************ 00:07:04.087 17:53:53 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:04.087 17:53:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:04.087 17:53:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:04.088 17:53:53 -- common/autotest_common.sh@10 -- # set +x 00:07:04.381 ************************************ 00:07:04.381 START TEST locking_app_on_unlocked_coremask 00:07:04.381 ************************************ 00:07:04.381 17:53:53 -- common/autotest_common.sh@1111 -- # locking_app_on_unlocked_coremask 00:07:04.381 17:53:53 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3205976 00:07:04.381 17:53:53 -- 
event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:04.381 17:53:53 -- event/cpu_locks.sh@99 -- # waitforlisten 3205976 /var/tmp/spdk.sock 00:07:04.381 17:53:53 -- common/autotest_common.sh@817 -- # '[' -z 3205976 ']' 00:07:04.381 17:53:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.381 17:53:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:04.381 17:53:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:04.381 17:53:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:04.381 17:53:53 -- common/autotest_common.sh@10 -- # set +x 00:07:04.381 [2024-04-15 17:53:53.284825] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:07:04.381 [2024-04-15 17:53:53.285002] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3205976 ] 00:07:04.641 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.641 [2024-04-15 17:53:53.382236] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:04.641 [2024-04-15 17:53:53.382277] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.641 [2024-04-15 17:53:53.477361] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.900 17:53:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:04.900 17:53:53 -- common/autotest_common.sh@850 -- # return 0 00:07:04.900 17:53:53 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3206113 00:07:04.900 17:53:53 -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:04.900 17:53:53 -- event/cpu_locks.sh@103 -- # waitforlisten 3206113 /var/tmp/spdk2.sock 00:07:04.900 17:53:53 -- common/autotest_common.sh@817 -- # '[' -z 3206113 ']' 00:07:04.900 17:53:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:04.900 17:53:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:04.900 17:53:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:04.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:04.900 17:53:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:04.900 17:53:53 -- common/autotest_common.sh@10 -- # set +x 00:07:04.900 [2024-04-15 17:53:53.807001] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
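locking_app_on_unlocked_coremask inverts the previous case: here it is the first target that starts with --disable-cpumask-locks, leaving the lock free for a second, normally-locking instance. A sketch of the flow this trace is entering (same assumptions as above):

```bash
spdk_tgt -m 0x1 --disable-cpumask-locks & pid=$!
waitforlisten "$pid" /var/tmp/spdk.sock
# Second target locks core 0 normally and must come up cleanly.
spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock & pid2=$!
waitforlisten "$pid2" /var/tmp/spdk2.sock
locks_exist "$pid2"   # the lock is held by the second, locking instance
```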
00:07:04.900 [2024-04-15 17:53:53.807121] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3206113 ] 00:07:04.900 EAL: No free 2048 kB hugepages reported on node 1 00:07:05.160 [2024-04-15 17:53:53.915276] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.160 [2024-04-15 17:53:54.099108] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.098 17:53:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:06.098 17:53:54 -- common/autotest_common.sh@850 -- # return 0 00:07:06.098 17:53:54 -- event/cpu_locks.sh@105 -- # locks_exist 3206113 00:07:06.098 17:53:54 -- event/cpu_locks.sh@22 -- # lslocks -p 3206113 00:07:06.098 17:53:54 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:07.032 lslocks: write error 00:07:07.032 17:53:55 -- event/cpu_locks.sh@107 -- # killprocess 3205976 00:07:07.032 17:53:55 -- common/autotest_common.sh@936 -- # '[' -z 3205976 ']' 00:07:07.032 17:53:55 -- common/autotest_common.sh@940 -- # kill -0 3205976 00:07:07.032 17:53:55 -- common/autotest_common.sh@941 -- # uname 00:07:07.032 17:53:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:07.032 17:53:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3205976 00:07:07.032 17:53:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:07.032 17:53:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:07.032 17:53:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3205976' 00:07:07.032 killing process with pid 3205976 00:07:07.032 17:53:55 -- common/autotest_common.sh@955 -- # kill 3205976 00:07:07.032 17:53:55 -- common/autotest_common.sh@960 -- # wait 3205976 00:07:07.600 17:53:56 -- event/cpu_locks.sh@108 -- # killprocess 3206113 00:07:07.600 17:53:56 -- common/autotest_common.sh@936 -- # '[' -z 3206113 ']' 00:07:07.600 17:53:56 -- common/autotest_common.sh@940 -- # kill -0 3206113 00:07:07.600 17:53:56 -- common/autotest_common.sh@941 -- # uname 00:07:07.600 17:53:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:07.600 17:53:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3206113 00:07:07.600 17:53:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:07.600 17:53:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:07.600 17:53:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3206113' 00:07:07.600 killing process with pid 3206113 00:07:07.600 17:53:56 -- common/autotest_common.sh@955 -- # kill 3206113 00:07:07.600 17:53:56 -- common/autotest_common.sh@960 -- # wait 3206113 00:07:08.166 00:07:08.166 real 0m3.781s 00:07:08.166 user 0m4.238s 00:07:08.166 sys 0m1.315s 00:07:08.166 17:53:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:08.166 17:53:56 -- common/autotest_common.sh@10 -- # set +x 00:07:08.166 ************************************ 00:07:08.166 END TEST locking_app_on_unlocked_coremask 00:07:08.166 ************************************ 00:07:08.166 17:53:56 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:08.166 17:53:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:08.166 17:53:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:08.166 17:53:56 -- common/autotest_common.sh@10 -- # set +x 00:07:08.166 
************************************ 00:07:08.166 START TEST locking_app_on_locked_coremask 00:07:08.166 ************************************ 00:07:08.166 17:53:57 -- common/autotest_common.sh@1111 -- # locking_app_on_locked_coremask 00:07:08.166 17:53:57 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3206549 00:07:08.166 17:53:57 -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:08.166 17:53:57 -- event/cpu_locks.sh@116 -- # waitforlisten 3206549 /var/tmp/spdk.sock 00:07:08.166 17:53:57 -- common/autotest_common.sh@817 -- # '[' -z 3206549 ']' 00:07:08.166 17:53:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:08.166 17:53:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:08.166 17:53:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:08.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:08.166 17:53:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:08.166 17:53:57 -- common/autotest_common.sh@10 -- # set +x 00:07:08.425 [2024-04-15 17:53:57.123839] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:07:08.425 [2024-04-15 17:53:57.123922] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3206549 ] 00:07:08.425 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.425 [2024-04-15 17:53:57.192941] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.425 [2024-04-15 17:53:57.289178] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.684 17:53:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:08.684 17:53:57 -- common/autotest_common.sh@850 -- # return 0 00:07:08.684 17:53:57 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3206561 00:07:08.684 17:53:57 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3206561 /var/tmp/spdk2.sock 00:07:08.684 17:53:57 -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:08.684 17:53:57 -- common/autotest_common.sh@638 -- # local es=0 00:07:08.684 17:53:57 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 3206561 /var/tmp/spdk2.sock 00:07:08.684 17:53:57 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:07:08.684 17:53:57 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:08.684 17:53:57 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:07:08.684 17:53:57 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:08.684 17:53:57 -- common/autotest_common.sh@641 -- # waitforlisten 3206561 /var/tmp/spdk2.sock 00:07:08.684 17:53:57 -- common/autotest_common.sh@817 -- # '[' -z 3206561 ']' 00:07:08.684 17:53:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:08.684 17:53:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:08.684 17:53:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:08.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
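In locking_app_on_locked_coremask, traced around this point, the second target keeps cpumask locks enabled on the already-claimed core 0, so spdk_app_start must fail ('Unable to acquire lock on assigned core mask - exiting.') and its waitforlisten is asserted to fail via NOT. The shape of the flow (sketch, same assumptions as above):

```bash
spdk_tgt -m 0x1 & pid=$!
waitforlisten "$pid" /var/tmp/spdk.sock
# Same core, locks still enabled: claiming core 0 must fail.
spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock & pid2=$!
NOT waitforlisten "$pid2" /var/tmp/spdk2.sock   # asserted to never come up
locks_exist "$pid"   # core 0's lock remains with the first target
```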
00:07:08.684 17:53:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:08.684 17:53:57 -- common/autotest_common.sh@10 -- # set +x 00:07:08.942 [2024-04-15 17:53:57.656303] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:07:08.942 [2024-04-15 17:53:57.656479] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3206561 ] 00:07:08.942 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.942 [2024-04-15 17:53:57.798680] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3206549 has claimed it. 00:07:08.942 [2024-04-15 17:53:57.798753] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:09.879 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (3206561) - No such process 00:07:09.879 ERROR: process (pid: 3206561) is no longer running 00:07:09.879 17:53:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:09.879 17:53:58 -- common/autotest_common.sh@850 -- # return 1 00:07:09.879 17:53:58 -- common/autotest_common.sh@641 -- # es=1 00:07:09.879 17:53:58 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:09.879 17:53:58 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:07:09.879 17:53:58 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:09.879 17:53:58 -- event/cpu_locks.sh@122 -- # locks_exist 3206549 00:07:09.879 17:53:58 -- event/cpu_locks.sh@22 -- # lslocks -p 3206549 00:07:09.879 17:53:58 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:10.448 lslocks: write error 00:07:10.448 17:53:59 -- event/cpu_locks.sh@124 -- # killprocess 3206549 00:07:10.448 17:53:59 -- common/autotest_common.sh@936 -- # '[' -z 3206549 ']' 00:07:10.448 17:53:59 -- common/autotest_common.sh@940 -- # kill -0 3206549 00:07:10.448 17:53:59 -- common/autotest_common.sh@941 -- # uname 00:07:10.448 17:53:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:10.448 17:53:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3206549 00:07:10.448 17:53:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:10.448 17:53:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:10.448 17:53:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3206549' 00:07:10.448 killing process with pid 3206549 00:07:10.448 17:53:59 -- common/autotest_common.sh@955 -- # kill 3206549 00:07:10.448 17:53:59 -- common/autotest_common.sh@960 -- # wait 3206549 00:07:11.018 00:07:11.018 real 0m2.727s 00:07:11.018 user 0m3.300s 00:07:11.018 sys 0m0.973s 00:07:11.018 17:53:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:11.018 17:53:59 -- common/autotest_common.sh@10 -- # set +x 00:07:11.018 ************************************ 00:07:11.018 END TEST locking_app_on_locked_coremask 00:07:11.018 ************************************ 00:07:11.018 17:53:59 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:11.018 17:53:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:11.018 17:53:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:11.018 17:53:59 -- common/autotest_common.sh@10 -- # set +x 00:07:11.018 ************************************ 00:07:11.018 START TEST locking_overlapped_coremask 00:07:11.018 
************************************ 00:07:11.018 17:53:59 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask 00:07:11.018 17:53:59 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3206861 00:07:11.018 17:53:59 -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:07:11.018 17:53:59 -- event/cpu_locks.sh@133 -- # waitforlisten 3206861 /var/tmp/spdk.sock 00:07:11.018 17:53:59 -- common/autotest_common.sh@817 -- # '[' -z 3206861 ']' 00:07:11.018 17:53:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.018 17:53:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:11.018 17:53:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:11.018 17:53:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:11.018 17:53:59 -- common/autotest_common.sh@10 -- # set +x 00:07:11.277 [2024-04-15 17:54:00.043771] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:07:11.277 [2024-04-15 17:54:00.043880] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3206861 ] 00:07:11.277 EAL: No free 2048 kB hugepages reported on node 1 00:07:11.277 [2024-04-15 17:54:00.131471] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:11.277 [2024-04-15 17:54:00.230564] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:11.277 [2024-04-15 17:54:00.230590] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:11.277 [2024-04-15 17:54:00.230593] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.536 17:54:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:11.536 17:54:00 -- common/autotest_common.sh@850 -- # return 0 00:07:11.794 17:54:00 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3207017 00:07:11.794 17:54:00 -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:11.794 17:54:00 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3207017 /var/tmp/spdk2.sock 00:07:11.794 17:54:00 -- common/autotest_common.sh@638 -- # local es=0 00:07:11.794 17:54:00 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 3207017 /var/tmp/spdk2.sock 00:07:11.794 17:54:00 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:07:11.794 17:54:00 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:11.794 17:54:00 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:07:11.794 17:54:00 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:11.794 17:54:00 -- common/autotest_common.sh@641 -- # waitforlisten 3207017 /var/tmp/spdk2.sock 00:07:11.794 17:54:00 -- common/autotest_common.sh@817 -- # '[' -z 3207017 ']' 00:07:11.794 17:54:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:11.795 17:54:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:11.795 17:54:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:07:11.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:11.795 17:54:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:11.795 17:54:00 -- common/autotest_common.sh@10 -- # set +x 00:07:11.795 [2024-04-15 17:54:00.547385] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:07:11.795 [2024-04-15 17:54:00.547498] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3207017 ] 00:07:11.795 EAL: No free 2048 kB hugepages reported on node 1 00:07:11.795 [2024-04-15 17:54:00.644594] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3206861 has claimed it. 00:07:11.795 [2024-04-15 17:54:00.644648] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:12.365 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (3207017) - No such process 00:07:12.365 ERROR: process (pid: 3207017) is no longer running 00:07:12.365 17:54:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:12.365 17:54:01 -- common/autotest_common.sh@850 -- # return 1 00:07:12.365 17:54:01 -- common/autotest_common.sh@641 -- # es=1 00:07:12.365 17:54:01 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:12.365 17:54:01 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:07:12.365 17:54:01 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:12.365 17:54:01 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:12.365 17:54:01 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:12.365 17:54:01 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:12.365 17:54:01 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:12.365 17:54:01 -- event/cpu_locks.sh@141 -- # killprocess 3206861 00:07:12.365 17:54:01 -- common/autotest_common.sh@936 -- # '[' -z 3206861 ']' 00:07:12.365 17:54:01 -- common/autotest_common.sh@940 -- # kill -0 3206861 00:07:12.365 17:54:01 -- common/autotest_common.sh@941 -- # uname 00:07:12.365 17:54:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:12.365 17:54:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3206861 00:07:12.624 17:54:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:12.624 17:54:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:12.624 17:54:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3206861' 00:07:12.624 killing process with pid 3206861 00:07:12.624 17:54:01 -- common/autotest_common.sh@955 -- # kill 3206861 00:07:12.624 17:54:01 -- common/autotest_common.sh@960 -- # wait 3206861 00:07:12.884 00:07:12.884 real 0m1.753s 00:07:12.884 user 0m4.676s 00:07:12.884 sys 0m0.549s 00:07:12.884 17:54:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:12.884 17:54:01 -- common/autotest_common.sh@10 -- # set +x 00:07:12.884 ************************************ 00:07:12.884 END TEST locking_overlapped_coremask 00:07:12.884 ************************************ 00:07:12.884 17:54:01 -- event/cpu_locks.sh@172 -- # 
run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:12.884 17:54:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:12.884 17:54:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:12.884 17:54:01 -- common/autotest_common.sh@10 -- # set +x 00:07:13.144 ************************************ 00:07:13.144 START TEST locking_overlapped_coremask_via_rpc 00:07:13.144 ************************************ 00:07:13.144 17:54:01 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask_via_rpc 00:07:13.144 17:54:01 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3207255 00:07:13.144 17:54:01 -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:13.144 17:54:01 -- event/cpu_locks.sh@149 -- # waitforlisten 3207255 /var/tmp/spdk.sock 00:07:13.144 17:54:01 -- common/autotest_common.sh@817 -- # '[' -z 3207255 ']' 00:07:13.144 17:54:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.144 17:54:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:13.144 17:54:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:13.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:13.144 17:54:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:13.144 17:54:01 -- common/autotest_common.sh@10 -- # set +x 00:07:13.144 [2024-04-15 17:54:01.940453] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:07:13.144 [2024-04-15 17:54:01.940555] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3207255 ] 00:07:13.144 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.144 [2024-04-15 17:54:02.016285] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:13.144 [2024-04-15 17:54:02.016339] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:13.404 [2024-04-15 17:54:02.114931] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:13.404 [2024-04-15 17:54:02.114984] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:13.404 [2024-04-15 17:54:02.114987] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.662 17:54:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:13.662 17:54:02 -- common/autotest_common.sh@850 -- # return 0 00:07:13.662 17:54:02 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3207277 00:07:13.663 17:54:02 -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:13.663 17:54:02 -- event/cpu_locks.sh@153 -- # waitforlisten 3207277 /var/tmp/spdk2.sock 00:07:13.663 17:54:02 -- common/autotest_common.sh@817 -- # '[' -z 3207277 ']' 00:07:13.663 17:54:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:13.663 17:54:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:13.663 17:54:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:13.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:13.663 17:54:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:13.663 17:54:02 -- common/autotest_common.sh@10 -- # set +x 00:07:13.663 [2024-04-15 17:54:02.482670] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:07:13.663 [2024-04-15 17:54:02.482764] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3207277 ] 00:07:13.663 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.663 [2024-04-15 17:54:02.600620] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:13.663 [2024-04-15 17:54:02.600672] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:13.922 [2024-04-15 17:54:02.791566] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:13.922 [2024-04-15 17:54:02.791621] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:07:13.922 [2024-04-15 17:54:02.791623] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:14.858 17:54:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:14.858 17:54:03 -- common/autotest_common.sh@850 -- # return 0 00:07:14.858 17:54:03 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:14.858 17:54:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:14.858 17:54:03 -- common/autotest_common.sh@10 -- # set +x 00:07:14.858 17:54:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:14.858 17:54:03 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:14.858 17:54:03 -- common/autotest_common.sh@638 -- # local es=0 00:07:14.858 17:54:03 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:14.858 17:54:03 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:07:14.858 17:54:03 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:14.858 17:54:03 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:07:14.858 17:54:03 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:14.858 17:54:03 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:14.858 17:54:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:14.858 17:54:03 -- common/autotest_common.sh@10 -- # set +x 00:07:14.858 [2024-04-15 17:54:03.585165] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3207255 has claimed it. 
00:07:14.858 request: 00:07:14.858 { 00:07:14.858 "method": "framework_enable_cpumask_locks", 00:07:14.858 "req_id": 1 00:07:14.858 } 00:07:14.858 Got JSON-RPC error response 00:07:14.858 response: 00:07:14.858 { 00:07:14.858 "code": -32603, 00:07:14.858 "message": "Failed to claim CPU core: 2" 00:07:14.858 } 00:07:14.858 17:54:03 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:07:14.858 17:54:03 -- common/autotest_common.sh@641 -- # es=1 00:07:14.859 17:54:03 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:14.859 17:54:03 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:07:14.859 17:54:03 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:14.859 17:54:03 -- event/cpu_locks.sh@158 -- # waitforlisten 3207255 /var/tmp/spdk.sock 00:07:14.859 17:54:03 -- common/autotest_common.sh@817 -- # '[' -z 3207255 ']' 00:07:14.859 17:54:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:14.859 17:54:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:14.859 17:54:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:14.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:14.859 17:54:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:14.859 17:54:03 -- common/autotest_common.sh@10 -- # set +x 00:07:15.118 17:54:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:15.118 17:54:03 -- common/autotest_common.sh@850 -- # return 0 00:07:15.118 17:54:03 -- event/cpu_locks.sh@159 -- # waitforlisten 3207277 /var/tmp/spdk2.sock 00:07:15.118 17:54:03 -- common/autotest_common.sh@817 -- # '[' -z 3207277 ']' 00:07:15.118 17:54:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:15.118 17:54:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:15.118 17:54:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:15.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
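The request/response pair above is the raw JSON-RPC exchange behind rpc_cmd: framework_enable_cpumask_locks on the second target fails with -32603 because core 2 is still claimed by pid 3207255. The same call can be issued by hand with SPDK's rpc.py client, to which rpc_cmd in these tests forwards the method name (a sketch, with $SPDK_DIR as above):

  # succeeds on the first target, which owns its cores
  $SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks
  # on the overlapping second target this returns the error shown above:
  #   {"code": -32603, "message": "Failed to claim CPU core: 2"}
  $SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks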
00:07:15.118 17:54:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:15.118 17:54:03 -- common/autotest_common.sh@10 -- # set +x 00:07:15.687 17:54:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:15.687 17:54:04 -- common/autotest_common.sh@850 -- # return 0 00:07:15.687 17:54:04 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:15.687 17:54:04 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:15.687 17:54:04 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:15.687 17:54:04 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:15.687 00:07:15.687 real 0m2.571s 00:07:15.687 user 0m1.544s 00:07:15.687 sys 0m0.239s 00:07:15.687 17:54:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:15.687 17:54:04 -- common/autotest_common.sh@10 -- # set +x 00:07:15.687 ************************************ 00:07:15.687 END TEST locking_overlapped_coremask_via_rpc 00:07:15.687 ************************************ 00:07:15.687 17:54:04 -- event/cpu_locks.sh@174 -- # cleanup 00:07:15.687 17:54:04 -- event/cpu_locks.sh@15 -- # [[ -z 3207255 ]] 00:07:15.687 17:54:04 -- event/cpu_locks.sh@15 -- # killprocess 3207255 00:07:15.687 17:54:04 -- common/autotest_common.sh@936 -- # '[' -z 3207255 ']' 00:07:15.687 17:54:04 -- common/autotest_common.sh@940 -- # kill -0 3207255 00:07:15.687 17:54:04 -- common/autotest_common.sh@941 -- # uname 00:07:15.687 17:54:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:15.687 17:54:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3207255 00:07:15.687 17:54:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:15.687 17:54:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:15.687 17:54:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3207255' 00:07:15.687 killing process with pid 3207255 00:07:15.687 17:54:04 -- common/autotest_common.sh@955 -- # kill 3207255 00:07:15.687 17:54:04 -- common/autotest_common.sh@960 -- # wait 3207255 00:07:16.255 17:54:04 -- event/cpu_locks.sh@16 -- # [[ -z 3207277 ]] 00:07:16.255 17:54:04 -- event/cpu_locks.sh@16 -- # killprocess 3207277 00:07:16.255 17:54:04 -- common/autotest_common.sh@936 -- # '[' -z 3207277 ']' 00:07:16.255 17:54:04 -- common/autotest_common.sh@940 -- # kill -0 3207277 00:07:16.255 17:54:04 -- common/autotest_common.sh@941 -- # uname 00:07:16.255 17:54:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:16.255 17:54:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3207277 00:07:16.255 17:54:04 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:07:16.255 17:54:04 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:07:16.255 17:54:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3207277' 00:07:16.255 killing process with pid 3207277 00:07:16.255 17:54:04 -- common/autotest_common.sh@955 -- # kill 3207277 00:07:16.255 17:54:04 -- common/autotest_common.sh@960 -- # wait 3207277 00:07:16.514 17:54:05 -- event/cpu_locks.sh@18 -- # rm -f 00:07:16.514 17:54:05 -- event/cpu_locks.sh@1 -- # cleanup 00:07:16.514 17:54:05 -- event/cpu_locks.sh@15 -- # [[ -z 3207255 ]] 00:07:16.514 17:54:05 -- event/cpu_locks.sh@15 -- # killprocess 3207255 
00:07:16.514 17:54:05 -- common/autotest_common.sh@936 -- # '[' -z 3207255 ']' 00:07:16.514 17:54:05 -- common/autotest_common.sh@940 -- # kill -0 3207255 00:07:16.514 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (3207255) - No such process 00:07:16.514 17:54:05 -- common/autotest_common.sh@963 -- # echo 'Process with pid 3207255 is not found' 00:07:16.514 Process with pid 3207255 is not found 00:07:16.514 17:54:05 -- event/cpu_locks.sh@16 -- # [[ -z 3207277 ]] 00:07:16.514 17:54:05 -- event/cpu_locks.sh@16 -- # killprocess 3207277 00:07:16.514 17:54:05 -- common/autotest_common.sh@936 -- # '[' -z 3207277 ']' 00:07:16.514 17:54:05 -- common/autotest_common.sh@940 -- # kill -0 3207277 00:07:16.514 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (3207277) - No such process 00:07:16.514 17:54:05 -- common/autotest_common.sh@963 -- # echo 'Process with pid 3207277 is not found' 00:07:16.514 Process with pid 3207277 is not found 00:07:16.514 17:54:05 -- event/cpu_locks.sh@18 -- # rm -f 00:07:16.514 00:07:16.514 real 0m18.979s 00:07:16.514 user 0m34.167s 00:07:16.514 sys 0m6.801s 00:07:16.514 17:54:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:16.514 17:54:05 -- common/autotest_common.sh@10 -- # set +x 00:07:16.514 ************************************ 00:07:16.514 END TEST cpu_locks 00:07:16.514 ************************************ 00:07:16.514 00:07:16.514 real 0m49.872s 00:07:16.514 user 1m38.206s 00:07:16.514 sys 0m12.629s 00:07:16.514 17:54:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:16.514 17:54:05 -- common/autotest_common.sh@10 -- # set +x 00:07:16.514 ************************************ 00:07:16.514 END TEST event 00:07:16.514 ************************************ 00:07:16.514 17:54:05 -- spdk/autotest.sh@178 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:16.514 17:54:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:16.514 17:54:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:16.514 17:54:05 -- common/autotest_common.sh@10 -- # set +x 00:07:16.773 ************************************ 00:07:16.773 START TEST thread 00:07:16.773 ************************************ 00:07:16.773 17:54:05 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:16.773 * Looking for test storage... 00:07:16.773 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:07:16.773 17:54:05 -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:16.773 17:54:05 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:07:16.773 17:54:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:16.773 17:54:05 -- common/autotest_common.sh@10 -- # set +x 00:07:16.773 ************************************ 00:07:16.773 START TEST thread_poller_perf 00:07:16.773 ************************************ 00:07:16.774 17:54:05 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:16.774 [2024-04-15 17:54:05.691766] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
00:07:16.774 [2024-04-15 17:54:05.691843] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3207799 ] 00:07:17.032 EAL: No free 2048 kB hugepages reported on node 1 00:07:17.032 [2024-04-15 17:54:05.763511] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.032 [2024-04-15 17:54:05.858622] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.032 [2024-04-15 17:54:05.858727] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:07:17.032 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:18.410 ====================================== 00:07:18.410 busy:2711148324 (cyc) 00:07:18.410 total_run_count: 290000 00:07:18.410 tsc_hz: 2700000000 (cyc) 00:07:18.410 ====================================== 00:07:18.410 poller_cost: 9348 (cyc), 3462 (nsec) 00:07:18.410 00:07:18.410 real 0m1.272s 00:07:18.410 user 0m1.164s 00:07:18.410 sys 0m0.102s 00:07:18.410 17:54:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:18.410 17:54:06 -- common/autotest_common.sh@10 -- # set +x 00:07:18.410 ************************************ 00:07:18.410 END TEST thread_poller_perf 00:07:18.411 ************************************ 00:07:18.411 17:54:06 -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:18.411 17:54:06 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:07:18.411 17:54:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:18.411 17:54:06 -- common/autotest_common.sh@10 -- # set +x 00:07:18.411 ************************************ 00:07:18.411 START TEST thread_poller_perf 00:07:18.411 ************************************ 00:07:18.411 17:54:07 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:18.411 [2024-04-15 17:54:07.095451] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:07:18.411 [2024-04-15 17:54:07.095592] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3207959 ] 00:07:18.411 EAL: No free 2048 kB hugepages reported on node 1 00:07:18.411 [2024-04-15 17:54:07.185055] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.411 [2024-04-15 17:54:07.278707] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.411 [2024-04-15 17:54:07.278815] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:07:18.411 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:07:19.788 ====================================== 00:07:19.788 busy:2702625252 (cyc) 00:07:19.788 total_run_count: 3822000 00:07:19.788 tsc_hz: 2700000000 (cyc) 00:07:19.788 ====================================== 00:07:19.788 poller_cost: 707 (cyc), 261 (nsec) 00:07:19.788 00:07:19.788 real 0m1.285s 00:07:19.788 user 0m1.173s 00:07:19.788 sys 0m0.105s 00:07:19.788 17:54:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:19.788 17:54:08 -- common/autotest_common.sh@10 -- # set +x 00:07:19.788 ************************************ 00:07:19.788 END TEST thread_poller_perf 00:07:19.788 ************************************ 00:07:19.788 17:54:08 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:19.788 00:07:19.788 real 0m2.862s 00:07:19.788 user 0m2.440s 00:07:19.788 sys 0m0.404s 00:07:19.788 17:54:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:19.788 17:54:08 -- common/autotest_common.sh@10 -- # set +x 00:07:19.788 ************************************ 00:07:19.788 END TEST thread 00:07:19.788 ************************************ 00:07:19.788 17:54:08 -- spdk/autotest.sh@179 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:07:19.789 17:54:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:19.789 17:54:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:19.789 17:54:08 -- common/autotest_common.sh@10 -- # set +x 00:07:19.789 ************************************ 00:07:19.789 START TEST accel 00:07:19.789 ************************************ 00:07:19.789 17:54:08 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:07:19.789 * Looking for test storage... 00:07:19.789 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:19.789 17:54:08 -- accel/accel.sh@81 -- # declare -A expected_opcs 00:07:19.789 17:54:08 -- accel/accel.sh@82 -- # get_expected_opcs 00:07:19.789 17:54:08 -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:19.789 17:54:08 -- accel/accel.sh@62 -- # spdk_tgt_pid=3208523 00:07:19.789 17:54:08 -- accel/accel.sh@63 -- # waitforlisten 3208523 00:07:19.789 17:54:08 -- common/autotest_common.sh@817 -- # '[' -z 3208523 ']' 00:07:19.789 17:54:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:19.789 17:54:08 -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:07:19.789 17:54:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:19.789 17:54:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:19.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:19.789 17:54:08 -- accel/accel.sh@61 -- # build_accel_config 00:07:19.789 17:54:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:19.789 17:54:08 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:19.789 17:54:08 -- common/autotest_common.sh@10 -- # set +x 00:07:19.789 17:54:08 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:19.789 17:54:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.789 17:54:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.789 17:54:08 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:19.789 17:54:08 -- accel/accel.sh@40 -- # local IFS=, 00:07:19.789 17:54:08 -- accel/accel.sh@41 -- # jq -r . 
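Both poller_perf summaries above reduce to a single division: poller_cost is busy TSC cycles over total_run_count, converted to nanoseconds through the 2.7 GHz TSC. The reported figures can be reproduced with shell arithmetic (a sketch of the math, ignoring the tool's exact rounding):

  # run 1: 1 us period, 290000 polls
  echo $(( 2711148324 / 290000 ))              # 9348 cycles per poll
  echo $(( 9348 * 1000000000 / 2700000000 ))   # 3462 ns
  # run 2: 0 us period (busy loop), 3822000 polls
  echo $(( 2702625252 / 3822000 ))             # 707 cycles per poll
  echo $(( 707 * 1000000000 / 2700000000 ))    # 261 ns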
00:07:19.789 [2024-04-15 17:54:08.616893] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:07:19.789 [2024-04-15 17:54:08.616986] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3208523 ] 00:07:19.789 EAL: No free 2048 kB hugepages reported on node 1 00:07:19.789 [2024-04-15 17:54:08.687643] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.047 [2024-04-15 17:54:08.780009] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.332 17:54:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:20.332 17:54:09 -- common/autotest_common.sh@850 -- # return 0 00:07:20.332 17:54:09 -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:07:20.332 17:54:09 -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:07:20.332 17:54:09 -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:07:20.332 17:54:09 -- accel/accel.sh@68 -- # [[ -n '' ]] 00:07:20.332 17:54:09 -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:07:20.332 17:54:09 -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:07:20.332 17:54:09 -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:07:20.332 17:54:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:20.332 17:54:09 -- common/autotest_common.sh@10 -- # set +x 00:07:20.332 17:54:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:20.332 17:54:09 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:20.332 17:54:09 -- accel/accel.sh@72 -- # IFS== 00:07:20.332 17:54:09 -- accel/accel.sh@72 -- # read -r opc module 00:07:20.332 17:54:09 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:20.332 17:54:09 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:20.332 17:54:09 -- accel/accel.sh@72 -- # IFS== 00:07:20.332 17:54:09 -- accel/accel.sh@72 -- # read -r opc module 00:07:20.332 17:54:09 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:20.332 17:54:09 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:20.332 17:54:09 -- accel/accel.sh@72 -- # IFS== 00:07:20.332 17:54:09 -- accel/accel.sh@72 -- # read -r opc module 00:07:20.332 17:54:09 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:20.332 17:54:09 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:20.332 17:54:09 -- accel/accel.sh@72 -- # IFS== 00:07:20.332 17:54:09 -- accel/accel.sh@72 -- # read -r opc module 00:07:20.332 17:54:09 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:20.332 17:54:09 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:20.332 17:54:09 -- accel/accel.sh@72 -- # IFS== 00:07:20.332 17:54:09 -- accel/accel.sh@72 -- # read -r opc module 00:07:20.332 17:54:09 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:20.332 17:54:09 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:20.332 17:54:09 -- accel/accel.sh@72 -- # IFS== 00:07:20.332 17:54:09 -- accel/accel.sh@72 -- # read -r opc module 00:07:20.332 17:54:09 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:20.332 17:54:09 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:20.332 17:54:09 -- accel/accel.sh@72 -- # IFS== 00:07:20.332 17:54:09 -- accel/accel.sh@72 -- # read -r opc module 00:07:20.332 17:54:09 -- accel/accel.sh@73 -- # 
expected_opcs["$opc"]=software 00:07:20.332 17:54:09 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:20.332 17:54:09 -- accel/accel.sh@72 -- # IFS== 00:07:20.332 17:54:09 -- accel/accel.sh@72 -- # read -r opc module 00:07:20.332 17:54:09 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:20.332 17:54:09 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:20.332 17:54:09 -- accel/accel.sh@72 -- # IFS== 00:07:20.332 17:54:09 -- accel/accel.sh@72 -- # read -r opc module 00:07:20.332 17:54:09 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:20.332 17:54:09 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:20.332 17:54:09 -- accel/accel.sh@72 -- # IFS== 00:07:20.332 17:54:09 -- accel/accel.sh@72 -- # read -r opc module 00:07:20.332 17:54:09 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:20.332 17:54:09 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:20.332 17:54:09 -- accel/accel.sh@72 -- # IFS== 00:07:20.332 17:54:09 -- accel/accel.sh@72 -- # read -r opc module 00:07:20.332 17:54:09 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:20.332 17:54:09 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:20.332 17:54:09 -- accel/accel.sh@72 -- # IFS== 00:07:20.332 17:54:09 -- accel/accel.sh@72 -- # read -r opc module 00:07:20.332 17:54:09 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:20.332 17:54:09 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:20.332 17:54:09 -- accel/accel.sh@72 -- # IFS== 00:07:20.332 17:54:09 -- accel/accel.sh@72 -- # read -r opc module 00:07:20.332 17:54:09 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:20.332 17:54:09 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:20.332 17:54:09 -- accel/accel.sh@72 -- # IFS== 00:07:20.332 17:54:09 -- accel/accel.sh@72 -- # read -r opc module 00:07:20.332 17:54:09 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:20.332 17:54:09 -- accel/accel.sh@75 -- # killprocess 3208523 00:07:20.332 17:54:09 -- common/autotest_common.sh@936 -- # '[' -z 3208523 ']' 00:07:20.332 17:54:09 -- common/autotest_common.sh@940 -- # kill -0 3208523 00:07:20.332 17:54:09 -- common/autotest_common.sh@941 -- # uname 00:07:20.332 17:54:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:20.332 17:54:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3208523 00:07:20.332 17:54:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:20.332 17:54:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:20.332 17:54:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3208523' 00:07:20.332 killing process with pid 3208523 00:07:20.332 17:54:09 -- common/autotest_common.sh@955 -- # kill 3208523 00:07:20.332 17:54:09 -- common/autotest_common.sh@960 -- # wait 3208523 00:07:20.899 17:54:09 -- accel/accel.sh@76 -- # trap - ERR 00:07:20.899 17:54:09 -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:07:20.899 17:54:09 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:20.899 17:54:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:20.899 17:54:09 -- common/autotest_common.sh@10 -- # set +x 00:07:20.899 17:54:09 -- common/autotest_common.sh@1111 -- # accel_perf -h 00:07:20.899 17:54:09 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:07:20.899 17:54:09 -- accel/accel.sh@12 -- # 
build_accel_config 00:07:20.899 17:54:09 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:20.899 17:54:09 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:20.899 17:54:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:20.899 17:54:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:20.899 17:54:09 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:20.899 17:54:09 -- accel/accel.sh@40 -- # local IFS=, 00:07:20.899 17:54:09 -- accel/accel.sh@41 -- # jq -r . 00:07:20.899 17:54:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:20.899 17:54:09 -- common/autotest_common.sh@10 -- # set +x 00:07:20.899 17:54:09 -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:07:20.899 17:54:09 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:20.899 17:54:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:20.899 17:54:09 -- common/autotest_common.sh@10 -- # set +x 00:07:21.158 ************************************ 00:07:21.158 START TEST accel_missing_filename 00:07:21.158 ************************************ 00:07:21.158 17:54:09 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress 00:07:21.158 17:54:09 -- common/autotest_common.sh@638 -- # local es=0 00:07:21.158 17:54:09 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress 00:07:21.158 17:54:09 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:07:21.158 17:54:09 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:21.158 17:54:09 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:07:21.158 17:54:09 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:21.158 17:54:09 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress 00:07:21.158 17:54:09 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:07:21.158 17:54:09 -- accel/accel.sh@12 -- # build_accel_config 00:07:21.158 17:54:09 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:21.158 17:54:09 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:21.158 17:54:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:21.158 17:54:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:21.158 17:54:09 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:21.158 17:54:09 -- accel/accel.sh@40 -- # local IFS=, 00:07:21.158 17:54:09 -- accel/accel.sh@41 -- # jq -r . 00:07:21.158 [2024-04-15 17:54:09.901620] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:07:21.158 [2024-04-15 17:54:09.901695] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3208962 ] 00:07:21.158 EAL: No free 2048 kB hugepages reported on node 1 00:07:21.158 [2024-04-15 17:54:09.969894] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.158 [2024-04-15 17:54:10.072878] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.158 [2024-04-15 17:54:10.073574] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:07:21.416 [2024-04-15 17:54:10.136887] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:21.416 [2024-04-15 17:54:10.226195] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:07:21.416 A filename is required. 
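The "A filename is required." failure above is accel_perf's argument check firing before app start: a compress workload needs -l to name its uncompressed input. The positive form of the same command, using the test/accel/bib input that the compress_verify case below reads (and assuming a compress-capable accel module is built in):

  $SPDK_DIR/build/examples/accel_perf -t 1 -w compress \
      -l $SPDK_DIR/test/accel/bib
  # adding -y here trips the next negative test instead:
  #   "Compression does not support the verify option, aborting."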
00:07:21.416 17:54:10 -- common/autotest_common.sh@641 -- # es=234 00:07:21.416 17:54:10 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:21.416 17:54:10 -- common/autotest_common.sh@650 -- # es=106 00:07:21.416 17:54:10 -- common/autotest_common.sh@651 -- # case "$es" in 00:07:21.416 17:54:10 -- common/autotest_common.sh@658 -- # es=1 00:07:21.416 17:54:10 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:21.416 00:07:21.416 real 0m0.424s 00:07:21.416 user 0m0.311s 00:07:21.416 sys 0m0.154s 00:07:21.416 17:54:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:21.416 17:54:10 -- common/autotest_common.sh@10 -- # set +x 00:07:21.416 ************************************ 00:07:21.416 END TEST accel_missing_filename 00:07:21.416 ************************************ 00:07:21.416 17:54:10 -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:21.416 17:54:10 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:07:21.416 17:54:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:21.416 17:54:10 -- common/autotest_common.sh@10 -- # set +x 00:07:21.676 ************************************ 00:07:21.676 START TEST accel_compress_verify 00:07:21.676 ************************************ 00:07:21.677 17:54:10 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:21.677 17:54:10 -- common/autotest_common.sh@638 -- # local es=0 00:07:21.677 17:54:10 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:21.677 17:54:10 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:07:21.677 17:54:10 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:21.677 17:54:10 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:07:21.677 17:54:10 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:21.677 17:54:10 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:21.677 17:54:10 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:21.677 17:54:10 -- accel/accel.sh@12 -- # build_accel_config 00:07:21.677 17:54:10 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:21.677 17:54:10 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:21.677 17:54:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:21.677 17:54:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:21.677 17:54:10 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:21.677 17:54:10 -- accel/accel.sh@40 -- # local IFS=, 00:07:21.677 17:54:10 -- accel/accel.sh@41 -- # jq -r . 00:07:21.677 [2024-04-15 17:54:10.451979] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
00:07:21.677 [2024-04-15 17:54:10.452045] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3209005 ] 00:07:21.677 EAL: No free 2048 kB hugepages reported on node 1 00:07:21.677 [2024-04-15 17:54:10.518915] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.677 [2024-04-15 17:54:10.616361] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.677 [2024-04-15 17:54:10.617073] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:07:21.937 [2024-04-15 17:54:10.678663] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:21.937 [2024-04-15 17:54:10.765642] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:07:21.937 00:07:21.937 Compression does not support the verify option, aborting. 00:07:21.937 17:54:10 -- common/autotest_common.sh@641 -- # es=161 00:07:21.937 17:54:10 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:21.937 17:54:10 -- common/autotest_common.sh@650 -- # es=33 00:07:21.937 17:54:10 -- common/autotest_common.sh@651 -- # case "$es" in 00:07:21.937 17:54:10 -- common/autotest_common.sh@658 -- # es=1 00:07:21.937 17:54:10 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:21.937 00:07:21.937 real 0m0.415s 00:07:21.937 user 0m0.301s 00:07:21.937 sys 0m0.149s 00:07:21.937 17:54:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:21.937 17:54:10 -- common/autotest_common.sh@10 -- # set +x 00:07:21.937 ************************************ 00:07:21.937 END TEST accel_compress_verify 00:07:21.937 ************************************ 00:07:21.937 17:54:10 -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:07:21.937 17:54:10 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:21.937 17:54:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:21.937 17:54:10 -- common/autotest_common.sh@10 -- # set +x 00:07:22.195 ************************************ 00:07:22.195 START TEST accel_wrong_workload 00:07:22.195 ************************************ 00:07:22.195 17:54:10 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w foobar 00:07:22.195 17:54:10 -- common/autotest_common.sh@638 -- # local es=0 00:07:22.195 17:54:10 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:07:22.195 17:54:10 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:07:22.195 17:54:10 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:22.195 17:54:10 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:07:22.195 17:54:10 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:22.195 17:54:10 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w foobar 00:07:22.195 17:54:10 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:07:22.195 17:54:10 -- accel/accel.sh@12 -- # build_accel_config 00:07:22.195 17:54:10 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:22.195 17:54:10 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:22.195 17:54:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:22.195 17:54:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:22.195 17:54:10 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:22.195 17:54:10 -- accel/accel.sh@40 -- # local IFS=, 00:07:22.195 17:54:10 -- accel/accel.sh@41 -- # 
jq -r . 00:07:22.195 Unsupported workload type: foobar 00:07:22.195 [2024-04-15 17:54:10.998273] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:07:22.195 accel_perf options: 00:07:22.195 [-h help message] 00:07:22.195 [-q queue depth per core] 00:07:22.195 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:22.195 [-T number of threads per core 00:07:22.195 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:22.195 [-t time in seconds] 00:07:22.195 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:22.195 [ dif_verify, , dif_generate, dif_generate_copy 00:07:22.195 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:22.195 [-l for compress/decompress workloads, name of uncompressed input file 00:07:22.195 [-S for crc32c workload, use this seed value (default 0) 00:07:22.195 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:22.195 [-f for fill workload, use this BYTE value (default 255) 00:07:22.195 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:22.195 [-y verify result if this switch is on] 00:07:22.195 [-a tasks to allocate per core (default: same value as -q)] 00:07:22.195 Can be used to spread operations across a wider range of memory. 00:07:22.195 17:54:11 -- common/autotest_common.sh@641 -- # es=1 00:07:22.195 17:54:11 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:22.195 17:54:11 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:07:22.195 17:54:11 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:22.195 00:07:22.195 real 0m0.025s 00:07:22.195 user 0m0.012s 00:07:22.195 sys 0m0.013s 00:07:22.195 17:54:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:22.195 17:54:11 -- common/autotest_common.sh@10 -- # set +x 00:07:22.195 ************************************ 00:07:22.195 END TEST accel_wrong_workload 00:07:22.195 ************************************ 00:07:22.195 17:54:11 -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:07:22.195 17:54:11 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:07:22.195 17:54:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:22.195 17:54:11 -- common/autotest_common.sh@10 -- # set +x 00:07:22.195 Error: writing output failed: Broken pipe 00:07:22.454 ************************************ 00:07:22.454 START TEST accel_negative_buffers 00:07:22.454 ************************************ 00:07:22.454 17:54:11 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:07:22.454 17:54:11 -- common/autotest_common.sh@638 -- # local es=0 00:07:22.454 17:54:11 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:07:22.454 17:54:11 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:07:22.454 17:54:11 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:22.454 17:54:11 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:07:22.454 17:54:11 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:22.454 17:54:11 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w xor -y -x -1 00:07:22.454 17:54:11 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 
1 -w xor -y -x -1 00:07:22.454 17:54:11 -- accel/accel.sh@12 -- # build_accel_config 00:07:22.454 17:54:11 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:22.454 17:54:11 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:22.454 17:54:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:22.454 17:54:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:22.454 17:54:11 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:22.454 17:54:11 -- accel/accel.sh@40 -- # local IFS=, 00:07:22.454 17:54:11 -- accel/accel.sh@41 -- # jq -r . 00:07:22.454 -x option must be non-negative. 00:07:22.454 [2024-04-15 17:54:11.219746] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:07:22.454 accel_perf options: 00:07:22.454 [-h help message] 00:07:22.454 [-q queue depth per core] 00:07:22.454 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:22.454 [-T number of threads per core 00:07:22.454 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:22.454 [-t time in seconds] 00:07:22.454 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:22.454 [ dif_verify, , dif_generate, dif_generate_copy 00:07:22.454 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:22.454 [-l for compress/decompress workloads, name of uncompressed input file 00:07:22.454 [-S for crc32c workload, use this seed value (default 0) 00:07:22.454 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:22.454 [-f for fill workload, use this BYTE value (default 255) 00:07:22.454 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:22.454 [-y verify result if this switch is on] 00:07:22.454 [-a tasks to allocate per core (default: same value as -q)] 00:07:22.455 Can be used to spread operations across a wider range of memory. 
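The "-x option must be non-negative." rejection above is the bounds check on the xor source-buffer count; per the option listing, the minimum is 2. A valid variant of the same invocation (a sketch; the harness additionally feeds its JSON accel config through -c /dev/fd/62, which a bare software-module run can omit):

  # xor across 3 source buffers, verifying the result
  $SPDK_DIR/build/examples/accel_perf -t 1 -w xor -y -x 3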
00:07:22.455 17:54:11 -- common/autotest_common.sh@641 -- # es=1 00:07:22.455 17:54:11 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:22.455 17:54:11 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:07:22.455 17:54:11 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:22.455 00:07:22.455 real 0m0.043s 00:07:22.455 user 0m0.019s 00:07:22.455 sys 0m0.024s 00:07:22.455 17:54:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:22.455 17:54:11 -- common/autotest_common.sh@10 -- # set +x 00:07:22.455 ************************************ 00:07:22.455 END TEST accel_negative_buffers 00:07:22.455 ************************************ 00:07:22.455 17:54:11 -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:07:22.455 17:54:11 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:22.455 17:54:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:22.455 17:54:11 -- common/autotest_common.sh@10 -- # set +x 00:07:22.455 Error: writing output failed: Broken pipe 00:07:22.455 ************************************ 00:07:22.455 START TEST accel_crc32c 00:07:22.455 ************************************ 00:07:22.455 17:54:11 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -S 32 -y 00:07:22.455 17:54:11 -- accel/accel.sh@16 -- # local accel_opc 00:07:22.455 17:54:11 -- accel/accel.sh@17 -- # local accel_module 00:07:22.455 17:54:11 -- accel/accel.sh@19 -- # IFS=: 00:07:22.455 17:54:11 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:22.455 17:54:11 -- accel/accel.sh@19 -- # read -r var val 00:07:22.455 17:54:11 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:07:22.455 17:54:11 -- accel/accel.sh@12 -- # build_accel_config 00:07:22.455 17:54:11 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:22.455 17:54:11 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:22.455 17:54:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:22.455 17:54:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:22.455 17:54:11 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:22.455 17:54:11 -- accel/accel.sh@40 -- # local IFS=, 00:07:22.455 17:54:11 -- accel/accel.sh@41 -- # jq -r . 00:07:22.455 [2024-04-15 17:54:11.389491] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
00:07:22.455 [2024-04-15 17:54:11.389557] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3209209 ] 00:07:22.713 EAL: No free 2048 kB hugepages reported on node 1 00:07:22.713 [2024-04-15 17:54:11.465762] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.713 [2024-04-15 17:54:11.560344] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.713 [2024-04-15 17:54:11.561002] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:07:22.713 17:54:11 -- accel/accel.sh@20 -- # val= 00:07:22.713 17:54:11 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.713 17:54:11 -- accel/accel.sh@19 -- # IFS=: 00:07:22.713 17:54:11 -- accel/accel.sh@19 -- # read -r var val 00:07:22.713 17:54:11 -- accel/accel.sh@20 -- # val= 00:07:22.713 17:54:11 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.713 17:54:11 -- accel/accel.sh@19 -- # IFS=: 00:07:22.713 17:54:11 -- accel/accel.sh@19 -- # read -r var val 00:07:22.713 17:54:11 -- accel/accel.sh@20 -- # val=0x1 00:07:22.713 17:54:11 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.713 17:54:11 -- accel/accel.sh@19 -- # IFS=: 00:07:22.713 17:54:11 -- accel/accel.sh@19 -- # read -r var val 00:07:22.713 17:54:11 -- accel/accel.sh@20 -- # val= 00:07:22.713 17:54:11 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.713 17:54:11 -- accel/accel.sh@19 -- # IFS=: 00:07:22.713 17:54:11 -- accel/accel.sh@19 -- # read -r var val 00:07:22.713 17:54:11 -- accel/accel.sh@20 -- # val= 00:07:22.713 17:54:11 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.714 17:54:11 -- accel/accel.sh@19 -- # IFS=: 00:07:22.714 17:54:11 -- accel/accel.sh@19 -- # read -r var val 00:07:22.714 17:54:11 -- accel/accel.sh@20 -- # val=crc32c 00:07:22.714 17:54:11 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.714 17:54:11 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:22.714 17:54:11 -- accel/accel.sh@19 -- # IFS=: 00:07:22.714 17:54:11 -- accel/accel.sh@19 -- # read -r var val 00:07:22.714 17:54:11 -- accel/accel.sh@20 -- # val=32 00:07:22.714 17:54:11 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.714 17:54:11 -- accel/accel.sh@19 -- # IFS=: 00:07:22.714 17:54:11 -- accel/accel.sh@19 -- # read -r var val 00:07:22.714 17:54:11 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:22.714 17:54:11 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.714 17:54:11 -- accel/accel.sh@19 -- # IFS=: 00:07:22.714 17:54:11 -- accel/accel.sh@19 -- # read -r var val 00:07:22.714 17:54:11 -- accel/accel.sh@20 -- # val= 00:07:22.714 17:54:11 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.714 17:54:11 -- accel/accel.sh@19 -- # IFS=: 00:07:22.714 17:54:11 -- accel/accel.sh@19 -- # read -r var val 00:07:22.714 17:54:11 -- accel/accel.sh@20 -- # val=software 00:07:22.714 17:54:11 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.714 17:54:11 -- accel/accel.sh@22 -- # accel_module=software 00:07:22.714 17:54:11 -- accel/accel.sh@19 -- # IFS=: 00:07:22.714 17:54:11 -- accel/accel.sh@19 -- # read -r var val 00:07:22.714 17:54:11 -- accel/accel.sh@20 -- # val=32 00:07:22.714 17:54:11 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.714 17:54:11 -- accel/accel.sh@19 -- # IFS=: 00:07:22.714 17:54:11 -- accel/accel.sh@19 -- # read -r var val 00:07:22.714 17:54:11 -- accel/accel.sh@20 -- # val=32 00:07:22.714 17:54:11 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.714 17:54:11 -- 
accel/accel.sh@19 -- # IFS=: 00:07:22.714 17:54:11 -- accel/accel.sh@19 -- # read -r var val 00:07:22.714 17:54:11 -- accel/accel.sh@20 -- # val=1 00:07:22.714 17:54:11 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.714 17:54:11 -- accel/accel.sh@19 -- # IFS=: 00:07:22.714 17:54:11 -- accel/accel.sh@19 -- # read -r var val 00:07:22.714 17:54:11 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:22.714 17:54:11 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.714 17:54:11 -- accel/accel.sh@19 -- # IFS=: 00:07:22.714 17:54:11 -- accel/accel.sh@19 -- # read -r var val 00:07:22.714 17:54:11 -- accel/accel.sh@20 -- # val=Yes 00:07:22.714 17:54:11 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.714 17:54:11 -- accel/accel.sh@19 -- # IFS=: 00:07:22.714 17:54:11 -- accel/accel.sh@19 -- # read -r var val 00:07:22.714 17:54:11 -- accel/accel.sh@20 -- # val= 00:07:22.714 17:54:11 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.714 17:54:11 -- accel/accel.sh@19 -- # IFS=: 00:07:22.714 17:54:11 -- accel/accel.sh@19 -- # read -r var val 00:07:22.714 17:54:11 -- accel/accel.sh@20 -- # val= 00:07:22.714 17:54:11 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.714 17:54:11 -- accel/accel.sh@19 -- # IFS=: 00:07:22.714 17:54:11 -- accel/accel.sh@19 -- # read -r var val 00:07:24.087 17:54:12 -- accel/accel.sh@20 -- # val= 00:07:24.087 17:54:12 -- accel/accel.sh@21 -- # case "$var" in 00:07:24.087 17:54:12 -- accel/accel.sh@19 -- # IFS=: 00:07:24.087 17:54:12 -- accel/accel.sh@19 -- # read -r var val 00:07:24.087 17:54:12 -- accel/accel.sh@20 -- # val= 00:07:24.087 17:54:12 -- accel/accel.sh@21 -- # case "$var" in 00:07:24.087 17:54:12 -- accel/accel.sh@19 -- # IFS=: 00:07:24.087 17:54:12 -- accel/accel.sh@19 -- # read -r var val 00:07:24.087 17:54:12 -- accel/accel.sh@20 -- # val= 00:07:24.087 17:54:12 -- accel/accel.sh@21 -- # case "$var" in 00:07:24.087 17:54:12 -- accel/accel.sh@19 -- # IFS=: 00:07:24.087 17:54:12 -- accel/accel.sh@19 -- # read -r var val 00:07:24.087 17:54:12 -- accel/accel.sh@20 -- # val= 00:07:24.087 17:54:12 -- accel/accel.sh@21 -- # case "$var" in 00:07:24.087 17:54:12 -- accel/accel.sh@19 -- # IFS=: 00:07:24.087 17:54:12 -- accel/accel.sh@19 -- # read -r var val 00:07:24.087 17:54:12 -- accel/accel.sh@20 -- # val= 00:07:24.087 17:54:12 -- accel/accel.sh@21 -- # case "$var" in 00:07:24.087 17:54:12 -- accel/accel.sh@19 -- # IFS=: 00:07:24.088 17:54:12 -- accel/accel.sh@19 -- # read -r var val 00:07:24.088 17:54:12 -- accel/accel.sh@20 -- # val= 00:07:24.088 17:54:12 -- accel/accel.sh@21 -- # case "$var" in 00:07:24.088 17:54:12 -- accel/accel.sh@19 -- # IFS=: 00:07:24.088 17:54:12 -- accel/accel.sh@19 -- # read -r var val 00:07:24.088 17:54:12 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:24.088 17:54:12 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:24.088 17:54:12 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:24.088 00:07:24.088 real 0m1.422s 00:07:24.088 user 0m1.265s 00:07:24.088 sys 0m0.159s 00:07:24.088 17:54:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:24.088 17:54:12 -- common/autotest_common.sh@10 -- # set +x 00:07:24.088 ************************************ 00:07:24.088 END TEST accel_crc32c 00:07:24.088 ************************************ 00:07:24.088 17:54:12 -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:24.088 17:54:12 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:24.088 17:54:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:24.088 17:54:12 -- 
common/autotest_common.sh@10 -- # set +x 00:07:24.088 ************************************ 00:07:24.088 START TEST accel_crc32c_C2 00:07:24.088 ************************************ 00:07:24.088 17:54:12 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:24.088 17:54:12 -- accel/accel.sh@16 -- # local accel_opc 00:07:24.088 17:54:12 -- accel/accel.sh@17 -- # local accel_module 00:07:24.088 17:54:12 -- accel/accel.sh@19 -- # IFS=: 00:07:24.088 17:54:12 -- accel/accel.sh@19 -- # read -r var val 00:07:24.088 17:54:12 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:24.088 17:54:12 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:24.088 17:54:12 -- accel/accel.sh@12 -- # build_accel_config 00:07:24.088 17:54:12 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:24.088 17:54:12 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:24.088 17:54:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:24.088 17:54:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:24.088 17:54:12 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:24.088 17:54:12 -- accel/accel.sh@40 -- # local IFS=, 00:07:24.088 17:54:12 -- accel/accel.sh@41 -- # jq -r . 00:07:24.088 [2024-04-15 17:54:12.939644] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:07:24.088 [2024-04-15 17:54:12.939709] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3209391 ] 00:07:24.088 EAL: No free 2048 kB hugepages reported on node 1 00:07:24.088 [2024-04-15 17:54:13.012241] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.346 [2024-04-15 17:54:13.107189] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.346 [2024-04-15 17:54:13.107881] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:07:24.346 17:54:13 -- accel/accel.sh@20 -- # val= 00:07:24.346 17:54:13 -- accel/accel.sh@21 -- # case "$var" in 00:07:24.346 17:54:13 -- accel/accel.sh@19 -- # IFS=: 00:07:24.346 17:54:13 -- accel/accel.sh@19 -- # read -r var val 00:07:24.346 17:54:13 -- accel/accel.sh@20 -- # val= 00:07:24.346 17:54:13 -- accel/accel.sh@21 -- # case "$var" in 00:07:24.346 17:54:13 -- accel/accel.sh@19 -- # IFS=: 00:07:24.346 17:54:13 -- accel/accel.sh@19 -- # read -r var val 00:07:24.346 17:54:13 -- accel/accel.sh@20 -- # val=0x1 00:07:24.346 17:54:13 -- accel/accel.sh@21 -- # case "$var" in 00:07:24.346 17:54:13 -- accel/accel.sh@19 -- # IFS=: 00:07:24.347 17:54:13 -- accel/accel.sh@19 -- # read -r var val 00:07:24.347 17:54:13 -- accel/accel.sh@20 -- # val= 00:07:24.347 17:54:13 -- accel/accel.sh@21 -- # case "$var" in 00:07:24.347 17:54:13 -- accel/accel.sh@19 -- # IFS=: 00:07:24.347 17:54:13 -- accel/accel.sh@19 -- # read -r var val 00:07:24.347 17:54:13 -- accel/accel.sh@20 -- # val= 00:07:24.347 17:54:13 -- accel/accel.sh@21 -- # case "$var" in 00:07:24.347 17:54:13 -- accel/accel.sh@19 -- # IFS=: 00:07:24.347 17:54:13 -- accel/accel.sh@19 -- # read -r var val 00:07:24.347 17:54:13 -- accel/accel.sh@20 -- # val=crc32c 00:07:24.347 17:54:13 -- accel/accel.sh@21 -- # case "$var" in 00:07:24.347 17:54:13 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:24.347 17:54:13 -- accel/accel.sh@19 -- # IFS=: 00:07:24.347 17:54:13 -- accel/accel.sh@19 -- # read -r var val 00:07:24.347 17:54:13 -- 
accel/accel.sh@20 -- # val=0 00:07:24.347 17:54:13 -- accel/accel.sh@21 -- # case "$var" in 00:07:24.347 17:54:13 -- accel/accel.sh@19 -- # IFS=: 00:07:24.347 17:54:13 -- accel/accel.sh@19 -- # read -r var val 00:07:24.347 17:54:13 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:24.347 17:54:13 -- accel/accel.sh@21 -- # case "$var" in 00:07:24.347 17:54:13 -- accel/accel.sh@19 -- # IFS=: 00:07:24.347 17:54:13 -- accel/accel.sh@19 -- # read -r var val 00:07:24.347 17:54:13 -- accel/accel.sh@20 -- # val= 00:07:24.347 17:54:13 -- accel/accel.sh@21 -- # case "$var" in 00:07:24.347 17:54:13 -- accel/accel.sh@19 -- # IFS=: 00:07:24.347 17:54:13 -- accel/accel.sh@19 -- # read -r var val 00:07:24.347 17:54:13 -- accel/accel.sh@20 -- # val=software 00:07:24.347 17:54:13 -- accel/accel.sh@21 -- # case "$var" in 00:07:24.347 17:54:13 -- accel/accel.sh@22 -- # accel_module=software 00:07:24.347 17:54:13 -- accel/accel.sh@19 -- # IFS=: 00:07:24.347 17:54:13 -- accel/accel.sh@19 -- # read -r var val 00:07:24.347 17:54:13 -- accel/accel.sh@20 -- # val=32 00:07:24.347 17:54:13 -- accel/accel.sh@21 -- # case "$var" in 00:07:24.347 17:54:13 -- accel/accel.sh@19 -- # IFS=: 00:07:24.347 17:54:13 -- accel/accel.sh@19 -- # read -r var val 00:07:24.347 17:54:13 -- accel/accel.sh@20 -- # val=32 00:07:24.347 17:54:13 -- accel/accel.sh@21 -- # case "$var" in 00:07:24.347 17:54:13 -- accel/accel.sh@19 -- # IFS=: 00:07:24.347 17:54:13 -- accel/accel.sh@19 -- # read -r var val 00:07:24.347 17:54:13 -- accel/accel.sh@20 -- # val=1 00:07:24.347 17:54:13 -- accel/accel.sh@21 -- # case "$var" in 00:07:24.347 17:54:13 -- accel/accel.sh@19 -- # IFS=: 00:07:24.347 17:54:13 -- accel/accel.sh@19 -- # read -r var val 00:07:24.347 17:54:13 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:24.347 17:54:13 -- accel/accel.sh@21 -- # case "$var" in 00:07:24.347 17:54:13 -- accel/accel.sh@19 -- # IFS=: 00:07:24.347 17:54:13 -- accel/accel.sh@19 -- # read -r var val 00:07:24.347 17:54:13 -- accel/accel.sh@20 -- # val=Yes 00:07:24.347 17:54:13 -- accel/accel.sh@21 -- # case "$var" in 00:07:24.347 17:54:13 -- accel/accel.sh@19 -- # IFS=: 00:07:24.347 17:54:13 -- accel/accel.sh@19 -- # read -r var val 00:07:24.347 17:54:13 -- accel/accel.sh@20 -- # val= 00:07:24.347 17:54:13 -- accel/accel.sh@21 -- # case "$var" in 00:07:24.347 17:54:13 -- accel/accel.sh@19 -- # IFS=: 00:07:24.347 17:54:13 -- accel/accel.sh@19 -- # read -r var val 00:07:24.347 17:54:13 -- accel/accel.sh@20 -- # val= 00:07:24.347 17:54:13 -- accel/accel.sh@21 -- # case "$var" in 00:07:24.347 17:54:13 -- accel/accel.sh@19 -- # IFS=: 00:07:24.347 17:54:13 -- accel/accel.sh@19 -- # read -r var val 00:07:25.721 17:54:14 -- accel/accel.sh@20 -- # val= 00:07:25.721 17:54:14 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.721 17:54:14 -- accel/accel.sh@19 -- # IFS=: 00:07:25.721 17:54:14 -- accel/accel.sh@19 -- # read -r var val 00:07:25.721 17:54:14 -- accel/accel.sh@20 -- # val= 00:07:25.721 17:54:14 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.721 17:54:14 -- accel/accel.sh@19 -- # IFS=: 00:07:25.721 17:54:14 -- accel/accel.sh@19 -- # read -r var val 00:07:25.721 17:54:14 -- accel/accel.sh@20 -- # val= 00:07:25.721 17:54:14 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.721 17:54:14 -- accel/accel.sh@19 -- # IFS=: 00:07:25.722 17:54:14 -- accel/accel.sh@19 -- # read -r var val 00:07:25.722 17:54:14 -- accel/accel.sh@20 -- # val= 00:07:25.722 17:54:14 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.722 17:54:14 -- accel/accel.sh@19 -- # IFS=: 00:07:25.722 17:54:14 
-- accel/accel.sh@19 -- # read -r var val 00:07:25.722 17:54:14 -- accel/accel.sh@20 -- # val= 00:07:25.722 17:54:14 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.722 17:54:14 -- accel/accel.sh@19 -- # IFS=: 00:07:25.722 17:54:14 -- accel/accel.sh@19 -- # read -r var val 00:07:25.722 17:54:14 -- accel/accel.sh@20 -- # val= 00:07:25.722 17:54:14 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.722 17:54:14 -- accel/accel.sh@19 -- # IFS=: 00:07:25.722 17:54:14 -- accel/accel.sh@19 -- # read -r var val 00:07:25.722 17:54:14 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:25.722 17:54:14 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:25.722 17:54:14 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:25.722 00:07:25.722 real 0m1.427s 00:07:25.722 user 0m1.278s 00:07:25.722 sys 0m0.150s 00:07:25.722 17:54:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:25.722 17:54:14 -- common/autotest_common.sh@10 -- # set +x 00:07:25.722 ************************************ 00:07:25.722 END TEST accel_crc32c_C2 00:07:25.722 ************************************ 00:07:25.722 17:54:14 -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:25.722 17:54:14 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:25.722 17:54:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:25.722 17:54:14 -- common/autotest_common.sh@10 -- # set +x 00:07:25.722 ************************************ 00:07:25.722 START TEST accel_copy 00:07:25.722 ************************************ 00:07:25.722 17:54:14 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy -y 00:07:25.722 17:54:14 -- accel/accel.sh@16 -- # local accel_opc 00:07:25.722 17:54:14 -- accel/accel.sh@17 -- # local accel_module 00:07:25.722 17:54:14 -- accel/accel.sh@19 -- # IFS=: 00:07:25.722 17:54:14 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:25.722 17:54:14 -- accel/accel.sh@19 -- # read -r var val 00:07:25.722 17:54:14 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:25.722 17:54:14 -- accel/accel.sh@12 -- # build_accel_config 00:07:25.722 17:54:14 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:25.722 17:54:14 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:25.722 17:54:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:25.722 17:54:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:25.722 17:54:14 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:25.722 17:54:14 -- accel/accel.sh@40 -- # local IFS=, 00:07:25.722 17:54:14 -- accel/accel.sh@41 -- # jq -r . 00:07:25.722 [2024-04-15 17:54:14.488219] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
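The accel_copy test beginning here exercises the simplest opcode: move one 4096-byte source buffer to a destination and, with -y, verify the result. On the software module this reduces to a memcpy per task; a minimal sketch (buffer names assumed):

    #include <assert.h>
    #include <stdint.h>
    #include <string.h>

    #define XFER_SIZE 4096   /* the '4096 bytes' value set in the trace */

    static void copy_task(uint8_t *dst, const uint8_t *src)
    {
        memcpy(dst, src, XFER_SIZE);              /* the software copy path */
        assert(memcmp(dst, src, XFER_SIZE) == 0); /* what -y (verify) checks */
    }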
00:07:25.722 [2024-04-15 17:54:14.488283] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3209657 ] 00:07:25.722 EAL: No free 2048 kB hugepages reported on node 1 00:07:25.722 [2024-04-15 17:54:14.555842] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.722 [2024-04-15 17:54:14.650676] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.722 [2024-04-15 17:54:14.651382] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:07:25.981 17:54:14 -- accel/accel.sh@20 -- # val= 00:07:25.981 17:54:14 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.981 17:54:14 -- accel/accel.sh@19 -- # IFS=: 00:07:25.981 17:54:14 -- accel/accel.sh@19 -- # read -r var val 00:07:25.981 17:54:14 -- accel/accel.sh@20 -- # val= 00:07:25.981 17:54:14 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.981 17:54:14 -- accel/accel.sh@19 -- # IFS=: 00:07:25.981 17:54:14 -- accel/accel.sh@19 -- # read -r var val 00:07:25.981 17:54:14 -- accel/accel.sh@20 -- # val=0x1 00:07:25.981 17:54:14 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.981 17:54:14 -- accel/accel.sh@19 -- # IFS=: 00:07:25.981 17:54:14 -- accel/accel.sh@19 -- # read -r var val 00:07:25.981 17:54:14 -- accel/accel.sh@20 -- # val= 00:07:25.981 17:54:14 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.981 17:54:14 -- accel/accel.sh@19 -- # IFS=: 00:07:25.981 17:54:14 -- accel/accel.sh@19 -- # read -r var val 00:07:25.981 17:54:14 -- accel/accel.sh@20 -- # val= 00:07:25.981 17:54:14 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.981 17:54:14 -- accel/accel.sh@19 -- # IFS=: 00:07:25.981 17:54:14 -- accel/accel.sh@19 -- # read -r var val 00:07:25.981 17:54:14 -- accel/accel.sh@20 -- # val=copy 00:07:25.981 17:54:14 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.981 17:54:14 -- accel/accel.sh@23 -- # accel_opc=copy 00:07:25.981 17:54:14 -- accel/accel.sh@19 -- # IFS=: 00:07:25.981 17:54:14 -- accel/accel.sh@19 -- # read -r var val 00:07:25.981 17:54:14 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:25.981 17:54:14 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.981 17:54:14 -- accel/accel.sh@19 -- # IFS=: 00:07:25.981 17:54:14 -- accel/accel.sh@19 -- # read -r var val 00:07:25.981 17:54:14 -- accel/accel.sh@20 -- # val= 00:07:25.981 17:54:14 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.981 17:54:14 -- accel/accel.sh@19 -- # IFS=: 00:07:25.981 17:54:14 -- accel/accel.sh@19 -- # read -r var val 00:07:25.981 17:54:14 -- accel/accel.sh@20 -- # val=software 00:07:25.981 17:54:14 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.981 17:54:14 -- accel/accel.sh@22 -- # accel_module=software 00:07:25.981 17:54:14 -- accel/accel.sh@19 -- # IFS=: 00:07:25.981 17:54:14 -- accel/accel.sh@19 -- # read -r var val 00:07:25.981 17:54:14 -- accel/accel.sh@20 -- # val=32 00:07:25.981 17:54:14 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.981 17:54:14 -- accel/accel.sh@19 -- # IFS=: 00:07:25.981 17:54:14 -- accel/accel.sh@19 -- # read -r var val 00:07:25.981 17:54:14 -- accel/accel.sh@20 -- # val=32 00:07:25.981 17:54:14 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.981 17:54:14 -- accel/accel.sh@19 -- # IFS=: 00:07:25.981 17:54:14 -- accel/accel.sh@19 -- # read -r var val 00:07:25.981 17:54:14 -- accel/accel.sh@20 -- # val=1 00:07:25.981 17:54:14 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.981 17:54:14 -- 
accel/accel.sh@19 -- # IFS=: 00:07:25.981 17:54:14 -- accel/accel.sh@19 -- # read -r var val 00:07:25.981 17:54:14 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:25.981 17:54:14 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.981 17:54:14 -- accel/accel.sh@19 -- # IFS=: 00:07:25.981 17:54:14 -- accel/accel.sh@19 -- # read -r var val 00:07:25.981 17:54:14 -- accel/accel.sh@20 -- # val=Yes 00:07:25.981 17:54:14 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.981 17:54:14 -- accel/accel.sh@19 -- # IFS=: 00:07:25.981 17:54:14 -- accel/accel.sh@19 -- # read -r var val 00:07:25.981 17:54:14 -- accel/accel.sh@20 -- # val= 00:07:25.981 17:54:14 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.981 17:54:14 -- accel/accel.sh@19 -- # IFS=: 00:07:25.981 17:54:14 -- accel/accel.sh@19 -- # read -r var val 00:07:25.981 17:54:14 -- accel/accel.sh@20 -- # val= 00:07:25.981 17:54:14 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.981 17:54:14 -- accel/accel.sh@19 -- # IFS=: 00:07:25.981 17:54:14 -- accel/accel.sh@19 -- # read -r var val 00:07:27.355 17:54:15 -- accel/accel.sh@20 -- # val= 00:07:27.355 17:54:15 -- accel/accel.sh@21 -- # case "$var" in 00:07:27.355 17:54:15 -- accel/accel.sh@19 -- # IFS=: 00:07:27.355 17:54:15 -- accel/accel.sh@19 -- # read -r var val 00:07:27.355 17:54:15 -- accel/accel.sh@20 -- # val= 00:07:27.355 17:54:15 -- accel/accel.sh@21 -- # case "$var" in 00:07:27.355 17:54:15 -- accel/accel.sh@19 -- # IFS=: 00:07:27.355 17:54:15 -- accel/accel.sh@19 -- # read -r var val 00:07:27.355 17:54:15 -- accel/accel.sh@20 -- # val= 00:07:27.355 17:54:15 -- accel/accel.sh@21 -- # case "$var" in 00:07:27.355 17:54:15 -- accel/accel.sh@19 -- # IFS=: 00:07:27.355 17:54:15 -- accel/accel.sh@19 -- # read -r var val 00:07:27.355 17:54:15 -- accel/accel.sh@20 -- # val= 00:07:27.355 17:54:15 -- accel/accel.sh@21 -- # case "$var" in 00:07:27.355 17:54:15 -- accel/accel.sh@19 -- # IFS=: 00:07:27.355 17:54:15 -- accel/accel.sh@19 -- # read -r var val 00:07:27.355 17:54:15 -- accel/accel.sh@20 -- # val= 00:07:27.355 17:54:15 -- accel/accel.sh@21 -- # case "$var" in 00:07:27.355 17:54:15 -- accel/accel.sh@19 -- # IFS=: 00:07:27.355 17:54:15 -- accel/accel.sh@19 -- # read -r var val 00:07:27.355 17:54:15 -- accel/accel.sh@20 -- # val= 00:07:27.355 17:54:15 -- accel/accel.sh@21 -- # case "$var" in 00:07:27.355 17:54:15 -- accel/accel.sh@19 -- # IFS=: 00:07:27.355 17:54:15 -- accel/accel.sh@19 -- # read -r var val 00:07:27.355 17:54:15 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:27.355 17:54:15 -- accel/accel.sh@27 -- # [[ -n copy ]] 00:07:27.355 17:54:15 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:27.355 00:07:27.355 real 0m1.412s 00:07:27.355 user 0m1.255s 00:07:27.355 sys 0m0.158s 00:07:27.355 17:54:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:27.355 17:54:15 -- common/autotest_common.sh@10 -- # set +x 00:07:27.355 ************************************ 00:07:27.355 END TEST accel_copy 00:07:27.355 ************************************ 00:07:27.355 17:54:15 -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:27.355 17:54:15 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:27.355 17:54:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:27.355 17:54:15 -- common/autotest_common.sh@10 -- # set +x 00:07:27.355 ************************************ 00:07:27.355 START TEST accel_fill 00:07:27.355 ************************************ 00:07:27.355 17:54:16 -- common/autotest_common.sh@1111 -- 
# accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:27.355 17:54:16 -- accel/accel.sh@16 -- # local accel_opc 00:07:27.356 17:54:16 -- accel/accel.sh@17 -- # local accel_module 00:07:27.356 17:54:16 -- accel/accel.sh@19 -- # IFS=: 00:07:27.356 17:54:16 -- accel/accel.sh@19 -- # read -r var val 00:07:27.356 17:54:16 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:27.356 17:54:16 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:27.356 17:54:16 -- accel/accel.sh@12 -- # build_accel_config 00:07:27.356 17:54:16 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:27.356 17:54:16 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:27.356 17:54:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:27.356 17:54:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:27.356 17:54:16 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:27.356 17:54:16 -- accel/accel.sh@40 -- # local IFS=, 00:07:27.356 17:54:16 -- accel/accel.sh@41 -- # jq -r . 00:07:27.356 [2024-04-15 17:54:16.038699] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:07:27.356 [2024-04-15 17:54:16.038775] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3209823 ] 00:07:27.356 EAL: No free 2048 kB hugepages reported on node 1 00:07:27.356 [2024-04-15 17:54:16.118759] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.356 [2024-04-15 17:54:16.212787] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.356 [2024-04-15 17:54:16.213457] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:07:27.356 17:54:16 -- accel/accel.sh@20 -- # val= 00:07:27.356 17:54:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:27.356 17:54:16 -- accel/accel.sh@19 -- # IFS=: 00:07:27.356 17:54:16 -- accel/accel.sh@19 -- # read -r var val 00:07:27.356 17:54:16 -- accel/accel.sh@20 -- # val= 00:07:27.356 17:54:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:27.356 17:54:16 -- accel/accel.sh@19 -- # IFS=: 00:07:27.356 17:54:16 -- accel/accel.sh@19 -- # read -r var val 00:07:27.356 17:54:16 -- accel/accel.sh@20 -- # val=0x1 00:07:27.356 17:54:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:27.356 17:54:16 -- accel/accel.sh@19 -- # IFS=: 00:07:27.356 17:54:16 -- accel/accel.sh@19 -- # read -r var val 00:07:27.356 17:54:16 -- accel/accel.sh@20 -- # val= 00:07:27.356 17:54:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:27.356 17:54:16 -- accel/accel.sh@19 -- # IFS=: 00:07:27.356 17:54:16 -- accel/accel.sh@19 -- # read -r var val 00:07:27.356 17:54:16 -- accel/accel.sh@20 -- # val= 00:07:27.356 17:54:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:27.356 17:54:16 -- accel/accel.sh@19 -- # IFS=: 00:07:27.356 17:54:16 -- accel/accel.sh@19 -- # read -r var val 00:07:27.356 17:54:16 -- accel/accel.sh@20 -- # val=fill 00:07:27.356 17:54:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:27.356 17:54:16 -- accel/accel.sh@23 -- # accel_opc=fill 00:07:27.356 17:54:16 -- accel/accel.sh@19 -- # IFS=: 00:07:27.356 17:54:16 -- accel/accel.sh@19 -- # read -r var val 00:07:27.356 17:54:16 -- accel/accel.sh@20 -- # val=0x80 00:07:27.356 17:54:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:27.356 17:54:16 -- accel/accel.sh@19 -- # IFS=: 00:07:27.356 17:54:16 -- accel/accel.sh@19 -- # read -r var val 
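accel_fill, started above, writes one repeated byte across the destination: the command's -f 128 shows up in the trace as val=0x80, and -q 64 -a 64 as the two val=64 entries. The software path is effectively a memset; a sketch:

    #include <stdint.h>
    #include <string.h>

    static void fill_task(uint8_t *dst, size_t len)
    {
        memset(dst, 0x80, len);   /* 0x80 == 128, the -f argument */
    }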
00:07:27.356 17:54:16 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:27.356 17:54:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:27.356 17:54:16 -- accel/accel.sh@19 -- # IFS=: 00:07:27.356 17:54:16 -- accel/accel.sh@19 -- # read -r var val 00:07:27.356 17:54:16 -- accel/accel.sh@20 -- # val= 00:07:27.356 17:54:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:27.356 17:54:16 -- accel/accel.sh@19 -- # IFS=: 00:07:27.356 17:54:16 -- accel/accel.sh@19 -- # read -r var val 00:07:27.356 17:54:16 -- accel/accel.sh@20 -- # val=software 00:07:27.356 17:54:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:27.356 17:54:16 -- accel/accel.sh@22 -- # accel_module=software 00:07:27.356 17:54:16 -- accel/accel.sh@19 -- # IFS=: 00:07:27.356 17:54:16 -- accel/accel.sh@19 -- # read -r var val 00:07:27.356 17:54:16 -- accel/accel.sh@20 -- # val=64 00:07:27.356 17:54:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:27.356 17:54:16 -- accel/accel.sh@19 -- # IFS=: 00:07:27.356 17:54:16 -- accel/accel.sh@19 -- # read -r var val 00:07:27.356 17:54:16 -- accel/accel.sh@20 -- # val=64 00:07:27.356 17:54:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:27.356 17:54:16 -- accel/accel.sh@19 -- # IFS=: 00:07:27.356 17:54:16 -- accel/accel.sh@19 -- # read -r var val 00:07:27.356 17:54:16 -- accel/accel.sh@20 -- # val=1 00:07:27.356 17:54:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:27.356 17:54:16 -- accel/accel.sh@19 -- # IFS=: 00:07:27.356 17:54:16 -- accel/accel.sh@19 -- # read -r var val 00:07:27.356 17:54:16 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:27.356 17:54:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:27.356 17:54:16 -- accel/accel.sh@19 -- # IFS=: 00:07:27.356 17:54:16 -- accel/accel.sh@19 -- # read -r var val 00:07:27.356 17:54:16 -- accel/accel.sh@20 -- # val=Yes 00:07:27.356 17:54:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:27.356 17:54:16 -- accel/accel.sh@19 -- # IFS=: 00:07:27.356 17:54:16 -- accel/accel.sh@19 -- # read -r var val 00:07:27.356 17:54:16 -- accel/accel.sh@20 -- # val= 00:07:27.356 17:54:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:27.356 17:54:16 -- accel/accel.sh@19 -- # IFS=: 00:07:27.356 17:54:16 -- accel/accel.sh@19 -- # read -r var val 00:07:27.356 17:54:16 -- accel/accel.sh@20 -- # val= 00:07:27.356 17:54:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:27.356 17:54:16 -- accel/accel.sh@19 -- # IFS=: 00:07:27.356 17:54:16 -- accel/accel.sh@19 -- # read -r var val 00:07:28.730 17:54:17 -- accel/accel.sh@20 -- # val= 00:07:28.730 17:54:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.730 17:54:17 -- accel/accel.sh@19 -- # IFS=: 00:07:28.730 17:54:17 -- accel/accel.sh@19 -- # read -r var val 00:07:28.730 17:54:17 -- accel/accel.sh@20 -- # val= 00:07:28.730 17:54:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.730 17:54:17 -- accel/accel.sh@19 -- # IFS=: 00:07:28.730 17:54:17 -- accel/accel.sh@19 -- # read -r var val 00:07:28.730 17:54:17 -- accel/accel.sh@20 -- # val= 00:07:28.730 17:54:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.730 17:54:17 -- accel/accel.sh@19 -- # IFS=: 00:07:28.730 17:54:17 -- accel/accel.sh@19 -- # read -r var val 00:07:28.730 17:54:17 -- accel/accel.sh@20 -- # val= 00:07:28.730 17:54:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.730 17:54:17 -- accel/accel.sh@19 -- # IFS=: 00:07:28.730 17:54:17 -- accel/accel.sh@19 -- # read -r var val 00:07:28.730 17:54:17 -- accel/accel.sh@20 -- # val= 00:07:28.730 17:54:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.730 17:54:17 -- accel/accel.sh@19 -- # IFS=: 
00:07:28.730 17:54:17 -- accel/accel.sh@19 -- # read -r var val 00:07:28.730 17:54:17 -- accel/accel.sh@20 -- # val= 00:07:28.730 17:54:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.730 17:54:17 -- accel/accel.sh@19 -- # IFS=: 00:07:28.730 17:54:17 -- accel/accel.sh@19 -- # read -r var val 00:07:28.730 17:54:17 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:28.730 17:54:17 -- accel/accel.sh@27 -- # [[ -n fill ]] 00:07:28.730 17:54:17 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:28.730 00:07:28.730 real 0m1.435s 00:07:28.730 user 0m1.270s 00:07:28.730 sys 0m0.166s 00:07:28.730 17:54:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:28.730 17:54:17 -- common/autotest_common.sh@10 -- # set +x 00:07:28.730 ************************************ 00:07:28.730 END TEST accel_fill 00:07:28.730 ************************************ 00:07:28.730 17:54:17 -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:28.730 17:54:17 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:28.730 17:54:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:28.730 17:54:17 -- common/autotest_common.sh@10 -- # set +x 00:07:28.730 ************************************ 00:07:28.730 START TEST accel_copy_crc32c 00:07:28.730 ************************************ 00:07:28.730 17:54:17 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y 00:07:28.730 17:54:17 -- accel/accel.sh@16 -- # local accel_opc 00:07:28.730 17:54:17 -- accel/accel.sh@17 -- # local accel_module 00:07:28.730 17:54:17 -- accel/accel.sh@19 -- # IFS=: 00:07:28.730 17:54:17 -- accel/accel.sh@19 -- # read -r var val 00:07:28.730 17:54:17 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:28.730 17:54:17 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:28.730 17:54:17 -- accel/accel.sh@12 -- # build_accel_config 00:07:28.730 17:54:17 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:28.730 17:54:17 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:28.730 17:54:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:28.730 17:54:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:28.730 17:54:17 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:28.730 17:54:17 -- accel/accel.sh@40 -- # local IFS=, 00:07:28.730 17:54:17 -- accel/accel.sh@41 -- # jq -r . 00:07:28.730 [2024-04-15 17:54:17.593881] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
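The copy_crc32c opcode whose test starts here fuses the two operations already covered: the data is copied and its CRC-32C produced in a single task, so an offload engine can do both in one pass over the buffer. In software the fusion is trivial (sketch, reusing crc32c_sw from the earlier aside; the zero seed is an assumption for illustration):

    #include <stdint.h>
    #include <string.h>

    static uint32_t copy_crc32c_task(uint8_t *dst, const uint8_t *src, size_t len)
    {
        memcpy(dst, src, len);
        return crc32c_sw(0, dst, len);   /* seed value assumed for illustration */
    }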
00:07:28.730 [2024-04-15 17:54:17.593945] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3210023 ] 00:07:28.730 EAL: No free 2048 kB hugepages reported on node 1 00:07:28.730 [2024-04-15 17:54:17.661222] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.988 [2024-04-15 17:54:17.756830] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.988 [2024-04-15 17:54:17.757524] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:07:28.988 17:54:17 -- accel/accel.sh@20 -- # val= 00:07:28.988 17:54:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.988 17:54:17 -- accel/accel.sh@19 -- # IFS=: 00:07:28.988 17:54:17 -- accel/accel.sh@19 -- # read -r var val 00:07:28.988 17:54:17 -- accel/accel.sh@20 -- # val= 00:07:28.988 17:54:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.988 17:54:17 -- accel/accel.sh@19 -- # IFS=: 00:07:28.988 17:54:17 -- accel/accel.sh@19 -- # read -r var val 00:07:28.988 17:54:17 -- accel/accel.sh@20 -- # val=0x1 00:07:28.988 17:54:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.988 17:54:17 -- accel/accel.sh@19 -- # IFS=: 00:07:28.988 17:54:17 -- accel/accel.sh@19 -- # read -r var val 00:07:28.988 17:54:17 -- accel/accel.sh@20 -- # val= 00:07:28.988 17:54:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.988 17:54:17 -- accel/accel.sh@19 -- # IFS=: 00:07:28.988 17:54:17 -- accel/accel.sh@19 -- # read -r var val 00:07:28.988 17:54:17 -- accel/accel.sh@20 -- # val= 00:07:28.988 17:54:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.988 17:54:17 -- accel/accel.sh@19 -- # IFS=: 00:07:28.988 17:54:17 -- accel/accel.sh@19 -- # read -r var val 00:07:28.988 17:54:17 -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:28.988 17:54:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.988 17:54:17 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:28.988 17:54:17 -- accel/accel.sh@19 -- # IFS=: 00:07:28.988 17:54:17 -- accel/accel.sh@19 -- # read -r var val 00:07:28.988 17:54:17 -- accel/accel.sh@20 -- # val=0 00:07:28.988 17:54:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.988 17:54:17 -- accel/accel.sh@19 -- # IFS=: 00:07:28.988 17:54:17 -- accel/accel.sh@19 -- # read -r var val 00:07:28.988 17:54:17 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:28.988 17:54:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.988 17:54:17 -- accel/accel.sh@19 -- # IFS=: 00:07:28.988 17:54:17 -- accel/accel.sh@19 -- # read -r var val 00:07:28.988 17:54:17 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:28.988 17:54:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.988 17:54:17 -- accel/accel.sh@19 -- # IFS=: 00:07:28.988 17:54:17 -- accel/accel.sh@19 -- # read -r var val 00:07:28.988 17:54:17 -- accel/accel.sh@20 -- # val= 00:07:28.988 17:54:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.988 17:54:17 -- accel/accel.sh@19 -- # IFS=: 00:07:28.988 17:54:17 -- accel/accel.sh@19 -- # read -r var val 00:07:28.988 17:54:17 -- accel/accel.sh@20 -- # val=software 00:07:28.988 17:54:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.988 17:54:17 -- accel/accel.sh@22 -- # accel_module=software 00:07:28.988 17:54:17 -- accel/accel.sh@19 -- # IFS=: 00:07:28.988 17:54:17 -- accel/accel.sh@19 -- # read -r var val 00:07:28.988 17:54:17 -- accel/accel.sh@20 -- # val=32 00:07:28.988 17:54:17 -- accel/accel.sh@21 -- # case "$var" in 
00:07:28.988 17:54:17 -- accel/accel.sh@19 -- # IFS=: 00:07:28.988 17:54:17 -- accel/accel.sh@19 -- # read -r var val 00:07:28.988 17:54:17 -- accel/accel.sh@20 -- # val=32 00:07:28.988 17:54:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.988 17:54:17 -- accel/accel.sh@19 -- # IFS=: 00:07:28.988 17:54:17 -- accel/accel.sh@19 -- # read -r var val 00:07:28.988 17:54:17 -- accel/accel.sh@20 -- # val=1 00:07:28.988 17:54:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.988 17:54:17 -- accel/accel.sh@19 -- # IFS=: 00:07:28.988 17:54:17 -- accel/accel.sh@19 -- # read -r var val 00:07:28.988 17:54:17 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:28.988 17:54:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.988 17:54:17 -- accel/accel.sh@19 -- # IFS=: 00:07:28.988 17:54:17 -- accel/accel.sh@19 -- # read -r var val 00:07:28.988 17:54:17 -- accel/accel.sh@20 -- # val=Yes 00:07:28.988 17:54:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.988 17:54:17 -- accel/accel.sh@19 -- # IFS=: 00:07:28.988 17:54:17 -- accel/accel.sh@19 -- # read -r var val 00:07:28.988 17:54:17 -- accel/accel.sh@20 -- # val= 00:07:28.988 17:54:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.988 17:54:17 -- accel/accel.sh@19 -- # IFS=: 00:07:28.988 17:54:17 -- accel/accel.sh@19 -- # read -r var val 00:07:28.988 17:54:17 -- accel/accel.sh@20 -- # val= 00:07:28.988 17:54:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.988 17:54:17 -- accel/accel.sh@19 -- # IFS=: 00:07:28.988 17:54:17 -- accel/accel.sh@19 -- # read -r var val 00:07:30.360 17:54:18 -- accel/accel.sh@20 -- # val= 00:07:30.360 17:54:18 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.360 17:54:18 -- accel/accel.sh@19 -- # IFS=: 00:07:30.360 17:54:18 -- accel/accel.sh@19 -- # read -r var val 00:07:30.360 17:54:18 -- accel/accel.sh@20 -- # val= 00:07:30.360 17:54:18 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.360 17:54:18 -- accel/accel.sh@19 -- # IFS=: 00:07:30.360 17:54:18 -- accel/accel.sh@19 -- # read -r var val 00:07:30.360 17:54:18 -- accel/accel.sh@20 -- # val= 00:07:30.360 17:54:18 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.360 17:54:18 -- accel/accel.sh@19 -- # IFS=: 00:07:30.360 17:54:18 -- accel/accel.sh@19 -- # read -r var val 00:07:30.360 17:54:18 -- accel/accel.sh@20 -- # val= 00:07:30.360 17:54:18 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.360 17:54:18 -- accel/accel.sh@19 -- # IFS=: 00:07:30.360 17:54:18 -- accel/accel.sh@19 -- # read -r var val 00:07:30.360 17:54:18 -- accel/accel.sh@20 -- # val= 00:07:30.360 17:54:18 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.360 17:54:18 -- accel/accel.sh@19 -- # IFS=: 00:07:30.360 17:54:18 -- accel/accel.sh@19 -- # read -r var val 00:07:30.360 17:54:18 -- accel/accel.sh@20 -- # val= 00:07:30.360 17:54:18 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.360 17:54:18 -- accel/accel.sh@19 -- # IFS=: 00:07:30.360 17:54:18 -- accel/accel.sh@19 -- # read -r var val 00:07:30.360 17:54:18 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:30.360 17:54:18 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:30.360 17:54:18 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:30.360 00:07:30.360 real 0m1.422s 00:07:30.360 user 0m1.264s 00:07:30.360 sys 0m0.160s 00:07:30.360 17:54:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:30.360 17:54:18 -- common/autotest_common.sh@10 -- # set +x 00:07:30.360 ************************************ 00:07:30.360 END TEST accel_copy_crc32c 00:07:30.360 ************************************ 00:07:30.360 
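The _C2 variants, like the accel_copy_crc32c_C2 run that follows, submit the operation over a chain of buffers (-C 2); the '8192 bytes' value alongside '4096 bytes' in the trace below is consistent with two chained 4096-byte segments. Chaining must yield the same CRC as one contiguous pass, which with the incremental helper above is just (sketch; iovec layout assumed):

    #include <stdint.h>
    #include <sys/uio.h>

    static uint32_t crc32c_iov(const struct iovec *iov, int iovcnt)
    {
        uint32_t crc = 0;   /* seed assumed, as before */
        for (int i = 0; i < iovcnt; i++)
            crc = crc32c_sw(crc, iov[i].iov_base, iov[i].iov_len);
        return crc;
    }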
17:54:19 -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:30.360 17:54:19 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:30.360 17:54:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:30.360 17:54:19 -- common/autotest_common.sh@10 -- # set +x 00:07:30.360 ************************************ 00:07:30.360 START TEST accel_copy_crc32c_C2 00:07:30.360 ************************************ 00:07:30.360 17:54:19 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:30.360 17:54:19 -- accel/accel.sh@16 -- # local accel_opc 00:07:30.360 17:54:19 -- accel/accel.sh@17 -- # local accel_module 00:07:30.360 17:54:19 -- accel/accel.sh@19 -- # IFS=: 00:07:30.360 17:54:19 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:30.360 17:54:19 -- accel/accel.sh@19 -- # read -r var val 00:07:30.360 17:54:19 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:30.360 17:54:19 -- accel/accel.sh@12 -- # build_accel_config 00:07:30.360 17:54:19 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:30.360 17:54:19 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:30.360 17:54:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:30.360 17:54:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:30.360 17:54:19 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:30.360 17:54:19 -- accel/accel.sh@40 -- # local IFS=, 00:07:30.360 17:54:19 -- accel/accel.sh@41 -- # jq -r . 00:07:30.360 [2024-04-15 17:54:19.136307] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:07:30.360 [2024-04-15 17:54:19.136371] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3210266 ] 00:07:30.360 EAL: No free 2048 kB hugepages reported on node 1 00:07:30.360 [2024-04-15 17:54:19.202494] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.360 [2024-04-15 17:54:19.295979] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.360 [2024-04-15 17:54:19.296730] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:07:30.620 17:54:19 -- accel/accel.sh@20 -- # val= 00:07:30.620 17:54:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.620 17:54:19 -- accel/accel.sh@19 -- # IFS=: 00:07:30.620 17:54:19 -- accel/accel.sh@19 -- # read -r var val 00:07:30.620 17:54:19 -- accel/accel.sh@20 -- # val= 00:07:30.620 17:54:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.620 17:54:19 -- accel/accel.sh@19 -- # IFS=: 00:07:30.620 17:54:19 -- accel/accel.sh@19 -- # read -r var val 00:07:30.620 17:54:19 -- accel/accel.sh@20 -- # val=0x1 00:07:30.620 17:54:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.620 17:54:19 -- accel/accel.sh@19 -- # IFS=: 00:07:30.620 17:54:19 -- accel/accel.sh@19 -- # read -r var val 00:07:30.620 17:54:19 -- accel/accel.sh@20 -- # val= 00:07:30.620 17:54:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.620 17:54:19 -- accel/accel.sh@19 -- # IFS=: 00:07:30.620 17:54:19 -- accel/accel.sh@19 -- # read -r var val 00:07:30.620 17:54:19 -- accel/accel.sh@20 -- # val= 00:07:30.620 17:54:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.620 17:54:19 -- accel/accel.sh@19 -- # IFS=: 00:07:30.620 17:54:19 -- accel/accel.sh@19 -- # read -r var val 00:07:30.620 17:54:19 -- 
accel/accel.sh@20 -- # val=copy_crc32c 00:07:30.620 17:54:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.620 17:54:19 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:30.620 17:54:19 -- accel/accel.sh@19 -- # IFS=: 00:07:30.620 17:54:19 -- accel/accel.sh@19 -- # read -r var val 00:07:30.620 17:54:19 -- accel/accel.sh@20 -- # val=0 00:07:30.620 17:54:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.620 17:54:19 -- accel/accel.sh@19 -- # IFS=: 00:07:30.620 17:54:19 -- accel/accel.sh@19 -- # read -r var val 00:07:30.620 17:54:19 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:30.620 17:54:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.620 17:54:19 -- accel/accel.sh@19 -- # IFS=: 00:07:30.620 17:54:19 -- accel/accel.sh@19 -- # read -r var val 00:07:30.620 17:54:19 -- accel/accel.sh@20 -- # val='8192 bytes' 00:07:30.620 17:54:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.620 17:54:19 -- accel/accel.sh@19 -- # IFS=: 00:07:30.620 17:54:19 -- accel/accel.sh@19 -- # read -r var val 00:07:30.620 17:54:19 -- accel/accel.sh@20 -- # val= 00:07:30.620 17:54:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.620 17:54:19 -- accel/accel.sh@19 -- # IFS=: 00:07:30.620 17:54:19 -- accel/accel.sh@19 -- # read -r var val 00:07:30.620 17:54:19 -- accel/accel.sh@20 -- # val=software 00:07:30.620 17:54:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.620 17:54:19 -- accel/accel.sh@22 -- # accel_module=software 00:07:30.620 17:54:19 -- accel/accel.sh@19 -- # IFS=: 00:07:30.620 17:54:19 -- accel/accel.sh@19 -- # read -r var val 00:07:30.620 17:54:19 -- accel/accel.sh@20 -- # val=32 00:07:30.620 17:54:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.620 17:54:19 -- accel/accel.sh@19 -- # IFS=: 00:07:30.620 17:54:19 -- accel/accel.sh@19 -- # read -r var val 00:07:30.620 17:54:19 -- accel/accel.sh@20 -- # val=32 00:07:30.620 17:54:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.620 17:54:19 -- accel/accel.sh@19 -- # IFS=: 00:07:30.620 17:54:19 -- accel/accel.sh@19 -- # read -r var val 00:07:30.620 17:54:19 -- accel/accel.sh@20 -- # val=1 00:07:30.620 17:54:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.620 17:54:19 -- accel/accel.sh@19 -- # IFS=: 00:07:30.620 17:54:19 -- accel/accel.sh@19 -- # read -r var val 00:07:30.620 17:54:19 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:30.620 17:54:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.620 17:54:19 -- accel/accel.sh@19 -- # IFS=: 00:07:30.620 17:54:19 -- accel/accel.sh@19 -- # read -r var val 00:07:30.620 17:54:19 -- accel/accel.sh@20 -- # val=Yes 00:07:30.620 17:54:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.620 17:54:19 -- accel/accel.sh@19 -- # IFS=: 00:07:30.620 17:54:19 -- accel/accel.sh@19 -- # read -r var val 00:07:30.620 17:54:19 -- accel/accel.sh@20 -- # val= 00:07:30.620 17:54:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.620 17:54:19 -- accel/accel.sh@19 -- # IFS=: 00:07:30.620 17:54:19 -- accel/accel.sh@19 -- # read -r var val 00:07:30.620 17:54:19 -- accel/accel.sh@20 -- # val= 00:07:30.620 17:54:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.620 17:54:19 -- accel/accel.sh@19 -- # IFS=: 00:07:30.620 17:54:19 -- accel/accel.sh@19 -- # read -r var val 00:07:31.995 17:54:20 -- accel/accel.sh@20 -- # val= 00:07:31.995 17:54:20 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.995 17:54:20 -- accel/accel.sh@19 -- # IFS=: 00:07:31.995 17:54:20 -- accel/accel.sh@19 -- # read -r var val 00:07:31.995 17:54:20 -- accel/accel.sh@20 -- # val= 00:07:31.995 17:54:20 -- accel/accel.sh@21 -- # 
case "$var" in 00:07:31.995 17:54:20 -- accel/accel.sh@19 -- # IFS=: 00:07:31.995 17:54:20 -- accel/accel.sh@19 -- # read -r var val 00:07:31.995 17:54:20 -- accel/accel.sh@20 -- # val= 00:07:31.995 17:54:20 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.995 17:54:20 -- accel/accel.sh@19 -- # IFS=: 00:07:31.995 17:54:20 -- accel/accel.sh@19 -- # read -r var val 00:07:31.995 17:54:20 -- accel/accel.sh@20 -- # val= 00:07:31.995 17:54:20 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.995 17:54:20 -- accel/accel.sh@19 -- # IFS=: 00:07:31.995 17:54:20 -- accel/accel.sh@19 -- # read -r var val 00:07:31.995 17:54:20 -- accel/accel.sh@20 -- # val= 00:07:31.995 17:54:20 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.995 17:54:20 -- accel/accel.sh@19 -- # IFS=: 00:07:31.995 17:54:20 -- accel/accel.sh@19 -- # read -r var val 00:07:31.995 17:54:20 -- accel/accel.sh@20 -- # val= 00:07:31.995 17:54:20 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.995 17:54:20 -- accel/accel.sh@19 -- # IFS=: 00:07:31.995 17:54:20 -- accel/accel.sh@19 -- # read -r var val 00:07:31.995 17:54:20 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:31.995 17:54:20 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:31.995 17:54:20 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:31.995 00:07:31.995 real 0m1.421s 00:07:31.995 user 0m1.274s 00:07:31.995 sys 0m0.149s 00:07:31.995 17:54:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:31.995 17:54:20 -- common/autotest_common.sh@10 -- # set +x 00:07:31.995 ************************************ 00:07:31.995 END TEST accel_copy_crc32c_C2 00:07:31.995 ************************************ 00:07:31.995 17:54:20 -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:31.995 17:54:20 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:31.995 17:54:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:31.995 17:54:20 -- common/autotest_common.sh@10 -- # set +x 00:07:31.995 ************************************ 00:07:31.995 START TEST accel_dualcast 00:07:31.995 ************************************ 00:07:31.995 17:54:20 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dualcast -y 00:07:31.995 17:54:20 -- accel/accel.sh@16 -- # local accel_opc 00:07:31.995 17:54:20 -- accel/accel.sh@17 -- # local accel_module 00:07:31.995 17:54:20 -- accel/accel.sh@19 -- # IFS=: 00:07:31.995 17:54:20 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:31.995 17:54:20 -- accel/accel.sh@19 -- # read -r var val 00:07:31.995 17:54:20 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:31.995 17:54:20 -- accel/accel.sh@12 -- # build_accel_config 00:07:31.995 17:54:20 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:31.995 17:54:20 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:31.995 17:54:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:31.995 17:54:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:31.995 17:54:20 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:31.995 17:54:20 -- accel/accel.sh@40 -- # local IFS=, 00:07:31.995 17:54:20 -- accel/accel.sh@41 -- # jq -r . 00:07:31.995 [2024-04-15 17:54:20.674072] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
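accel_dualcast, starting here, copies a single source into two destination buffers in one operation (useful when the same data must land in two places, e.g. a buffer and its mirror). A software sketch:

    #include <stdint.h>
    #include <string.h>

    static void dualcast_task(uint8_t *dst1, uint8_t *dst2,
                              const uint8_t *src, size_t len)
    {
        memcpy(dst1, src, len);
        memcpy(dst2, src, len);
    }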
00:07:31.995 [2024-04-15 17:54:20.674140] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3210423 ] 00:07:31.995 EAL: No free 2048 kB hugepages reported on node 1 00:07:31.995 [2024-04-15 17:54:20.741135] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.995 [2024-04-15 17:54:20.835306] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.995 [2024-04-15 17:54:20.835953] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:07:31.995 17:54:20 -- accel/accel.sh@20 -- # val= 00:07:31.995 17:54:20 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.995 17:54:20 -- accel/accel.sh@19 -- # IFS=: 00:07:31.995 17:54:20 -- accel/accel.sh@19 -- # read -r var val 00:07:31.995 17:54:20 -- accel/accel.sh@20 -- # val= 00:07:31.995 17:54:20 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.995 17:54:20 -- accel/accel.sh@19 -- # IFS=: 00:07:31.995 17:54:20 -- accel/accel.sh@19 -- # read -r var val 00:07:31.995 17:54:20 -- accel/accel.sh@20 -- # val=0x1 00:07:31.995 17:54:20 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.995 17:54:20 -- accel/accel.sh@19 -- # IFS=: 00:07:31.995 17:54:20 -- accel/accel.sh@19 -- # read -r var val 00:07:31.995 17:54:20 -- accel/accel.sh@20 -- # val= 00:07:31.995 17:54:20 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.995 17:54:20 -- accel/accel.sh@19 -- # IFS=: 00:07:31.995 17:54:20 -- accel/accel.sh@19 -- # read -r var val 00:07:31.995 17:54:20 -- accel/accel.sh@20 -- # val= 00:07:31.995 17:54:20 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.995 17:54:20 -- accel/accel.sh@19 -- # IFS=: 00:07:31.995 17:54:20 -- accel/accel.sh@19 -- # read -r var val 00:07:31.995 17:54:20 -- accel/accel.sh@20 -- # val=dualcast 00:07:31.995 17:54:20 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.995 17:54:20 -- accel/accel.sh@23 -- # accel_opc=dualcast 00:07:31.995 17:54:20 -- accel/accel.sh@19 -- # IFS=: 00:07:31.995 17:54:20 -- accel/accel.sh@19 -- # read -r var val 00:07:31.995 17:54:20 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:31.995 17:54:20 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.995 17:54:20 -- accel/accel.sh@19 -- # IFS=: 00:07:31.995 17:54:20 -- accel/accel.sh@19 -- # read -r var val 00:07:31.995 17:54:20 -- accel/accel.sh@20 -- # val= 00:07:31.995 17:54:20 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.995 17:54:20 -- accel/accel.sh@19 -- # IFS=: 00:07:31.995 17:54:20 -- accel/accel.sh@19 -- # read -r var val 00:07:31.995 17:54:20 -- accel/accel.sh@20 -- # val=software 00:07:31.995 17:54:20 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.995 17:54:20 -- accel/accel.sh@22 -- # accel_module=software 00:07:31.995 17:54:20 -- accel/accel.sh@19 -- # IFS=: 00:07:31.995 17:54:20 -- accel/accel.sh@19 -- # read -r var val 00:07:31.995 17:54:20 -- accel/accel.sh@20 -- # val=32 00:07:31.995 17:54:20 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.995 17:54:20 -- accel/accel.sh@19 -- # IFS=: 00:07:31.995 17:54:20 -- accel/accel.sh@19 -- # read -r var val 00:07:31.995 17:54:20 -- accel/accel.sh@20 -- # val=32 00:07:31.995 17:54:20 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.995 17:54:20 -- accel/accel.sh@19 -- # IFS=: 00:07:31.995 17:54:20 -- accel/accel.sh@19 -- # read -r var val 00:07:31.995 17:54:20 -- accel/accel.sh@20 -- # val=1 00:07:31.995 17:54:20 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.995 17:54:20 -- 
accel/accel.sh@19 -- # IFS=: 00:07:31.995 17:54:20 -- accel/accel.sh@19 -- # read -r var val 00:07:31.995 17:54:20 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:31.995 17:54:20 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.995 17:54:20 -- accel/accel.sh@19 -- # IFS=: 00:07:31.995 17:54:20 -- accel/accel.sh@19 -- # read -r var val 00:07:31.995 17:54:20 -- accel/accel.sh@20 -- # val=Yes 00:07:31.995 17:54:20 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.995 17:54:20 -- accel/accel.sh@19 -- # IFS=: 00:07:31.995 17:54:20 -- accel/accel.sh@19 -- # read -r var val 00:07:31.995 17:54:20 -- accel/accel.sh@20 -- # val= 00:07:31.995 17:54:20 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.995 17:54:20 -- accel/accel.sh@19 -- # IFS=: 00:07:31.995 17:54:20 -- accel/accel.sh@19 -- # read -r var val 00:07:31.995 17:54:20 -- accel/accel.sh@20 -- # val= 00:07:31.995 17:54:20 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.995 17:54:20 -- accel/accel.sh@19 -- # IFS=: 00:07:31.995 17:54:20 -- accel/accel.sh@19 -- # read -r var val 00:07:33.368 17:54:22 -- accel/accel.sh@20 -- # val= 00:07:33.368 17:54:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.368 17:54:22 -- accel/accel.sh@19 -- # IFS=: 00:07:33.368 17:54:22 -- accel/accel.sh@19 -- # read -r var val 00:07:33.368 17:54:22 -- accel/accel.sh@20 -- # val= 00:07:33.368 17:54:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.368 17:54:22 -- accel/accel.sh@19 -- # IFS=: 00:07:33.368 17:54:22 -- accel/accel.sh@19 -- # read -r var val 00:07:33.368 17:54:22 -- accel/accel.sh@20 -- # val= 00:07:33.368 17:54:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.368 17:54:22 -- accel/accel.sh@19 -- # IFS=: 00:07:33.368 17:54:22 -- accel/accel.sh@19 -- # read -r var val 00:07:33.368 17:54:22 -- accel/accel.sh@20 -- # val= 00:07:33.368 17:54:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.368 17:54:22 -- accel/accel.sh@19 -- # IFS=: 00:07:33.368 17:54:22 -- accel/accel.sh@19 -- # read -r var val 00:07:33.368 17:54:22 -- accel/accel.sh@20 -- # val= 00:07:33.368 17:54:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.368 17:54:22 -- accel/accel.sh@19 -- # IFS=: 00:07:33.368 17:54:22 -- accel/accel.sh@19 -- # read -r var val 00:07:33.368 17:54:22 -- accel/accel.sh@20 -- # val= 00:07:33.368 17:54:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.368 17:54:22 -- accel/accel.sh@19 -- # IFS=: 00:07:33.368 17:54:22 -- accel/accel.sh@19 -- # read -r var val 00:07:33.368 17:54:22 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:33.368 17:54:22 -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:07:33.368 17:54:22 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:33.368 00:07:33.368 real 0m1.416s 00:07:33.368 user 0m1.275s 00:07:33.368 sys 0m0.143s 00:07:33.368 17:54:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:33.368 17:54:22 -- common/autotest_common.sh@10 -- # set +x 00:07:33.368 ************************************ 00:07:33.368 END TEST accel_dualcast 00:07:33.368 ************************************ 00:07:33.368 17:54:22 -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:33.368 17:54:22 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:33.368 17:54:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:33.368 17:54:22 -- common/autotest_common.sh@10 -- # set +x 00:07:33.368 ************************************ 00:07:33.368 START TEST accel_compare 00:07:33.368 ************************************ 00:07:33.368 17:54:22 -- common/autotest_common.sh@1111 -- # 
accel_test -t 1 -w compare -y 00:07:33.368 17:54:22 -- accel/accel.sh@16 -- # local accel_opc 00:07:33.368 17:54:22 -- accel/accel.sh@17 -- # local accel_module 00:07:33.368 17:54:22 -- accel/accel.sh@19 -- # IFS=: 00:07:33.368 17:54:22 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:33.368 17:54:22 -- accel/accel.sh@19 -- # read -r var val 00:07:33.368 17:54:22 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:33.368 17:54:22 -- accel/accel.sh@12 -- # build_accel_config 00:07:33.368 17:54:22 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:33.368 17:54:22 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:33.368 17:54:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:33.368 17:54:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:33.368 17:54:22 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:33.368 17:54:22 -- accel/accel.sh@40 -- # local IFS=, 00:07:33.368 17:54:22 -- accel/accel.sh@41 -- # jq -r . 00:07:33.368 [2024-04-15 17:54:22.210615] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:07:33.368 [2024-04-15 17:54:22.210682] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3210661 ] 00:07:33.368 EAL: No free 2048 kB hugepages reported on node 1 00:07:33.369 [2024-04-15 17:54:22.278690] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.626 [2024-04-15 17:54:22.371288] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.626 [2024-04-15 17:54:22.372003] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:07:33.626 17:54:22 -- accel/accel.sh@20 -- # val= 00:07:33.626 17:54:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.626 17:54:22 -- accel/accel.sh@19 -- # IFS=: 00:07:33.626 17:54:22 -- accel/accel.sh@19 -- # read -r var val 00:07:33.626 17:54:22 -- accel/accel.sh@20 -- # val= 00:07:33.627 17:54:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.627 17:54:22 -- accel/accel.sh@19 -- # IFS=: 00:07:33.627 17:54:22 -- accel/accel.sh@19 -- # read -r var val 00:07:33.627 17:54:22 -- accel/accel.sh@20 -- # val=0x1 00:07:33.627 17:54:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.627 17:54:22 -- accel/accel.sh@19 -- # IFS=: 00:07:33.627 17:54:22 -- accel/accel.sh@19 -- # read -r var val 00:07:33.627 17:54:22 -- accel/accel.sh@20 -- # val= 00:07:33.627 17:54:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.627 17:54:22 -- accel/accel.sh@19 -- # IFS=: 00:07:33.627 17:54:22 -- accel/accel.sh@19 -- # read -r var val 00:07:33.627 17:54:22 -- accel/accel.sh@20 -- # val= 00:07:33.627 17:54:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.627 17:54:22 -- accel/accel.sh@19 -- # IFS=: 00:07:33.627 17:54:22 -- accel/accel.sh@19 -- # read -r var val 00:07:33.627 17:54:22 -- accel/accel.sh@20 -- # val=compare 00:07:33.627 17:54:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.627 17:54:22 -- accel/accel.sh@23 -- # accel_opc=compare 00:07:33.627 17:54:22 -- accel/accel.sh@19 -- # IFS=: 00:07:33.627 17:54:22 -- accel/accel.sh@19 -- # read -r var val 00:07:33.627 17:54:22 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:33.627 17:54:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.627 17:54:22 -- accel/accel.sh@19 -- # IFS=: 00:07:33.627 17:54:22 -- accel/accel.sh@19 -- # read -r var val 00:07:33.627 17:54:22 -- 
accel/accel.sh@20 -- # val= 00:07:33.627 17:54:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.627 17:54:22 -- accel/accel.sh@19 -- # IFS=: 00:07:33.627 17:54:22 -- accel/accel.sh@19 -- # read -r var val 00:07:33.627 17:54:22 -- accel/accel.sh@20 -- # val=software 00:07:33.627 17:54:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.627 17:54:22 -- accel/accel.sh@22 -- # accel_module=software 00:07:33.627 17:54:22 -- accel/accel.sh@19 -- # IFS=: 00:07:33.627 17:54:22 -- accel/accel.sh@19 -- # read -r var val 00:07:33.627 17:54:22 -- accel/accel.sh@20 -- # val=32 00:07:33.627 17:54:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.627 17:54:22 -- accel/accel.sh@19 -- # IFS=: 00:07:33.627 17:54:22 -- accel/accel.sh@19 -- # read -r var val 00:07:33.627 17:54:22 -- accel/accel.sh@20 -- # val=32 00:07:33.627 17:54:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.627 17:54:22 -- accel/accel.sh@19 -- # IFS=: 00:07:33.627 17:54:22 -- accel/accel.sh@19 -- # read -r var val 00:07:33.627 17:54:22 -- accel/accel.sh@20 -- # val=1 00:07:33.627 17:54:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.627 17:54:22 -- accel/accel.sh@19 -- # IFS=: 00:07:33.627 17:54:22 -- accel/accel.sh@19 -- # read -r var val 00:07:33.627 17:54:22 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:33.627 17:54:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.627 17:54:22 -- accel/accel.sh@19 -- # IFS=: 00:07:33.627 17:54:22 -- accel/accel.sh@19 -- # read -r var val 00:07:33.627 17:54:22 -- accel/accel.sh@20 -- # val=Yes 00:07:33.627 17:54:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.627 17:54:22 -- accel/accel.sh@19 -- # IFS=: 00:07:33.627 17:54:22 -- accel/accel.sh@19 -- # read -r var val 00:07:33.627 17:54:22 -- accel/accel.sh@20 -- # val= 00:07:33.627 17:54:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.627 17:54:22 -- accel/accel.sh@19 -- # IFS=: 00:07:33.627 17:54:22 -- accel/accel.sh@19 -- # read -r var val 00:07:33.627 17:54:22 -- accel/accel.sh@20 -- # val= 00:07:33.627 17:54:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.627 17:54:22 -- accel/accel.sh@19 -- # IFS=: 00:07:33.627 17:54:22 -- accel/accel.sh@19 -- # read -r var val 00:07:35.001 17:54:23 -- accel/accel.sh@20 -- # val= 00:07:35.001 17:54:23 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.001 17:54:23 -- accel/accel.sh@19 -- # IFS=: 00:07:35.001 17:54:23 -- accel/accel.sh@19 -- # read -r var val 00:07:35.001 17:54:23 -- accel/accel.sh@20 -- # val= 00:07:35.001 17:54:23 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.001 17:54:23 -- accel/accel.sh@19 -- # IFS=: 00:07:35.001 17:54:23 -- accel/accel.sh@19 -- # read -r var val 00:07:35.001 17:54:23 -- accel/accel.sh@20 -- # val= 00:07:35.001 17:54:23 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.001 17:54:23 -- accel/accel.sh@19 -- # IFS=: 00:07:35.001 17:54:23 -- accel/accel.sh@19 -- # read -r var val 00:07:35.001 17:54:23 -- accel/accel.sh@20 -- # val= 00:07:35.001 17:54:23 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.001 17:54:23 -- accel/accel.sh@19 -- # IFS=: 00:07:35.001 17:54:23 -- accel/accel.sh@19 -- # read -r var val 00:07:35.001 17:54:23 -- accel/accel.sh@20 -- # val= 00:07:35.001 17:54:23 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.001 17:54:23 -- accel/accel.sh@19 -- # IFS=: 00:07:35.001 17:54:23 -- accel/accel.sh@19 -- # read -r var val 00:07:35.001 17:54:23 -- accel/accel.sh@20 -- # val= 00:07:35.001 17:54:23 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.001 17:54:23 -- accel/accel.sh@19 -- # IFS=: 00:07:35.001 17:54:23 -- 
accel/accel.sh@19 -- # read -r var val 00:07:35.001 17:54:23 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:35.001 17:54:23 -- accel/accel.sh@27 -- # [[ -n compare ]] 00:07:35.001 17:54:23 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:35.001 00:07:35.001 real 0m1.419s 00:07:35.001 user 0m1.266s 00:07:35.001 sys 0m0.155s 00:07:35.001 17:54:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:35.001 17:54:23 -- common/autotest_common.sh@10 -- # set +x 00:07:35.001 ************************************ 00:07:35.001 END TEST accel_compare 00:07:35.001 ************************************ 00:07:35.001 17:54:23 -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:35.001 17:54:23 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:35.001 17:54:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:35.001 17:54:23 -- common/autotest_common.sh@10 -- # set +x 00:07:35.001 ************************************ 00:07:35.001 START TEST accel_xor 00:07:35.001 ************************************ 00:07:35.001 17:54:23 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y 00:07:35.001 17:54:23 -- accel/accel.sh@16 -- # local accel_opc 00:07:35.001 17:54:23 -- accel/accel.sh@17 -- # local accel_module 00:07:35.001 17:54:23 -- accel/accel.sh@19 -- # IFS=: 00:07:35.001 17:54:23 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:35.001 17:54:23 -- accel/accel.sh@19 -- # read -r var val 00:07:35.001 17:54:23 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:35.001 17:54:23 -- accel/accel.sh@12 -- # build_accel_config 00:07:35.001 17:54:23 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:35.001 17:54:23 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:35.001 17:54:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:35.001 17:54:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:35.001 17:54:23 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:35.001 17:54:23 -- accel/accel.sh@40 -- # local IFS=, 00:07:35.001 17:54:23 -- accel/accel.sh@41 -- # jq -r . 00:07:35.001 [2024-04-15 17:54:23.776547] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
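For reference, the accel_compare step that just completed drives the same accel_perf example binary used throughout this suite; the -c /dev/fd/62 argument in the logged command line is the JSON accel config that build_accel_config pipes in over file descriptor 62. A minimal standalone sketch, assuming a built SPDK tree and the default software accel module (so no JSON config is required):

    # compare workload, run for 1 second (-t 1), verify results (-y)
    ./build/examples/accel_perf -t 1 -w compare -y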
00:07:35.001 [2024-04-15 17:54:23.776622] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3210872 ] 00:07:35.001 EAL: No free 2048 kB hugepages reported on node 1 00:07:35.001 [2024-04-15 17:54:23.855239] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.001 [2024-04-15 17:54:23.950432] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.001 [2024-04-15 17:54:23.951120] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:07:35.260 17:54:24 -- accel/accel.sh@20 -- # val= 00:07:35.260 17:54:24 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.260 17:54:24 -- accel/accel.sh@19 -- # IFS=: 00:07:35.260 17:54:24 -- accel/accel.sh@19 -- # read -r var val 00:07:35.260 17:54:24 -- accel/accel.sh@20 -- # val= 00:07:35.260 17:54:24 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.260 17:54:24 -- accel/accel.sh@19 -- # IFS=: 00:07:35.260 17:54:24 -- accel/accel.sh@19 -- # read -r var val 00:07:35.260 17:54:24 -- accel/accel.sh@20 -- # val=0x1 00:07:35.260 17:54:24 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.260 17:54:24 -- accel/accel.sh@19 -- # IFS=: 00:07:35.260 17:54:24 -- accel/accel.sh@19 -- # read -r var val 00:07:35.260 17:54:24 -- accel/accel.sh@20 -- # val= 00:07:35.260 17:54:24 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.260 17:54:24 -- accel/accel.sh@19 -- # IFS=: 00:07:35.260 17:54:24 -- accel/accel.sh@19 -- # read -r var val 00:07:35.260 17:54:24 -- accel/accel.sh@20 -- # val= 00:07:35.260 17:54:24 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.260 17:54:24 -- accel/accel.sh@19 -- # IFS=: 00:07:35.260 17:54:24 -- accel/accel.sh@19 -- # read -r var val 00:07:35.260 17:54:24 -- accel/accel.sh@20 -- # val=xor 00:07:35.260 17:54:24 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.260 17:54:24 -- accel/accel.sh@23 -- # accel_opc=xor 00:07:35.260 17:54:24 -- accel/accel.sh@19 -- # IFS=: 00:07:35.260 17:54:24 -- accel/accel.sh@19 -- # read -r var val 00:07:35.260 17:54:24 -- accel/accel.sh@20 -- # val=2 00:07:35.260 17:54:24 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.260 17:54:24 -- accel/accel.sh@19 -- # IFS=: 00:07:35.260 17:54:24 -- accel/accel.sh@19 -- # read -r var val 00:07:35.260 17:54:24 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:35.260 17:54:24 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.260 17:54:24 -- accel/accel.sh@19 -- # IFS=: 00:07:35.260 17:54:24 -- accel/accel.sh@19 -- # read -r var val 00:07:35.260 17:54:24 -- accel/accel.sh@20 -- # val= 00:07:35.260 17:54:24 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.260 17:54:24 -- accel/accel.sh@19 -- # IFS=: 00:07:35.260 17:54:24 -- accel/accel.sh@19 -- # read -r var val 00:07:35.260 17:54:24 -- accel/accel.sh@20 -- # val=software 00:07:35.260 17:54:24 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.260 17:54:24 -- accel/accel.sh@22 -- # accel_module=software 00:07:35.260 17:54:24 -- accel/accel.sh@19 -- # IFS=: 00:07:35.260 17:54:24 -- accel/accel.sh@19 -- # read -r var val 00:07:35.260 17:54:24 -- accel/accel.sh@20 -- # val=32 00:07:35.260 17:54:24 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.260 17:54:24 -- accel/accel.sh@19 -- # IFS=: 00:07:35.260 17:54:24 -- accel/accel.sh@19 -- # read -r var val 00:07:35.260 17:54:24 -- accel/accel.sh@20 -- # val=32 00:07:35.260 17:54:24 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.260 17:54:24 -- 
accel/accel.sh@19 -- # IFS=: 00:07:35.260 17:54:24 -- accel/accel.sh@19 -- # read -r var val 00:07:35.260 17:54:24 -- accel/accel.sh@20 -- # val=1 00:07:35.260 17:54:24 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.260 17:54:24 -- accel/accel.sh@19 -- # IFS=: 00:07:35.260 17:54:24 -- accel/accel.sh@19 -- # read -r var val 00:07:35.260 17:54:24 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:35.260 17:54:24 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.260 17:54:24 -- accel/accel.sh@19 -- # IFS=: 00:07:35.260 17:54:24 -- accel/accel.sh@19 -- # read -r var val 00:07:35.260 17:54:24 -- accel/accel.sh@20 -- # val=Yes 00:07:35.260 17:54:24 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.260 17:54:24 -- accel/accel.sh@19 -- # IFS=: 00:07:35.260 17:54:24 -- accel/accel.sh@19 -- # read -r var val 00:07:35.260 17:54:24 -- accel/accel.sh@20 -- # val= 00:07:35.260 17:54:24 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.260 17:54:24 -- accel/accel.sh@19 -- # IFS=: 00:07:35.260 17:54:24 -- accel/accel.sh@19 -- # read -r var val 00:07:35.260 17:54:24 -- accel/accel.sh@20 -- # val= 00:07:35.260 17:54:24 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.260 17:54:24 -- accel/accel.sh@19 -- # IFS=: 00:07:35.260 17:54:24 -- accel/accel.sh@19 -- # read -r var val 00:07:36.634 17:54:25 -- accel/accel.sh@20 -- # val= 00:07:36.634 17:54:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.634 17:54:25 -- accel/accel.sh@19 -- # IFS=: 00:07:36.634 17:54:25 -- accel/accel.sh@19 -- # read -r var val 00:07:36.634 17:54:25 -- accel/accel.sh@20 -- # val= 00:07:36.634 17:54:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.634 17:54:25 -- accel/accel.sh@19 -- # IFS=: 00:07:36.634 17:54:25 -- accel/accel.sh@19 -- # read -r var val 00:07:36.634 17:54:25 -- accel/accel.sh@20 -- # val= 00:07:36.634 17:54:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.634 17:54:25 -- accel/accel.sh@19 -- # IFS=: 00:07:36.634 17:54:25 -- accel/accel.sh@19 -- # read -r var val 00:07:36.634 17:54:25 -- accel/accel.sh@20 -- # val= 00:07:36.634 17:54:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.634 17:54:25 -- accel/accel.sh@19 -- # IFS=: 00:07:36.634 17:54:25 -- accel/accel.sh@19 -- # read -r var val 00:07:36.634 17:54:25 -- accel/accel.sh@20 -- # val= 00:07:36.634 17:54:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.634 17:54:25 -- accel/accel.sh@19 -- # IFS=: 00:07:36.634 17:54:25 -- accel/accel.sh@19 -- # read -r var val 00:07:36.634 17:54:25 -- accel/accel.sh@20 -- # val= 00:07:36.634 17:54:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.634 17:54:25 -- accel/accel.sh@19 -- # IFS=: 00:07:36.634 17:54:25 -- accel/accel.sh@19 -- # read -r var val 00:07:36.634 17:54:25 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:36.634 17:54:25 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:36.634 17:54:25 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:36.634 00:07:36.634 real 0m1.435s 00:07:36.635 user 0m1.272s 00:07:36.635 sys 0m0.164s 00:07:36.635 17:54:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:36.635 17:54:25 -- common/autotest_common.sh@10 -- # set +x 00:07:36.635 ************************************ 00:07:36.635 END TEST accel_xor 00:07:36.635 ************************************ 00:07:36.635 17:54:25 -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:36.635 17:54:25 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:36.635 17:54:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:36.635 17:54:25 -- 
common/autotest_common.sh@10 -- # set +x 00:07:36.635 ************************************ 00:07:36.635 START TEST accel_xor 00:07:36.635 ************************************ 00:07:36.635 17:54:25 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y -x 3 00:07:36.635 17:54:25 -- accel/accel.sh@16 -- # local accel_opc 00:07:36.635 17:54:25 -- accel/accel.sh@17 -- # local accel_module 00:07:36.635 17:54:25 -- accel/accel.sh@19 -- # IFS=: 00:07:36.635 17:54:25 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:36.635 17:54:25 -- accel/accel.sh@19 -- # read -r var val 00:07:36.635 17:54:25 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:36.635 17:54:25 -- accel/accel.sh@12 -- # build_accel_config 00:07:36.635 17:54:25 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:36.635 17:54:25 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:36.635 17:54:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:36.635 17:54:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:36.635 17:54:25 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:36.635 17:54:25 -- accel/accel.sh@40 -- # local IFS=, 00:07:36.635 17:54:25 -- accel/accel.sh@41 -- # jq -r . 00:07:36.635 [2024-04-15 17:54:25.363940] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:07:36.635 [2024-04-15 17:54:25.364003] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3211037 ] 00:07:36.635 EAL: No free 2048 kB hugepages reported on node 1 00:07:36.635 [2024-04-15 17:54:25.434057] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.635 [2024-04-15 17:54:25.526435] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.635 [2024-04-15 17:54:25.527171] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:07:36.635 17:54:25 -- accel/accel.sh@20 -- # val= 00:07:36.917 17:54:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.917 17:54:25 -- accel/accel.sh@19 -- # IFS=: 00:07:36.917 17:54:25 -- accel/accel.sh@19 -- # read -r var val 00:07:36.917 17:54:25 -- accel/accel.sh@20 -- # val= 00:07:36.917 17:54:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.917 17:54:25 -- accel/accel.sh@19 -- # IFS=: 00:07:36.917 17:54:25 -- accel/accel.sh@19 -- # read -r var val 00:07:36.917 17:54:25 -- accel/accel.sh@20 -- # val=0x1 00:07:36.917 17:54:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.917 17:54:25 -- accel/accel.sh@19 -- # IFS=: 00:07:36.917 17:54:25 -- accel/accel.sh@19 -- # read -r var val 00:07:36.917 17:54:25 -- accel/accel.sh@20 -- # val= 00:07:36.917 17:54:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.917 17:54:25 -- accel/accel.sh@19 -- # IFS=: 00:07:36.917 17:54:25 -- accel/accel.sh@19 -- # read -r var val 00:07:36.917 17:54:25 -- accel/accel.sh@20 -- # val= 00:07:36.917 17:54:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.917 17:54:25 -- accel/accel.sh@19 -- # IFS=: 00:07:36.917 17:54:25 -- accel/accel.sh@19 -- # read -r var val 00:07:36.917 17:54:25 -- accel/accel.sh@20 -- # val=xor 00:07:36.917 17:54:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.917 17:54:25 -- accel/accel.sh@23 -- # accel_opc=xor 00:07:36.917 17:54:25 -- accel/accel.sh@19 -- # IFS=: 00:07:36.917 17:54:25 -- accel/accel.sh@19 -- # read -r var val 00:07:36.917 17:54:25 -- accel/accel.sh@20 -- # 
val=3 00:07:36.917 17:54:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.917 17:54:25 -- accel/accel.sh@19 -- # IFS=: 00:07:36.917 17:54:25 -- accel/accel.sh@19 -- # read -r var val 00:07:36.917 17:54:25 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:36.917 17:54:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.917 17:54:25 -- accel/accel.sh@19 -- # IFS=: 00:07:36.917 17:54:25 -- accel/accel.sh@19 -- # read -r var val 00:07:36.917 17:54:25 -- accel/accel.sh@20 -- # val= 00:07:36.917 17:54:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.917 17:54:25 -- accel/accel.sh@19 -- # IFS=: 00:07:36.917 17:54:25 -- accel/accel.sh@19 -- # read -r var val 00:07:36.917 17:54:25 -- accel/accel.sh@20 -- # val=software 00:07:36.917 17:54:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.917 17:54:25 -- accel/accel.sh@22 -- # accel_module=software 00:07:36.917 17:54:25 -- accel/accel.sh@19 -- # IFS=: 00:07:36.917 17:54:25 -- accel/accel.sh@19 -- # read -r var val 00:07:36.917 17:54:25 -- accel/accel.sh@20 -- # val=32 00:07:36.917 17:54:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.917 17:54:25 -- accel/accel.sh@19 -- # IFS=: 00:07:36.917 17:54:25 -- accel/accel.sh@19 -- # read -r var val 00:07:36.917 17:54:25 -- accel/accel.sh@20 -- # val=32 00:07:36.917 17:54:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.917 17:54:25 -- accel/accel.sh@19 -- # IFS=: 00:07:36.917 17:54:25 -- accel/accel.sh@19 -- # read -r var val 00:07:36.917 17:54:25 -- accel/accel.sh@20 -- # val=1 00:07:36.917 17:54:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.917 17:54:25 -- accel/accel.sh@19 -- # IFS=: 00:07:36.917 17:54:25 -- accel/accel.sh@19 -- # read -r var val 00:07:36.917 17:54:25 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:36.917 17:54:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.917 17:54:25 -- accel/accel.sh@19 -- # IFS=: 00:07:36.917 17:54:25 -- accel/accel.sh@19 -- # read -r var val 00:07:36.917 17:54:25 -- accel/accel.sh@20 -- # val=Yes 00:07:36.917 17:54:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.917 17:54:25 -- accel/accel.sh@19 -- # IFS=: 00:07:36.917 17:54:25 -- accel/accel.sh@19 -- # read -r var val 00:07:36.917 17:54:25 -- accel/accel.sh@20 -- # val= 00:07:36.917 17:54:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.917 17:54:25 -- accel/accel.sh@19 -- # IFS=: 00:07:36.917 17:54:25 -- accel/accel.sh@19 -- # read -r var val 00:07:36.917 17:54:25 -- accel/accel.sh@20 -- # val= 00:07:36.917 17:54:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.917 17:54:25 -- accel/accel.sh@19 -- # IFS=: 00:07:36.917 17:54:25 -- accel/accel.sh@19 -- # read -r var val 00:07:37.852 17:54:26 -- accel/accel.sh@20 -- # val= 00:07:37.852 17:54:26 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.852 17:54:26 -- accel/accel.sh@19 -- # IFS=: 00:07:37.852 17:54:26 -- accel/accel.sh@19 -- # read -r var val 00:07:37.852 17:54:26 -- accel/accel.sh@20 -- # val= 00:07:37.852 17:54:26 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.852 17:54:26 -- accel/accel.sh@19 -- # IFS=: 00:07:37.852 17:54:26 -- accel/accel.sh@19 -- # read -r var val 00:07:37.852 17:54:26 -- accel/accel.sh@20 -- # val= 00:07:37.852 17:54:26 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.852 17:54:26 -- accel/accel.sh@19 -- # IFS=: 00:07:37.852 17:54:26 -- accel/accel.sh@19 -- # read -r var val 00:07:37.852 17:54:26 -- accel/accel.sh@20 -- # val= 00:07:37.852 17:54:26 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.852 17:54:26 -- accel/accel.sh@19 -- # IFS=: 00:07:37.852 17:54:26 -- accel/accel.sh@19 -- 
# read -r var val 00:07:37.852 17:54:26 -- accel/accel.sh@20 -- # val= 00:07:37.852 17:54:26 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.852 17:54:26 -- accel/accel.sh@19 -- # IFS=: 00:07:37.852 17:54:26 -- accel/accel.sh@19 -- # read -r var val 00:07:37.852 17:54:26 -- accel/accel.sh@20 -- # val= 00:07:37.852 17:54:26 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.852 17:54:26 -- accel/accel.sh@19 -- # IFS=: 00:07:37.852 17:54:26 -- accel/accel.sh@19 -- # read -r var val 00:07:37.852 17:54:26 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:37.852 17:54:26 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:37.852 17:54:26 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:37.852 00:07:37.852 real 0m1.420s 00:07:37.852 user 0m1.274s 00:07:37.852 sys 0m0.148s 00:07:37.852 17:54:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:37.852 17:54:26 -- common/autotest_common.sh@10 -- # set +x 00:07:37.852 ************************************ 00:07:37.852 END TEST accel_xor 00:07:37.852 ************************************ 00:07:37.852 17:54:26 -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:37.852 17:54:26 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:07:37.852 17:54:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:37.852 17:54:26 -- common/autotest_common.sh@10 -- # set +x 00:07:38.110 ************************************ 00:07:38.110 START TEST accel_dif_verify 00:07:38.110 ************************************ 00:07:38.110 17:54:26 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_verify 00:07:38.110 17:54:26 -- accel/accel.sh@16 -- # local accel_opc 00:07:38.110 17:54:26 -- accel/accel.sh@17 -- # local accel_module 00:07:38.110 17:54:26 -- accel/accel.sh@19 -- # IFS=: 00:07:38.110 17:54:26 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:38.110 17:54:26 -- accel/accel.sh@19 -- # read -r var val 00:07:38.110 17:54:26 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:38.110 17:54:26 -- accel/accel.sh@12 -- # build_accel_config 00:07:38.110 17:54:26 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:38.110 17:54:26 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:38.110 17:54:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:38.110 17:54:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:38.110 17:54:26 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:38.110 17:54:26 -- accel/accel.sh@40 -- # local IFS=, 00:07:38.110 17:54:26 -- accel/accel.sh@41 -- # jq -r . 00:07:38.110 [2024-04-15 17:54:26.905348] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
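The two accel_xor runs that precede this point differ only in source-buffer count: the first uses the default of two sources, while the second passes -x 3 (visible as val=3 in the xtrace) to XOR three source buffers into one destination. A sketch of the pair, under the same assumptions as the compare example above:

    # default: XOR two source buffers into the destination
    ./build/examples/accel_perf -t 1 -w xor -y
    # same workload spread across three source buffers
    ./build/examples/accel_perf -t 1 -w xor -y -x 3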
00:07:38.110 [2024-04-15 17:54:26.905412] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3211316 ] 00:07:38.111 EAL: No free 2048 kB hugepages reported on node 1 00:07:38.111 [2024-04-15 17:54:26.971770] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.369 [2024-04-15 17:54:27.066147] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.369 [2024-04-15 17:54:27.066821] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:07:38.369 17:54:27 -- accel/accel.sh@20 -- # val= 00:07:38.369 17:54:27 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.369 17:54:27 -- accel/accel.sh@19 -- # IFS=: 00:07:38.369 17:54:27 -- accel/accel.sh@19 -- # read -r var val 00:07:38.369 17:54:27 -- accel/accel.sh@20 -- # val= 00:07:38.369 17:54:27 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.369 17:54:27 -- accel/accel.sh@19 -- # IFS=: 00:07:38.369 17:54:27 -- accel/accel.sh@19 -- # read -r var val 00:07:38.369 17:54:27 -- accel/accel.sh@20 -- # val=0x1 00:07:38.369 17:54:27 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.369 17:54:27 -- accel/accel.sh@19 -- # IFS=: 00:07:38.369 17:54:27 -- accel/accel.sh@19 -- # read -r var val 00:07:38.369 17:54:27 -- accel/accel.sh@20 -- # val= 00:07:38.369 17:54:27 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.369 17:54:27 -- accel/accel.sh@19 -- # IFS=: 00:07:38.369 17:54:27 -- accel/accel.sh@19 -- # read -r var val 00:07:38.369 17:54:27 -- accel/accel.sh@20 -- # val= 00:07:38.369 17:54:27 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.369 17:54:27 -- accel/accel.sh@19 -- # IFS=: 00:07:38.369 17:54:27 -- accel/accel.sh@19 -- # read -r var val 00:07:38.369 17:54:27 -- accel/accel.sh@20 -- # val=dif_verify 00:07:38.369 17:54:27 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.369 17:54:27 -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:07:38.369 17:54:27 -- accel/accel.sh@19 -- # IFS=: 00:07:38.369 17:54:27 -- accel/accel.sh@19 -- # read -r var val 00:07:38.369 17:54:27 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:38.369 17:54:27 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.369 17:54:27 -- accel/accel.sh@19 -- # IFS=: 00:07:38.369 17:54:27 -- accel/accel.sh@19 -- # read -r var val 00:07:38.369 17:54:27 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:38.369 17:54:27 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.369 17:54:27 -- accel/accel.sh@19 -- # IFS=: 00:07:38.369 17:54:27 -- accel/accel.sh@19 -- # read -r var val 00:07:38.369 17:54:27 -- accel/accel.sh@20 -- # val='512 bytes' 00:07:38.369 17:54:27 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.369 17:54:27 -- accel/accel.sh@19 -- # IFS=: 00:07:38.369 17:54:27 -- accel/accel.sh@19 -- # read -r var val 00:07:38.369 17:54:27 -- accel/accel.sh@20 -- # val='8 bytes' 00:07:38.369 17:54:27 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.369 17:54:27 -- accel/accel.sh@19 -- # IFS=: 00:07:38.369 17:54:27 -- accel/accel.sh@19 -- # read -r var val 00:07:38.369 17:54:27 -- accel/accel.sh@20 -- # val= 00:07:38.369 17:54:27 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.369 17:54:27 -- accel/accel.sh@19 -- # IFS=: 00:07:38.369 17:54:27 -- accel/accel.sh@19 -- # read -r var val 00:07:38.369 17:54:27 -- accel/accel.sh@20 -- # val=software 00:07:38.369 17:54:27 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.369 17:54:27 -- accel/accel.sh@22 -- # 
accel_module=software 00:07:38.369 17:54:27 -- accel/accel.sh@19 -- # IFS=: 00:07:38.369 17:54:27 -- accel/accel.sh@19 -- # read -r var val 00:07:38.369 17:54:27 -- accel/accel.sh@20 -- # val=32 00:07:38.369 17:54:27 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.369 17:54:27 -- accel/accel.sh@19 -- # IFS=: 00:07:38.369 17:54:27 -- accel/accel.sh@19 -- # read -r var val 00:07:38.369 17:54:27 -- accel/accel.sh@20 -- # val=32 00:07:38.369 17:54:27 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.369 17:54:27 -- accel/accel.sh@19 -- # IFS=: 00:07:38.369 17:54:27 -- accel/accel.sh@19 -- # read -r var val 00:07:38.369 17:54:27 -- accel/accel.sh@20 -- # val=1 00:07:38.369 17:54:27 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.369 17:54:27 -- accel/accel.sh@19 -- # IFS=: 00:07:38.369 17:54:27 -- accel/accel.sh@19 -- # read -r var val 00:07:38.369 17:54:27 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:38.369 17:54:27 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.369 17:54:27 -- accel/accel.sh@19 -- # IFS=: 00:07:38.369 17:54:27 -- accel/accel.sh@19 -- # read -r var val 00:07:38.369 17:54:27 -- accel/accel.sh@20 -- # val=No 00:07:38.369 17:54:27 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.369 17:54:27 -- accel/accel.sh@19 -- # IFS=: 00:07:38.370 17:54:27 -- accel/accel.sh@19 -- # read -r var val 00:07:38.370 17:54:27 -- accel/accel.sh@20 -- # val= 00:07:38.370 17:54:27 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.370 17:54:27 -- accel/accel.sh@19 -- # IFS=: 00:07:38.370 17:54:27 -- accel/accel.sh@19 -- # read -r var val 00:07:38.370 17:54:27 -- accel/accel.sh@20 -- # val= 00:07:38.370 17:54:27 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.370 17:54:27 -- accel/accel.sh@19 -- # IFS=: 00:07:38.370 17:54:27 -- accel/accel.sh@19 -- # read -r var val 00:07:39.743 17:54:28 -- accel/accel.sh@20 -- # val= 00:07:39.743 17:54:28 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.743 17:54:28 -- accel/accel.sh@19 -- # IFS=: 00:07:39.744 17:54:28 -- accel/accel.sh@19 -- # read -r var val 00:07:39.744 17:54:28 -- accel/accel.sh@20 -- # val= 00:07:39.744 17:54:28 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.744 17:54:28 -- accel/accel.sh@19 -- # IFS=: 00:07:39.744 17:54:28 -- accel/accel.sh@19 -- # read -r var val 00:07:39.744 17:54:28 -- accel/accel.sh@20 -- # val= 00:07:39.744 17:54:28 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.744 17:54:28 -- accel/accel.sh@19 -- # IFS=: 00:07:39.744 17:54:28 -- accel/accel.sh@19 -- # read -r var val 00:07:39.744 17:54:28 -- accel/accel.sh@20 -- # val= 00:07:39.744 17:54:28 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.744 17:54:28 -- accel/accel.sh@19 -- # IFS=: 00:07:39.744 17:54:28 -- accel/accel.sh@19 -- # read -r var val 00:07:39.744 17:54:28 -- accel/accel.sh@20 -- # val= 00:07:39.744 17:54:28 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.744 17:54:28 -- accel/accel.sh@19 -- # IFS=: 00:07:39.744 17:54:28 -- accel/accel.sh@19 -- # read -r var val 00:07:39.744 17:54:28 -- accel/accel.sh@20 -- # val= 00:07:39.744 17:54:28 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.744 17:54:28 -- accel/accel.sh@19 -- # IFS=: 00:07:39.744 17:54:28 -- accel/accel.sh@19 -- # read -r var val 00:07:39.744 17:54:28 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:39.744 17:54:28 -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:07:39.744 17:54:28 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:39.744 00:07:39.744 real 0m1.420s 00:07:39.744 user 0m1.278s 00:07:39.744 sys 0m0.146s 00:07:39.744 17:54:28 -- 
common/autotest_common.sh@1112 -- # xtrace_disable 00:07:39.744 17:54:28 -- common/autotest_common.sh@10 -- # set +x 00:07:39.744 ************************************ 00:07:39.744 END TEST accel_dif_verify 00:07:39.744 ************************************ 00:07:39.744 17:54:28 -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:39.744 17:54:28 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:07:39.744 17:54:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:39.744 17:54:28 -- common/autotest_common.sh@10 -- # set +x 00:07:39.744 ************************************ 00:07:39.744 START TEST accel_dif_generate 00:07:39.744 ************************************ 00:07:39.744 17:54:28 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate 00:07:39.744 17:54:28 -- accel/accel.sh@16 -- # local accel_opc 00:07:39.744 17:54:28 -- accel/accel.sh@17 -- # local accel_module 00:07:39.744 17:54:28 -- accel/accel.sh@19 -- # IFS=: 00:07:39.744 17:54:28 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:39.744 17:54:28 -- accel/accel.sh@19 -- # read -r var val 00:07:39.744 17:54:28 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:39.744 17:54:28 -- accel/accel.sh@12 -- # build_accel_config 00:07:39.744 17:54:28 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:39.744 17:54:28 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:39.744 17:54:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:39.744 17:54:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:39.744 17:54:28 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:39.744 17:54:28 -- accel/accel.sh@40 -- # local IFS=, 00:07:39.744 17:54:28 -- accel/accel.sh@41 -- # jq -r . 00:07:39.744 [2024-04-15 17:54:28.457850] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
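Each test in this suite, including the accel_dif_verify run that just ended, finishes with the same three assertions from accel.sh@27: a module name was captured, an opcode was captured, and the module that executed the op is the expected software backend. A hedged bash sketch of that check (the variable names come from the xtrace; the exact code in accel.sh may differ):

    # fail unless accel_perf reported both a module and an opcode,
    # and the op ran on the software fallback rather than a HW module
    [[ -n $accel_module ]] && [[ -n $accel_opc ]] && [[ $accel_module == software ]]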
00:07:39.744 [2024-04-15 17:54:28.457928] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3211482 ] 00:07:39.744 EAL: No free 2048 kB hugepages reported on node 1 00:07:39.744 [2024-04-15 17:54:28.530080] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.744 [2024-04-15 17:54:28.624393] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.744 [2024-04-15 17:54:28.625086] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:07:39.744 17:54:28 -- accel/accel.sh@20 -- # val= 00:07:39.744 17:54:28 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.744 17:54:28 -- accel/accel.sh@19 -- # IFS=: 00:07:39.744 17:54:28 -- accel/accel.sh@19 -- # read -r var val 00:07:39.744 17:54:28 -- accel/accel.sh@20 -- # val= 00:07:39.744 17:54:28 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.744 17:54:28 -- accel/accel.sh@19 -- # IFS=: 00:07:39.744 17:54:28 -- accel/accel.sh@19 -- # read -r var val 00:07:39.744 17:54:28 -- accel/accel.sh@20 -- # val=0x1 00:07:39.744 17:54:28 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.744 17:54:28 -- accel/accel.sh@19 -- # IFS=: 00:07:39.744 17:54:28 -- accel/accel.sh@19 -- # read -r var val 00:07:39.744 17:54:28 -- accel/accel.sh@20 -- # val= 00:07:39.744 17:54:28 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.744 17:54:28 -- accel/accel.sh@19 -- # IFS=: 00:07:39.744 17:54:28 -- accel/accel.sh@19 -- # read -r var val 00:07:39.744 17:54:28 -- accel/accel.sh@20 -- # val= 00:07:39.744 17:54:28 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.744 17:54:28 -- accel/accel.sh@19 -- # IFS=: 00:07:39.744 17:54:28 -- accel/accel.sh@19 -- # read -r var val 00:07:39.744 17:54:28 -- accel/accel.sh@20 -- # val=dif_generate 00:07:39.744 17:54:28 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.744 17:54:28 -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:07:39.744 17:54:28 -- accel/accel.sh@19 -- # IFS=: 00:07:39.744 17:54:28 -- accel/accel.sh@19 -- # read -r var val 00:07:39.744 17:54:28 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:39.744 17:54:28 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.744 17:54:28 -- accel/accel.sh@19 -- # IFS=: 00:07:39.744 17:54:28 -- accel/accel.sh@19 -- # read -r var val 00:07:39.744 17:54:28 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:39.744 17:54:28 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.744 17:54:28 -- accel/accel.sh@19 -- # IFS=: 00:07:39.744 17:54:28 -- accel/accel.sh@19 -- # read -r var val 00:07:39.744 17:54:28 -- accel/accel.sh@20 -- # val='512 bytes' 00:07:39.744 17:54:28 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.744 17:54:28 -- accel/accel.sh@19 -- # IFS=: 00:07:39.744 17:54:28 -- accel/accel.sh@19 -- # read -r var val 00:07:39.744 17:54:28 -- accel/accel.sh@20 -- # val='8 bytes' 00:07:39.744 17:54:28 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.744 17:54:28 -- accel/accel.sh@19 -- # IFS=: 00:07:39.744 17:54:28 -- accel/accel.sh@19 -- # read -r var val 00:07:39.744 17:54:28 -- accel/accel.sh@20 -- # val= 00:07:39.744 17:54:28 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.744 17:54:28 -- accel/accel.sh@19 -- # IFS=: 00:07:39.744 17:54:28 -- accel/accel.sh@19 -- # read -r var val 00:07:39.744 17:54:28 -- accel/accel.sh@20 -- # val=software 00:07:39.744 17:54:28 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.744 17:54:28 -- accel/accel.sh@22 -- # 
accel_module=software 00:07:39.744 17:54:28 -- accel/accel.sh@19 -- # IFS=: 00:07:39.744 17:54:28 -- accel/accel.sh@19 -- # read -r var val 00:07:39.744 17:54:28 -- accel/accel.sh@20 -- # val=32 00:07:39.744 17:54:28 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.744 17:54:28 -- accel/accel.sh@19 -- # IFS=: 00:07:39.744 17:54:28 -- accel/accel.sh@19 -- # read -r var val 00:07:39.744 17:54:28 -- accel/accel.sh@20 -- # val=32 00:07:39.744 17:54:28 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.744 17:54:28 -- accel/accel.sh@19 -- # IFS=: 00:07:39.744 17:54:28 -- accel/accel.sh@19 -- # read -r var val 00:07:39.744 17:54:28 -- accel/accel.sh@20 -- # val=1 00:07:39.744 17:54:28 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.744 17:54:28 -- accel/accel.sh@19 -- # IFS=: 00:07:39.744 17:54:28 -- accel/accel.sh@19 -- # read -r var val 00:07:39.744 17:54:28 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:39.744 17:54:28 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.744 17:54:28 -- accel/accel.sh@19 -- # IFS=: 00:07:39.744 17:54:28 -- accel/accel.sh@19 -- # read -r var val 00:07:39.744 17:54:28 -- accel/accel.sh@20 -- # val=No 00:07:39.744 17:54:28 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.744 17:54:28 -- accel/accel.sh@19 -- # IFS=: 00:07:39.744 17:54:28 -- accel/accel.sh@19 -- # read -r var val 00:07:39.744 17:54:28 -- accel/accel.sh@20 -- # val= 00:07:39.744 17:54:28 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.744 17:54:28 -- accel/accel.sh@19 -- # IFS=: 00:07:39.744 17:54:28 -- accel/accel.sh@19 -- # read -r var val 00:07:39.744 17:54:28 -- accel/accel.sh@20 -- # val= 00:07:39.744 17:54:28 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.744 17:54:28 -- accel/accel.sh@19 -- # IFS=: 00:07:39.744 17:54:28 -- accel/accel.sh@19 -- # read -r var val 00:07:41.118 17:54:29 -- accel/accel.sh@20 -- # val= 00:07:41.118 17:54:29 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.118 17:54:29 -- accel/accel.sh@19 -- # IFS=: 00:07:41.118 17:54:29 -- accel/accel.sh@19 -- # read -r var val 00:07:41.118 17:54:29 -- accel/accel.sh@20 -- # val= 00:07:41.118 17:54:29 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.118 17:54:29 -- accel/accel.sh@19 -- # IFS=: 00:07:41.118 17:54:29 -- accel/accel.sh@19 -- # read -r var val 00:07:41.118 17:54:29 -- accel/accel.sh@20 -- # val= 00:07:41.118 17:54:29 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.118 17:54:29 -- accel/accel.sh@19 -- # IFS=: 00:07:41.118 17:54:29 -- accel/accel.sh@19 -- # read -r var val 00:07:41.118 17:54:29 -- accel/accel.sh@20 -- # val= 00:07:41.118 17:54:29 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.118 17:54:29 -- accel/accel.sh@19 -- # IFS=: 00:07:41.118 17:54:29 -- accel/accel.sh@19 -- # read -r var val 00:07:41.118 17:54:29 -- accel/accel.sh@20 -- # val= 00:07:41.118 17:54:29 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.118 17:54:29 -- accel/accel.sh@19 -- # IFS=: 00:07:41.118 17:54:29 -- accel/accel.sh@19 -- # read -r var val 00:07:41.118 17:54:29 -- accel/accel.sh@20 -- # val= 00:07:41.118 17:54:29 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.118 17:54:29 -- accel/accel.sh@19 -- # IFS=: 00:07:41.118 17:54:29 -- accel/accel.sh@19 -- # read -r var val 00:07:41.118 17:54:29 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:41.118 17:54:29 -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:07:41.118 17:54:29 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:41.118 00:07:41.118 real 0m1.429s 00:07:41.118 user 0m1.276s 00:07:41.118 sys 0m0.157s 00:07:41.118 17:54:29 -- 
common/autotest_common.sh@1112 -- # xtrace_disable 00:07:41.118 17:54:29 -- common/autotest_common.sh@10 -- # set +x 00:07:41.118 ************************************ 00:07:41.118 END TEST accel_dif_generate 00:07:41.118 ************************************ 00:07:41.118 17:54:29 -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:41.118 17:54:29 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:07:41.118 17:54:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:41.118 17:54:29 -- common/autotest_common.sh@10 -- # set +x 00:07:41.118 ************************************ 00:07:41.118 START TEST accel_dif_generate_copy 00:07:41.118 ************************************ 00:07:41.118 17:54:30 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate_copy 00:07:41.118 17:54:30 -- accel/accel.sh@16 -- # local accel_opc 00:07:41.118 17:54:30 -- accel/accel.sh@17 -- # local accel_module 00:07:41.118 17:54:30 -- accel/accel.sh@19 -- # IFS=: 00:07:41.118 17:54:30 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:41.118 17:54:30 -- accel/accel.sh@19 -- # read -r var val 00:07:41.118 17:54:30 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:41.118 17:54:30 -- accel/accel.sh@12 -- # build_accel_config 00:07:41.118 17:54:30 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:41.118 17:54:30 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:41.118 17:54:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:41.118 17:54:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:41.118 17:54:30 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:41.118 17:54:30 -- accel/accel.sh@40 -- # local IFS=, 00:07:41.118 17:54:30 -- accel/accel.sh@41 -- # jq -r . 00:07:41.118 [2024-04-15 17:54:30.028286] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
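The bulk of every test's output here is xtrace from one small parsing loop in accel.sh (script lines 19-23 in the log markers), which splits each reported setting on ':' and records the opcode and module under test. A hedged reconstruction with illustrative input; the two case arms shown are the ones whose effects appear in the xtrace (accel_opc=..., accel_module=...), and the real script may match more keys:

    # parse "key:value" lines the way accel.sh's loop does
    printf 'opcode:dif_generate_copy\nmodule:software\n' |
    while IFS=: read -r var val; do
        case "$var" in
            *opcode*) echo "accel_opc=$val" ;;
            *module*) echo "accel_module=$val" ;;
        esac
    done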
00:07:41.118 [2024-04-15 17:54:30.028360] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3211649 ] 00:07:41.118 EAL: No free 2048 kB hugepages reported on node 1 00:07:41.395 [2024-04-15 17:54:30.097905] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.395 [2024-04-15 17:54:30.194418] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.395 [2024-04-15 17:54:30.195113] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:07:41.395 17:54:30 -- accel/accel.sh@20 -- # val= 00:07:41.395 17:54:30 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.395 17:54:30 -- accel/accel.sh@19 -- # IFS=: 00:07:41.395 17:54:30 -- accel/accel.sh@19 -- # read -r var val 00:07:41.395 17:54:30 -- accel/accel.sh@20 -- # val= 00:07:41.395 17:54:30 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.395 17:54:30 -- accel/accel.sh@19 -- # IFS=: 00:07:41.395 17:54:30 -- accel/accel.sh@19 -- # read -r var val 00:07:41.395 17:54:30 -- accel/accel.sh@20 -- # val=0x1 00:07:41.395 17:54:30 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.395 17:54:30 -- accel/accel.sh@19 -- # IFS=: 00:07:41.395 17:54:30 -- accel/accel.sh@19 -- # read -r var val 00:07:41.395 17:54:30 -- accel/accel.sh@20 -- # val= 00:07:41.395 17:54:30 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.395 17:54:30 -- accel/accel.sh@19 -- # IFS=: 00:07:41.395 17:54:30 -- accel/accel.sh@19 -- # read -r var val 00:07:41.395 17:54:30 -- accel/accel.sh@20 -- # val= 00:07:41.395 17:54:30 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.395 17:54:30 -- accel/accel.sh@19 -- # IFS=: 00:07:41.395 17:54:30 -- accel/accel.sh@19 -- # read -r var val 00:07:41.395 17:54:30 -- accel/accel.sh@20 -- # val=dif_generate_copy 00:07:41.395 17:54:30 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.395 17:54:30 -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:07:41.395 17:54:30 -- accel/accel.sh@19 -- # IFS=: 00:07:41.395 17:54:30 -- accel/accel.sh@19 -- # read -r var val 00:07:41.395 17:54:30 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:41.395 17:54:30 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.395 17:54:30 -- accel/accel.sh@19 -- # IFS=: 00:07:41.395 17:54:30 -- accel/accel.sh@19 -- # read -r var val 00:07:41.395 17:54:30 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:41.395 17:54:30 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.395 17:54:30 -- accel/accel.sh@19 -- # IFS=: 00:07:41.395 17:54:30 -- accel/accel.sh@19 -- # read -r var val 00:07:41.395 17:54:30 -- accel/accel.sh@20 -- # val= 00:07:41.395 17:54:30 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.395 17:54:30 -- accel/accel.sh@19 -- # IFS=: 00:07:41.395 17:54:30 -- accel/accel.sh@19 -- # read -r var val 00:07:41.395 17:54:30 -- accel/accel.sh@20 -- # val=software 00:07:41.395 17:54:30 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.395 17:54:30 -- accel/accel.sh@22 -- # accel_module=software 00:07:41.395 17:54:30 -- accel/accel.sh@19 -- # IFS=: 00:07:41.395 17:54:30 -- accel/accel.sh@19 -- # read -r var val 00:07:41.395 17:54:30 -- accel/accel.sh@20 -- # val=32 00:07:41.395 17:54:30 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.395 17:54:30 -- accel/accel.sh@19 -- # IFS=: 00:07:41.395 17:54:30 -- accel/accel.sh@19 -- # read -r var val 00:07:41.395 17:54:30 -- accel/accel.sh@20 -- # val=32 00:07:41.395 17:54:30 -- accel/accel.sh@21 -- # case "$var" 
in 00:07:41.395 17:54:30 -- accel/accel.sh@19 -- # IFS=: 00:07:41.395 17:54:30 -- accel/accel.sh@19 -- # read -r var val 00:07:41.395 17:54:30 -- accel/accel.sh@20 -- # val=1 00:07:41.395 17:54:30 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.395 17:54:30 -- accel/accel.sh@19 -- # IFS=: 00:07:41.395 17:54:30 -- accel/accel.sh@19 -- # read -r var val 00:07:41.395 17:54:30 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:41.395 17:54:30 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.395 17:54:30 -- accel/accel.sh@19 -- # IFS=: 00:07:41.395 17:54:30 -- accel/accel.sh@19 -- # read -r var val 00:07:41.395 17:54:30 -- accel/accel.sh@20 -- # val=No 00:07:41.396 17:54:30 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.396 17:54:30 -- accel/accel.sh@19 -- # IFS=: 00:07:41.396 17:54:30 -- accel/accel.sh@19 -- # read -r var val 00:07:41.396 17:54:30 -- accel/accel.sh@20 -- # val= 00:07:41.396 17:54:30 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.396 17:54:30 -- accel/accel.sh@19 -- # IFS=: 00:07:41.396 17:54:30 -- accel/accel.sh@19 -- # read -r var val 00:07:41.396 17:54:30 -- accel/accel.sh@20 -- # val= 00:07:41.396 17:54:30 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.396 17:54:30 -- accel/accel.sh@19 -- # IFS=: 00:07:41.396 17:54:30 -- accel/accel.sh@19 -- # read -r var val 00:07:42.794 17:54:31 -- accel/accel.sh@20 -- # val= 00:07:42.794 17:54:31 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.794 17:54:31 -- accel/accel.sh@19 -- # IFS=: 00:07:42.794 17:54:31 -- accel/accel.sh@19 -- # read -r var val 00:07:42.794 17:54:31 -- accel/accel.sh@20 -- # val= 00:07:42.794 17:54:31 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.794 17:54:31 -- accel/accel.sh@19 -- # IFS=: 00:07:42.794 17:54:31 -- accel/accel.sh@19 -- # read -r var val 00:07:42.794 17:54:31 -- accel/accel.sh@20 -- # val= 00:07:42.794 17:54:31 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.794 17:54:31 -- accel/accel.sh@19 -- # IFS=: 00:07:42.794 17:54:31 -- accel/accel.sh@19 -- # read -r var val 00:07:42.794 17:54:31 -- accel/accel.sh@20 -- # val= 00:07:42.794 17:54:31 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.794 17:54:31 -- accel/accel.sh@19 -- # IFS=: 00:07:42.794 17:54:31 -- accel/accel.sh@19 -- # read -r var val 00:07:42.794 17:54:31 -- accel/accel.sh@20 -- # val= 00:07:42.794 17:54:31 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.794 17:54:31 -- accel/accel.sh@19 -- # IFS=: 00:07:42.794 17:54:31 -- accel/accel.sh@19 -- # read -r var val 00:07:42.794 17:54:31 -- accel/accel.sh@20 -- # val= 00:07:42.794 17:54:31 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.794 17:54:31 -- accel/accel.sh@19 -- # IFS=: 00:07:42.794 17:54:31 -- accel/accel.sh@19 -- # read -r var val 00:07:42.794 17:54:31 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:42.794 17:54:31 -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:07:42.794 17:54:31 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:42.794 00:07:42.794 real 0m1.421s 00:07:42.795 user 0m1.273s 00:07:42.795 sys 0m0.149s 00:07:42.795 17:54:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:42.795 17:54:31 -- common/autotest_common.sh@10 -- # set +x 00:07:42.795 ************************************ 00:07:42.795 END TEST accel_dif_generate_copy 00:07:42.795 ************************************ 00:07:42.795 17:54:31 -- accel/accel.sh@115 -- # [[ y == y ]] 00:07:42.795 17:54:31 -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:42.795 
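Unlike the fixed '4096 bytes' buffers configured for the earlier workloads, the accel_comp test starting here feeds accel_perf a real input file via -l (the bib text corpus under test/accel/ in the SPDK tree). Standalone sketch, assuming the same tree layout:

    # compress the bib corpus repeatedly for 1 second
    ./build/examples/accel_perf -t 1 -w compress -l ./test/accel/bib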
17:54:31 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:07:42.795 17:54:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:42.795 17:54:31 -- common/autotest_common.sh@10 -- # set +x 00:07:42.795 ************************************ 00:07:42.795 START TEST accel_comp 00:07:42.795 ************************************ 00:07:42.795 17:54:31 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:42.795 17:54:31 -- accel/accel.sh@16 -- # local accel_opc 00:07:42.795 17:54:31 -- accel/accel.sh@17 -- # local accel_module 00:07:42.795 17:54:31 -- accel/accel.sh@19 -- # IFS=: 00:07:42.795 17:54:31 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:42.795 17:54:31 -- accel/accel.sh@19 -- # read -r var val 00:07:42.795 17:54:31 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:42.795 17:54:31 -- accel/accel.sh@12 -- # build_accel_config 00:07:42.795 17:54:31 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:42.795 17:54:31 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:42.795 17:54:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:42.795 17:54:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:42.795 17:54:31 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:42.795 17:54:31 -- accel/accel.sh@40 -- # local IFS=, 00:07:42.795 17:54:31 -- accel/accel.sh@41 -- # jq -r . 00:07:42.795 [2024-04-15 17:54:31.581328] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:07:42.795 [2024-04-15 17:54:31.581394] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3211930 ] 00:07:42.795 EAL: No free 2048 kB hugepages reported on node 1 00:07:42.795 [2024-04-15 17:54:31.649248] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.795 [2024-04-15 17:54:31.743678] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.795 [2024-04-15 17:54:31.744379] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:07:43.053 17:54:31 -- accel/accel.sh@20 -- # val= 00:07:43.053 17:54:31 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.053 17:54:31 -- accel/accel.sh@19 -- # IFS=: 00:07:43.053 17:54:31 -- accel/accel.sh@19 -- # read -r var val 00:07:43.053 17:54:31 -- accel/accel.sh@20 -- # val= 00:07:43.053 17:54:31 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.053 17:54:31 -- accel/accel.sh@19 -- # IFS=: 00:07:43.053 17:54:31 -- accel/accel.sh@19 -- # read -r var val 00:07:43.053 17:54:31 -- accel/accel.sh@20 -- # val= 00:07:43.053 17:54:31 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.054 17:54:31 -- accel/accel.sh@19 -- # IFS=: 00:07:43.054 17:54:31 -- accel/accel.sh@19 -- # read -r var val 00:07:43.054 17:54:31 -- accel/accel.sh@20 -- # val=0x1 00:07:43.054 17:54:31 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.054 17:54:31 -- accel/accel.sh@19 -- # IFS=: 00:07:43.054 17:54:31 -- accel/accel.sh@19 -- # read -r var val 00:07:43.054 17:54:31 -- accel/accel.sh@20 -- # val= 00:07:43.054 17:54:31 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.054 17:54:31 -- accel/accel.sh@19 -- # IFS=: 00:07:43.054 17:54:31 -- accel/accel.sh@19 -- # read 
-r var val 00:07:43.054 17:54:31 -- accel/accel.sh@20 -- # val= 00:07:43.054 17:54:31 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.054 17:54:31 -- accel/accel.sh@19 -- # IFS=: 00:07:43.054 17:54:31 -- accel/accel.sh@19 -- # read -r var val 00:07:43.054 17:54:31 -- accel/accel.sh@20 -- # val=compress 00:07:43.054 17:54:31 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.054 17:54:31 -- accel/accel.sh@23 -- # accel_opc=compress 00:07:43.054 17:54:31 -- accel/accel.sh@19 -- # IFS=: 00:07:43.054 17:54:31 -- accel/accel.sh@19 -- # read -r var val 00:07:43.054 17:54:31 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:43.054 17:54:31 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.054 17:54:31 -- accel/accel.sh@19 -- # IFS=: 00:07:43.054 17:54:31 -- accel/accel.sh@19 -- # read -r var val 00:07:43.054 17:54:31 -- accel/accel.sh@20 -- # val= 00:07:43.054 17:54:31 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.054 17:54:31 -- accel/accel.sh@19 -- # IFS=: 00:07:43.054 17:54:31 -- accel/accel.sh@19 -- # read -r var val 00:07:43.054 17:54:31 -- accel/accel.sh@20 -- # val=software 00:07:43.054 17:54:31 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.054 17:54:31 -- accel/accel.sh@22 -- # accel_module=software 00:07:43.054 17:54:31 -- accel/accel.sh@19 -- # IFS=: 00:07:43.054 17:54:31 -- accel/accel.sh@19 -- # read -r var val 00:07:43.054 17:54:31 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:43.054 17:54:31 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.054 17:54:31 -- accel/accel.sh@19 -- # IFS=: 00:07:43.054 17:54:31 -- accel/accel.sh@19 -- # read -r var val 00:07:43.054 17:54:31 -- accel/accel.sh@20 -- # val=32 00:07:43.054 17:54:31 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.054 17:54:31 -- accel/accel.sh@19 -- # IFS=: 00:07:43.054 17:54:31 -- accel/accel.sh@19 -- # read -r var val 00:07:43.054 17:54:31 -- accel/accel.sh@20 -- # val=32 00:07:43.054 17:54:31 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.054 17:54:31 -- accel/accel.sh@19 -- # IFS=: 00:07:43.054 17:54:31 -- accel/accel.sh@19 -- # read -r var val 00:07:43.054 17:54:31 -- accel/accel.sh@20 -- # val=1 00:07:43.054 17:54:31 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.054 17:54:31 -- accel/accel.sh@19 -- # IFS=: 00:07:43.054 17:54:31 -- accel/accel.sh@19 -- # read -r var val 00:07:43.054 17:54:31 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:43.054 17:54:31 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.054 17:54:31 -- accel/accel.sh@19 -- # IFS=: 00:07:43.054 17:54:31 -- accel/accel.sh@19 -- # read -r var val 00:07:43.054 17:54:31 -- accel/accel.sh@20 -- # val=No 00:07:43.054 17:54:31 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.054 17:54:31 -- accel/accel.sh@19 -- # IFS=: 00:07:43.054 17:54:31 -- accel/accel.sh@19 -- # read -r var val 00:07:43.054 17:54:31 -- accel/accel.sh@20 -- # val= 00:07:43.054 17:54:31 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.054 17:54:31 -- accel/accel.sh@19 -- # IFS=: 00:07:43.054 17:54:31 -- accel/accel.sh@19 -- # read -r var val 00:07:43.054 17:54:31 -- accel/accel.sh@20 -- # val= 00:07:43.054 17:54:31 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.054 17:54:31 -- accel/accel.sh@19 -- # IFS=: 00:07:43.054 17:54:31 -- accel/accel.sh@19 -- # read -r var val 00:07:44.428 17:54:32 -- accel/accel.sh@20 -- # val= 00:07:44.428 17:54:32 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.428 17:54:32 -- accel/accel.sh@19 -- # IFS=: 00:07:44.428 17:54:32 -- accel/accel.sh@19 -- # read -r var val 00:07:44.428 
17:54:32 -- accel/accel.sh@20 -- # val= 00:07:44.428 17:54:32 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.428 17:54:32 -- accel/accel.sh@19 -- # IFS=: 00:07:44.428 17:54:32 -- accel/accel.sh@19 -- # read -r var val 00:07:44.428 17:54:32 -- accel/accel.sh@20 -- # val= 00:07:44.428 17:54:32 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.428 17:54:32 -- accel/accel.sh@19 -- # IFS=: 00:07:44.428 17:54:32 -- accel/accel.sh@19 -- # read -r var val 00:07:44.428 17:54:32 -- accel/accel.sh@20 -- # val= 00:07:44.428 17:54:32 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.428 17:54:32 -- accel/accel.sh@19 -- # IFS=: 00:07:44.428 17:54:32 -- accel/accel.sh@19 -- # read -r var val 00:07:44.428 17:54:32 -- accel/accel.sh@20 -- # val= 00:07:44.428 17:54:32 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.428 17:54:32 -- accel/accel.sh@19 -- # IFS=: 00:07:44.428 17:54:32 -- accel/accel.sh@19 -- # read -r var val 00:07:44.428 17:54:32 -- accel/accel.sh@20 -- # val= 00:07:44.428 17:54:32 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.428 17:54:32 -- accel/accel.sh@19 -- # IFS=: 00:07:44.428 17:54:32 -- accel/accel.sh@19 -- # read -r var val 00:07:44.428 17:54:32 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:44.428 17:54:32 -- accel/accel.sh@27 -- # [[ -n compress ]] 00:07:44.428 17:54:32 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:44.428 00:07:44.428 real 0m1.425s 00:07:44.428 user 0m1.279s 00:07:44.428 sys 0m0.149s 00:07:44.428 17:54:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:44.428 17:54:32 -- common/autotest_common.sh@10 -- # set +x 00:07:44.428 ************************************ 00:07:44.428 END TEST accel_comp 00:07:44.428 ************************************ 00:07:44.428 17:54:33 -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:44.428 17:54:33 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:44.428 17:54:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:44.428 17:54:33 -- common/autotest_common.sh@10 -- # set +x 00:07:44.428 ************************************ 00:07:44.428 START TEST accel_decomp 00:07:44.428 ************************************ 00:07:44.428 17:54:33 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:44.428 17:54:33 -- accel/accel.sh@16 -- # local accel_opc 00:07:44.428 17:54:33 -- accel/accel.sh@17 -- # local accel_module 00:07:44.428 17:54:33 -- accel/accel.sh@19 -- # IFS=: 00:07:44.428 17:54:33 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:44.428 17:54:33 -- accel/accel.sh@19 -- # read -r var val 00:07:44.428 17:54:33 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:44.428 17:54:33 -- accel/accel.sh@12 -- # build_accel_config 00:07:44.428 17:54:33 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:44.428 17:54:33 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:44.428 17:54:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:44.428 17:54:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:44.428 17:54:33 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:44.428 17:54:33 -- accel/accel.sh@40 -- # local IFS=, 00:07:44.428 17:54:33 -- accel/accel.sh@41 -- # jq -r . 
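The accel_decomp run that begins below reuses the same corpus and adds -y, so each decompressed buffer is verified against the original input. A sketch under the same assumptions as the compress example:

    # decompress the corpus for 1 second and verify the output (-y)
    ./build/examples/accel_perf -t 1 -w decompress -l ./test/accel/bib -y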
00:07:44.428 [2024-04-15 17:54:33.134592] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:07:44.428 [2024-04-15 17:54:33.134657] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3212094 ] 00:07:44.428 EAL: No free 2048 kB hugepages reported on node 1 00:07:44.428 [2024-04-15 17:54:33.210980] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.428 [2024-04-15 17:54:33.304894] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.428 [2024-04-15 17:54:33.305596] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:07:44.428 17:54:33 -- accel/accel.sh@20 -- # val= 00:07:44.428 17:54:33 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.428 17:54:33 -- accel/accel.sh@19 -- # IFS=: 00:07:44.428 17:54:33 -- accel/accel.sh@19 -- # read -r var val 00:07:44.428 17:54:33 -- accel/accel.sh@20 -- # val= 00:07:44.428 17:54:33 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.428 17:54:33 -- accel/accel.sh@19 -- # IFS=: 00:07:44.428 17:54:33 -- accel/accel.sh@19 -- # read -r var val 00:07:44.428 17:54:33 -- accel/accel.sh@20 -- # val= 00:07:44.428 17:54:33 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.428 17:54:33 -- accel/accel.sh@19 -- # IFS=: 00:07:44.428 17:54:33 -- accel/accel.sh@19 -- # read -r var val 00:07:44.428 17:54:33 -- accel/accel.sh@20 -- # val=0x1 00:07:44.428 17:54:33 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.428 17:54:33 -- accel/accel.sh@19 -- # IFS=: 00:07:44.428 17:54:33 -- accel/accel.sh@19 -- # read -r var val 00:07:44.428 17:54:33 -- accel/accel.sh@20 -- # val= 00:07:44.428 17:54:33 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.428 17:54:33 -- accel/accel.sh@19 -- # IFS=: 00:07:44.428 17:54:33 -- accel/accel.sh@19 -- # read -r var val 00:07:44.428 17:54:33 -- accel/accel.sh@20 -- # val= 00:07:44.428 17:54:33 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.428 17:54:33 -- accel/accel.sh@19 -- # IFS=: 00:07:44.428 17:54:33 -- accel/accel.sh@19 -- # read -r var val 00:07:44.428 17:54:33 -- accel/accel.sh@20 -- # val=decompress 00:07:44.428 17:54:33 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.428 17:54:33 -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:44.428 17:54:33 -- accel/accel.sh@19 -- # IFS=: 00:07:44.428 17:54:33 -- accel/accel.sh@19 -- # read -r var val 00:07:44.428 17:54:33 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:44.428 17:54:33 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.428 17:54:33 -- accel/accel.sh@19 -- # IFS=: 00:07:44.428 17:54:33 -- accel/accel.sh@19 -- # read -r var val 00:07:44.428 17:54:33 -- accel/accel.sh@20 -- # val= 00:07:44.428 17:54:33 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.428 17:54:33 -- accel/accel.sh@19 -- # IFS=: 00:07:44.428 17:54:33 -- accel/accel.sh@19 -- # read -r var val 00:07:44.428 17:54:33 -- accel/accel.sh@20 -- # val=software 00:07:44.428 17:54:33 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.428 17:54:33 -- accel/accel.sh@22 -- # accel_module=software 00:07:44.428 17:54:33 -- accel/accel.sh@19 -- # IFS=: 00:07:44.428 17:54:33 -- accel/accel.sh@19 -- # read -r var val 00:07:44.428 17:54:33 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:44.428 17:54:33 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.428 17:54:33 -- accel/accel.sh@19 -- # IFS=: 00:07:44.428 17:54:33 -- 
accel/accel.sh@19 -- # read -r var val 00:07:44.428 17:54:33 -- accel/accel.sh@20 -- # val=32 00:07:44.428 17:54:33 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.428 17:54:33 -- accel/accel.sh@19 -- # IFS=: 00:07:44.428 17:54:33 -- accel/accel.sh@19 -- # read -r var val 00:07:44.428 17:54:33 -- accel/accel.sh@20 -- # val=32 00:07:44.428 17:54:33 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.428 17:54:33 -- accel/accel.sh@19 -- # IFS=: 00:07:44.428 17:54:33 -- accel/accel.sh@19 -- # read -r var val 00:07:44.428 17:54:33 -- accel/accel.sh@20 -- # val=1 00:07:44.428 17:54:33 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.428 17:54:33 -- accel/accel.sh@19 -- # IFS=: 00:07:44.428 17:54:33 -- accel/accel.sh@19 -- # read -r var val 00:07:44.428 17:54:33 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:44.428 17:54:33 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.428 17:54:33 -- accel/accel.sh@19 -- # IFS=: 00:07:44.428 17:54:33 -- accel/accel.sh@19 -- # read -r var val 00:07:44.428 17:54:33 -- accel/accel.sh@20 -- # val=Yes 00:07:44.428 17:54:33 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.428 17:54:33 -- accel/accel.sh@19 -- # IFS=: 00:07:44.428 17:54:33 -- accel/accel.sh@19 -- # read -r var val 00:07:44.428 17:54:33 -- accel/accel.sh@20 -- # val= 00:07:44.428 17:54:33 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.428 17:54:33 -- accel/accel.sh@19 -- # IFS=: 00:07:44.428 17:54:33 -- accel/accel.sh@19 -- # read -r var val 00:07:44.428 17:54:33 -- accel/accel.sh@20 -- # val= 00:07:44.428 17:54:33 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.428 17:54:33 -- accel/accel.sh@19 -- # IFS=: 00:07:44.428 17:54:33 -- accel/accel.sh@19 -- # read -r var val 00:07:45.802 17:54:34 -- accel/accel.sh@20 -- # val= 00:07:45.802 17:54:34 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.802 17:54:34 -- accel/accel.sh@19 -- # IFS=: 00:07:45.802 17:54:34 -- accel/accel.sh@19 -- # read -r var val 00:07:45.802 17:54:34 -- accel/accel.sh@20 -- # val= 00:07:45.802 17:54:34 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.802 17:54:34 -- accel/accel.sh@19 -- # IFS=: 00:07:45.802 17:54:34 -- accel/accel.sh@19 -- # read -r var val 00:07:45.802 17:54:34 -- accel/accel.sh@20 -- # val= 00:07:45.802 17:54:34 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.802 17:54:34 -- accel/accel.sh@19 -- # IFS=: 00:07:45.802 17:54:34 -- accel/accel.sh@19 -- # read -r var val 00:07:45.802 17:54:34 -- accel/accel.sh@20 -- # val= 00:07:45.802 17:54:34 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.802 17:54:34 -- accel/accel.sh@19 -- # IFS=: 00:07:45.802 17:54:34 -- accel/accel.sh@19 -- # read -r var val 00:07:45.802 17:54:34 -- accel/accel.sh@20 -- # val= 00:07:45.802 17:54:34 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.802 17:54:34 -- accel/accel.sh@19 -- # IFS=: 00:07:45.802 17:54:34 -- accel/accel.sh@19 -- # read -r var val 00:07:45.802 17:54:34 -- accel/accel.sh@20 -- # val= 00:07:45.802 17:54:34 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.802 17:54:34 -- accel/accel.sh@19 -- # IFS=: 00:07:45.802 17:54:34 -- accel/accel.sh@19 -- # read -r var val 00:07:45.802 17:54:34 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:45.802 17:54:34 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:45.802 17:54:34 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:45.802 00:07:45.802 real 0m1.434s 00:07:45.802 user 0m1.272s 00:07:45.802 sys 0m0.165s 00:07:45.802 17:54:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:45.802 17:54:34 -- common/autotest_common.sh@10 -- # set +x 
00:07:45.802 ************************************ 00:07:45.802 END TEST accel_decomp 00:07:45.802 ************************************ 00:07:45.802 17:54:34 -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:45.802 17:54:34 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:45.802 17:54:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:45.802 17:54:34 -- common/autotest_common.sh@10 -- # set +x 00:07:45.802 ************************************ 00:07:45.802 START TEST accel_decmop_full 00:07:45.802 ************************************ 00:07:45.802 17:54:34 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:45.802 17:54:34 -- accel/accel.sh@16 -- # local accel_opc 00:07:45.802 17:54:34 -- accel/accel.sh@17 -- # local accel_module 00:07:45.802 17:54:34 -- accel/accel.sh@19 -- # IFS=: 00:07:45.802 17:54:34 -- accel/accel.sh@19 -- # read -r var val 00:07:45.802 17:54:34 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:45.802 17:54:34 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:45.802 17:54:34 -- accel/accel.sh@12 -- # build_accel_config 00:07:45.802 17:54:34 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:45.802 17:54:34 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:45.802 17:54:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:45.802 17:54:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:45.802 17:54:34 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:45.802 17:54:34 -- accel/accel.sh@40 -- # local IFS=, 00:07:45.802 17:54:34 -- accel/accel.sh@41 -- # jq -r . 00:07:45.802 [2024-04-15 17:54:34.704757] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
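accel_decmop_full repeats the decompress test with `-o 0` appended (see the run_test line above). The traced values make the effect visible: default runs record val='4096 bytes', while every -o 0 run in this log records val='111250 bytes', i.e. the whole bib input handled as one full-buffer operation. A hedged sketch of the two invocations (paths abbreviated; full paths as in the trace):

    # default: 4 KiB transfers            (trace: val='4096 bytes')
    accel_perf -t 1 -w decompress -l ./test/accel/bib -y
    # -o 0: one full-file-sized transfer  (trace: val='111250 bytes')
    accel_perf -t 1 -w decompress -l ./test/accel/bib -y -o 0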
00:07:45.802 [2024-04-15 17:54:34.704855] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3212255 ] 00:07:45.802 EAL: No free 2048 kB hugepages reported on node 1 00:07:46.060 [2024-04-15 17:54:34.777954] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.060 [2024-04-15 17:54:34.871839] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.060 [2024-04-15 17:54:34.872538] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:07:46.060 17:54:34 -- accel/accel.sh@20 -- # val= 00:07:46.060 17:54:34 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.060 17:54:34 -- accel/accel.sh@19 -- # IFS=: 00:07:46.060 17:54:34 -- accel/accel.sh@19 -- # read -r var val 00:07:46.060 17:54:34 -- accel/accel.sh@20 -- # val= 00:07:46.060 17:54:34 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.060 17:54:34 -- accel/accel.sh@19 -- # IFS=: 00:07:46.060 17:54:34 -- accel/accel.sh@19 -- # read -r var val 00:07:46.060 17:54:34 -- accel/accel.sh@20 -- # val= 00:07:46.060 17:54:34 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.060 17:54:34 -- accel/accel.sh@19 -- # IFS=: 00:07:46.060 17:54:34 -- accel/accel.sh@19 -- # read -r var val 00:07:46.060 17:54:34 -- accel/accel.sh@20 -- # val=0x1 00:07:46.060 17:54:34 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.060 17:54:34 -- accel/accel.sh@19 -- # IFS=: 00:07:46.060 17:54:34 -- accel/accel.sh@19 -- # read -r var val 00:07:46.060 17:54:34 -- accel/accel.sh@20 -- # val= 00:07:46.060 17:54:34 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.060 17:54:34 -- accel/accel.sh@19 -- # IFS=: 00:07:46.060 17:54:34 -- accel/accel.sh@19 -- # read -r var val 00:07:46.060 17:54:34 -- accel/accel.sh@20 -- # val= 00:07:46.060 17:54:34 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.060 17:54:34 -- accel/accel.sh@19 -- # IFS=: 00:07:46.060 17:54:34 -- accel/accel.sh@19 -- # read -r var val 00:07:46.060 17:54:34 -- accel/accel.sh@20 -- # val=decompress 00:07:46.060 17:54:34 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.060 17:54:34 -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:46.060 17:54:34 -- accel/accel.sh@19 -- # IFS=: 00:07:46.060 17:54:34 -- accel/accel.sh@19 -- # read -r var val 00:07:46.060 17:54:34 -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:46.060 17:54:34 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.060 17:54:34 -- accel/accel.sh@19 -- # IFS=: 00:07:46.060 17:54:34 -- accel/accel.sh@19 -- # read -r var val 00:07:46.060 17:54:34 -- accel/accel.sh@20 -- # val= 00:07:46.060 17:54:34 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.060 17:54:34 -- accel/accel.sh@19 -- # IFS=: 00:07:46.060 17:54:34 -- accel/accel.sh@19 -- # read -r var val 00:07:46.060 17:54:34 -- accel/accel.sh@20 -- # val=software 00:07:46.060 17:54:34 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.060 17:54:34 -- accel/accel.sh@22 -- # accel_module=software 00:07:46.060 17:54:34 -- accel/accel.sh@19 -- # IFS=: 00:07:46.060 17:54:34 -- accel/accel.sh@19 -- # read -r var val 00:07:46.060 17:54:34 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:46.060 17:54:34 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.060 17:54:34 -- accel/accel.sh@19 -- # IFS=: 00:07:46.060 17:54:34 -- accel/accel.sh@19 -- # read -r var val 00:07:46.060 17:54:34 -- accel/accel.sh@20 -- # val=32 00:07:46.060 17:54:34 
-- accel/accel.sh@21 -- # case "$var" in 00:07:46.060 17:54:34 -- accel/accel.sh@19 -- # IFS=: 00:07:46.060 17:54:34 -- accel/accel.sh@19 -- # read -r var val 00:07:46.060 17:54:34 -- accel/accel.sh@20 -- # val=32 00:07:46.060 17:54:34 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.060 17:54:34 -- accel/accel.sh@19 -- # IFS=: 00:07:46.060 17:54:34 -- accel/accel.sh@19 -- # read -r var val 00:07:46.060 17:54:34 -- accel/accel.sh@20 -- # val=1 00:07:46.060 17:54:34 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.060 17:54:34 -- accel/accel.sh@19 -- # IFS=: 00:07:46.060 17:54:34 -- accel/accel.sh@19 -- # read -r var val 00:07:46.060 17:54:34 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:46.060 17:54:34 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.060 17:54:34 -- accel/accel.sh@19 -- # IFS=: 00:07:46.060 17:54:34 -- accel/accel.sh@19 -- # read -r var val 00:07:46.060 17:54:34 -- accel/accel.sh@20 -- # val=Yes 00:07:46.060 17:54:34 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.060 17:54:34 -- accel/accel.sh@19 -- # IFS=: 00:07:46.060 17:54:34 -- accel/accel.sh@19 -- # read -r var val 00:07:46.060 17:54:34 -- accel/accel.sh@20 -- # val= 00:07:46.060 17:54:34 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.060 17:54:34 -- accel/accel.sh@19 -- # IFS=: 00:07:46.060 17:54:34 -- accel/accel.sh@19 -- # read -r var val 00:07:46.060 17:54:34 -- accel/accel.sh@20 -- # val= 00:07:46.060 17:54:34 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.060 17:54:34 -- accel/accel.sh@19 -- # IFS=: 00:07:46.060 17:54:34 -- accel/accel.sh@19 -- # read -r var val 00:07:47.433 17:54:36 -- accel/accel.sh@20 -- # val= 00:07:47.433 17:54:36 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.433 17:54:36 -- accel/accel.sh@19 -- # IFS=: 00:07:47.433 17:54:36 -- accel/accel.sh@19 -- # read -r var val 00:07:47.433 17:54:36 -- accel/accel.sh@20 -- # val= 00:07:47.433 17:54:36 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.433 17:54:36 -- accel/accel.sh@19 -- # IFS=: 00:07:47.433 17:54:36 -- accel/accel.sh@19 -- # read -r var val 00:07:47.433 17:54:36 -- accel/accel.sh@20 -- # val= 00:07:47.433 17:54:36 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.433 17:54:36 -- accel/accel.sh@19 -- # IFS=: 00:07:47.433 17:54:36 -- accel/accel.sh@19 -- # read -r var val 00:07:47.433 17:54:36 -- accel/accel.sh@20 -- # val= 00:07:47.433 17:54:36 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.433 17:54:36 -- accel/accel.sh@19 -- # IFS=: 00:07:47.433 17:54:36 -- accel/accel.sh@19 -- # read -r var val 00:07:47.433 17:54:36 -- accel/accel.sh@20 -- # val= 00:07:47.433 17:54:36 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.433 17:54:36 -- accel/accel.sh@19 -- # IFS=: 00:07:47.433 17:54:36 -- accel/accel.sh@19 -- # read -r var val 00:07:47.433 17:54:36 -- accel/accel.sh@20 -- # val= 00:07:47.433 17:54:36 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.433 17:54:36 -- accel/accel.sh@19 -- # IFS=: 00:07:47.433 17:54:36 -- accel/accel.sh@19 -- # read -r var val 00:07:47.433 17:54:36 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:47.433 17:54:36 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:47.433 17:54:36 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:47.433 00:07:47.433 real 0m1.444s 00:07:47.433 user 0m1.284s 00:07:47.433 sys 0m0.163s 00:07:47.433 17:54:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:47.433 17:54:36 -- common/autotest_common.sh@10 -- # set +x 00:07:47.433 ************************************ 00:07:47.433 END TEST accel_decmop_full 00:07:47.433 
************************************ 00:07:47.433 17:54:36 -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:47.433 17:54:36 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:47.433 17:54:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:47.433 17:54:36 -- common/autotest_common.sh@10 -- # set +x 00:07:47.433 ************************************ 00:07:47.433 START TEST accel_decomp_mcore 00:07:47.433 ************************************ 00:07:47.433 17:54:36 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:47.433 17:54:36 -- accel/accel.sh@16 -- # local accel_opc 00:07:47.433 17:54:36 -- accel/accel.sh@17 -- # local accel_module 00:07:47.433 17:54:36 -- accel/accel.sh@19 -- # IFS=: 00:07:47.433 17:54:36 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:47.433 17:54:36 -- accel/accel.sh@19 -- # read -r var val 00:07:47.433 17:54:36 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:47.433 17:54:36 -- accel/accel.sh@12 -- # build_accel_config 00:07:47.433 17:54:36 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:47.433 17:54:36 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:47.433 17:54:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:47.433 17:54:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:47.433 17:54:36 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:47.433 17:54:36 -- accel/accel.sh@40 -- # local IFS=, 00:07:47.433 17:54:36 -- accel/accel.sh@41 -- # jq -r . 00:07:47.433 [2024-04-15 17:54:36.269653] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
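accel_decomp_mcore adds `-m 0xf` to the same invocation, and the log below shows the effect directly: the app reports 'Total cores available: 4' and starts a reactor on each of cores 0-3, where the -c 0x1 runs above reported a single core and one reactor. A sketch, reading 0xf as a CPU core bitmask (an inference from those reactor messages):

    # 0xf = 0b1111 -> reactors on cores 0, 1, 2 and 3
    accel_perf -t 1 -w decompress -l ./test/accel/bib -y -m 0xf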
00:07:47.433 [2024-04-15 17:54:36.269718] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3212538 ] 00:07:47.433 EAL: No free 2048 kB hugepages reported on node 1 00:07:47.433 [2024-04-15 17:54:36.337775] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:47.692 [2024-04-15 17:54:36.435415] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:47.692 [2024-04-15 17:54:36.435468] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:47.692 [2024-04-15 17:54:36.435517] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:47.692 [2024-04-15 17:54:36.435521] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.692 [2024-04-15 17:54:36.436345] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:07:47.692 17:54:36 -- accel/accel.sh@20 -- # val= 00:07:47.692 17:54:36 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.692 17:54:36 -- accel/accel.sh@19 -- # IFS=: 00:07:47.692 17:54:36 -- accel/accel.sh@19 -- # read -r var val 00:07:47.692 17:54:36 -- accel/accel.sh@20 -- # val= 00:07:47.692 17:54:36 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.692 17:54:36 -- accel/accel.sh@19 -- # IFS=: 00:07:47.692 17:54:36 -- accel/accel.sh@19 -- # read -r var val 00:07:47.692 17:54:36 -- accel/accel.sh@20 -- # val= 00:07:47.692 17:54:36 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.692 17:54:36 -- accel/accel.sh@19 -- # IFS=: 00:07:47.692 17:54:36 -- accel/accel.sh@19 -- # read -r var val 00:07:47.692 17:54:36 -- accel/accel.sh@20 -- # val=0xf 00:07:47.692 17:54:36 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.692 17:54:36 -- accel/accel.sh@19 -- # IFS=: 00:07:47.692 17:54:36 -- accel/accel.sh@19 -- # read -r var val 00:07:47.692 17:54:36 -- accel/accel.sh@20 -- # val= 00:07:47.692 17:54:36 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.692 17:54:36 -- accel/accel.sh@19 -- # IFS=: 00:07:47.692 17:54:36 -- accel/accel.sh@19 -- # read -r var val 00:07:47.692 17:54:36 -- accel/accel.sh@20 -- # val= 00:07:47.692 17:54:36 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.692 17:54:36 -- accel/accel.sh@19 -- # IFS=: 00:07:47.692 17:54:36 -- accel/accel.sh@19 -- # read -r var val 00:07:47.692 17:54:36 -- accel/accel.sh@20 -- # val=decompress 00:07:47.692 17:54:36 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.692 17:54:36 -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:47.692 17:54:36 -- accel/accel.sh@19 -- # IFS=: 00:07:47.692 17:54:36 -- accel/accel.sh@19 -- # read -r var val 00:07:47.692 17:54:36 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:47.692 17:54:36 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.692 17:54:36 -- accel/accel.sh@19 -- # IFS=: 00:07:47.692 17:54:36 -- accel/accel.sh@19 -- # read -r var val 00:07:47.692 17:54:36 -- accel/accel.sh@20 -- # val= 00:07:47.692 17:54:36 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.692 17:54:36 -- accel/accel.sh@19 -- # IFS=: 00:07:47.692 17:54:36 -- accel/accel.sh@19 -- # read -r var val 00:07:47.692 17:54:36 -- accel/accel.sh@20 -- # val=software 00:07:47.692 17:54:36 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.692 17:54:36 -- accel/accel.sh@22 -- # accel_module=software 00:07:47.692 17:54:36 -- accel/accel.sh@19 -- # IFS=: 00:07:47.692 17:54:36 -- accel/accel.sh@19 -- # read -r var val 00:07:47.692 17:54:36 -- accel/accel.sh@20 -- # 
val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:47.692 17:54:36 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.692 17:54:36 -- accel/accel.sh@19 -- # IFS=: 00:07:47.692 17:54:36 -- accel/accel.sh@19 -- # read -r var val 00:07:47.692 17:54:36 -- accel/accel.sh@20 -- # val=32 00:07:47.692 17:54:36 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.692 17:54:36 -- accel/accel.sh@19 -- # IFS=: 00:07:47.692 17:54:36 -- accel/accel.sh@19 -- # read -r var val 00:07:47.692 17:54:36 -- accel/accel.sh@20 -- # val=32 00:07:47.692 17:54:36 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.692 17:54:36 -- accel/accel.sh@19 -- # IFS=: 00:07:47.692 17:54:36 -- accel/accel.sh@19 -- # read -r var val 00:07:47.692 17:54:36 -- accel/accel.sh@20 -- # val=1 00:07:47.692 17:54:36 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.692 17:54:36 -- accel/accel.sh@19 -- # IFS=: 00:07:47.692 17:54:36 -- accel/accel.sh@19 -- # read -r var val 00:07:47.692 17:54:36 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:47.692 17:54:36 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.692 17:54:36 -- accel/accel.sh@19 -- # IFS=: 00:07:47.692 17:54:36 -- accel/accel.sh@19 -- # read -r var val 00:07:47.692 17:54:36 -- accel/accel.sh@20 -- # val=Yes 00:07:47.692 17:54:36 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.692 17:54:36 -- accel/accel.sh@19 -- # IFS=: 00:07:47.692 17:54:36 -- accel/accel.sh@19 -- # read -r var val 00:07:47.692 17:54:36 -- accel/accel.sh@20 -- # val= 00:07:47.692 17:54:36 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.692 17:54:36 -- accel/accel.sh@19 -- # IFS=: 00:07:47.692 17:54:36 -- accel/accel.sh@19 -- # read -r var val 00:07:47.692 17:54:36 -- accel/accel.sh@20 -- # val= 00:07:47.692 17:54:36 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.692 17:54:36 -- accel/accel.sh@19 -- # IFS=: 00:07:47.692 17:54:36 -- accel/accel.sh@19 -- # read -r var val 00:07:49.065 17:54:37 -- accel/accel.sh@20 -- # val= 00:07:49.065 17:54:37 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.065 17:54:37 -- accel/accel.sh@19 -- # IFS=: 00:07:49.065 17:54:37 -- accel/accel.sh@19 -- # read -r var val 00:07:49.065 17:54:37 -- accel/accel.sh@20 -- # val= 00:07:49.065 17:54:37 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.065 17:54:37 -- accel/accel.sh@19 -- # IFS=: 00:07:49.065 17:54:37 -- accel/accel.sh@19 -- # read -r var val 00:07:49.065 17:54:37 -- accel/accel.sh@20 -- # val= 00:07:49.065 17:54:37 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.065 17:54:37 -- accel/accel.sh@19 -- # IFS=: 00:07:49.065 17:54:37 -- accel/accel.sh@19 -- # read -r var val 00:07:49.065 17:54:37 -- accel/accel.sh@20 -- # val= 00:07:49.065 17:54:37 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.065 17:54:37 -- accel/accel.sh@19 -- # IFS=: 00:07:49.066 17:54:37 -- accel/accel.sh@19 -- # read -r var val 00:07:49.066 17:54:37 -- accel/accel.sh@20 -- # val= 00:07:49.066 17:54:37 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.066 17:54:37 -- accel/accel.sh@19 -- # IFS=: 00:07:49.066 17:54:37 -- accel/accel.sh@19 -- # read -r var val 00:07:49.066 17:54:37 -- accel/accel.sh@20 -- # val= 00:07:49.066 17:54:37 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.066 17:54:37 -- accel/accel.sh@19 -- # IFS=: 00:07:49.066 17:54:37 -- accel/accel.sh@19 -- # read -r var val 00:07:49.066 17:54:37 -- accel/accel.sh@20 -- # val= 00:07:49.066 17:54:37 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.066 17:54:37 -- accel/accel.sh@19 -- # IFS=: 00:07:49.066 17:54:37 -- accel/accel.sh@19 -- # read -r var val 
00:07:49.066 17:54:37 -- accel/accel.sh@20 -- # val= 00:07:49.066 17:54:37 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.066 17:54:37 -- accel/accel.sh@19 -- # IFS=: 00:07:49.066 17:54:37 -- accel/accel.sh@19 -- # read -r var val 00:07:49.066 17:54:37 -- accel/accel.sh@20 -- # val= 00:07:49.066 17:54:37 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.066 17:54:37 -- accel/accel.sh@19 -- # IFS=: 00:07:49.066 17:54:37 -- accel/accel.sh@19 -- # read -r var val 00:07:49.066 17:54:37 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:49.066 17:54:37 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:49.066 17:54:37 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:49.066 00:07:49.066 real 0m1.428s 00:07:49.066 user 0m4.731s 00:07:49.066 sys 0m0.163s 00:07:49.066 17:54:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:49.066 17:54:37 -- common/autotest_common.sh@10 -- # set +x 00:07:49.066 ************************************ 00:07:49.066 END TEST accel_decomp_mcore 00:07:49.066 ************************************ 00:07:49.066 17:54:37 -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:49.066 17:54:37 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:49.066 17:54:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:49.066 17:54:37 -- common/autotest_common.sh@10 -- # set +x 00:07:49.066 ************************************ 00:07:49.066 START TEST accel_decomp_full_mcore 00:07:49.066 ************************************ 00:07:49.066 17:54:37 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:49.066 17:54:37 -- accel/accel.sh@16 -- # local accel_opc 00:07:49.066 17:54:37 -- accel/accel.sh@17 -- # local accel_module 00:07:49.066 17:54:37 -- accel/accel.sh@19 -- # IFS=: 00:07:49.066 17:54:37 -- accel/accel.sh@19 -- # read -r var val 00:07:49.066 17:54:37 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:49.066 17:54:37 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:49.066 17:54:37 -- accel/accel.sh@12 -- # build_accel_config 00:07:49.066 17:54:37 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:49.066 17:54:37 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:49.066 17:54:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:49.066 17:54:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:49.066 17:54:37 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:49.066 17:54:37 -- accel/accel.sh@40 -- # local IFS=, 00:07:49.066 17:54:37 -- accel/accel.sh@41 -- # jq -r . 00:07:49.066 [2024-04-15 17:54:37.837702] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
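The summary just printed is worth a note: the multicore run reports user 0m4.731s against real 0m1.428s -- roughly four cores' worth of CPU time in the same wall-clock window as the single-core runs, consistent with four reactors each driving the 1-second workload in parallel. The next test, accel_decomp_full_mcore, combines the two variations seen so far; a sketch (paths abbreviated):

    # full-buffer transfers (-o 0, trace: val='111250 bytes') across four cores (-m 0xf)
    accel_perf -t 1 -w decompress -l ./test/accel/bib -y -o 0 -m 0xf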
00:07:49.066 [2024-04-15 17:54:37.837769] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3212703 ] 00:07:49.066 EAL: No free 2048 kB hugepages reported on node 1 00:07:49.066 [2024-04-15 17:54:37.905773] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:49.066 [2024-04-15 17:54:38.002934] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:49.066 [2024-04-15 17:54:38.002987] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:49.066 [2024-04-15 17:54:38.003041] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:49.066 [2024-04-15 17:54:38.003044] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.066 [2024-04-15 17:54:38.003829] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:07:49.325 17:54:38 -- accel/accel.sh@20 -- # val= 00:07:49.325 17:54:38 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.325 17:54:38 -- accel/accel.sh@19 -- # IFS=: 00:07:49.325 17:54:38 -- accel/accel.sh@19 -- # read -r var val 00:07:49.325 17:54:38 -- accel/accel.sh@20 -- # val= 00:07:49.325 17:54:38 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.325 17:54:38 -- accel/accel.sh@19 -- # IFS=: 00:07:49.325 17:54:38 -- accel/accel.sh@19 -- # read -r var val 00:07:49.325 17:54:38 -- accel/accel.sh@20 -- # val= 00:07:49.325 17:54:38 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.325 17:54:38 -- accel/accel.sh@19 -- # IFS=: 00:07:49.325 17:54:38 -- accel/accel.sh@19 -- # read -r var val 00:07:49.325 17:54:38 -- accel/accel.sh@20 -- # val=0xf 00:07:49.325 17:54:38 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.325 17:54:38 -- accel/accel.sh@19 -- # IFS=: 00:07:49.325 17:54:38 -- accel/accel.sh@19 -- # read -r var val 00:07:49.325 17:54:38 -- accel/accel.sh@20 -- # val= 00:07:49.325 17:54:38 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.325 17:54:38 -- accel/accel.sh@19 -- # IFS=: 00:07:49.325 17:54:38 -- accel/accel.sh@19 -- # read -r var val 00:07:49.325 17:54:38 -- accel/accel.sh@20 -- # val= 00:07:49.325 17:54:38 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.325 17:54:38 -- accel/accel.sh@19 -- # IFS=: 00:07:49.325 17:54:38 -- accel/accel.sh@19 -- # read -r var val 00:07:49.325 17:54:38 -- accel/accel.sh@20 -- # val=decompress 00:07:49.325 17:54:38 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.325 17:54:38 -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:49.325 17:54:38 -- accel/accel.sh@19 -- # IFS=: 00:07:49.325 17:54:38 -- accel/accel.sh@19 -- # read -r var val 00:07:49.325 17:54:38 -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:49.325 17:54:38 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.325 17:54:38 -- accel/accel.sh@19 -- # IFS=: 00:07:49.325 17:54:38 -- accel/accel.sh@19 -- # read -r var val 00:07:49.325 17:54:38 -- accel/accel.sh@20 -- # val= 00:07:49.325 17:54:38 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.325 17:54:38 -- accel/accel.sh@19 -- # IFS=: 00:07:49.325 17:54:38 -- accel/accel.sh@19 -- # read -r var val 00:07:49.325 17:54:38 -- accel/accel.sh@20 -- # val=software 00:07:49.325 17:54:38 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.325 17:54:38 -- accel/accel.sh@22 -- # accel_module=software 00:07:49.325 17:54:38 -- accel/accel.sh@19 -- # IFS=: 00:07:49.325 17:54:38 -- accel/accel.sh@19 -- # read -r var val 00:07:49.325 17:54:38 -- accel/accel.sh@20 -- # 
val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:49.325 17:54:38 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.325 17:54:38 -- accel/accel.sh@19 -- # IFS=: 00:07:49.325 17:54:38 -- accel/accel.sh@19 -- # read -r var val 00:07:49.325 17:54:38 -- accel/accel.sh@20 -- # val=32 00:07:49.325 17:54:38 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.325 17:54:38 -- accel/accel.sh@19 -- # IFS=: 00:07:49.325 17:54:38 -- accel/accel.sh@19 -- # read -r var val 00:07:49.325 17:54:38 -- accel/accel.sh@20 -- # val=32 00:07:49.325 17:54:38 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.325 17:54:38 -- accel/accel.sh@19 -- # IFS=: 00:07:49.325 17:54:38 -- accel/accel.sh@19 -- # read -r var val 00:07:49.325 17:54:38 -- accel/accel.sh@20 -- # val=1 00:07:49.325 17:54:38 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.325 17:54:38 -- accel/accel.sh@19 -- # IFS=: 00:07:49.325 17:54:38 -- accel/accel.sh@19 -- # read -r var val 00:07:49.325 17:54:38 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:49.325 17:54:38 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.325 17:54:38 -- accel/accel.sh@19 -- # IFS=: 00:07:49.325 17:54:38 -- accel/accel.sh@19 -- # read -r var val 00:07:49.325 17:54:38 -- accel/accel.sh@20 -- # val=Yes 00:07:49.325 17:54:38 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.325 17:54:38 -- accel/accel.sh@19 -- # IFS=: 00:07:49.325 17:54:38 -- accel/accel.sh@19 -- # read -r var val 00:07:49.325 17:54:38 -- accel/accel.sh@20 -- # val= 00:07:49.325 17:54:38 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.325 17:54:38 -- accel/accel.sh@19 -- # IFS=: 00:07:49.325 17:54:38 -- accel/accel.sh@19 -- # read -r var val 00:07:49.325 17:54:38 -- accel/accel.sh@20 -- # val= 00:07:49.325 17:54:38 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.325 17:54:38 -- accel/accel.sh@19 -- # IFS=: 00:07:49.325 17:54:38 -- accel/accel.sh@19 -- # read -r var val 00:07:50.698 17:54:39 -- accel/accel.sh@20 -- # val= 00:07:50.698 17:54:39 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.698 17:54:39 -- accel/accel.sh@19 -- # IFS=: 00:07:50.698 17:54:39 -- accel/accel.sh@19 -- # read -r var val 00:07:50.698 17:54:39 -- accel/accel.sh@20 -- # val= 00:07:50.698 17:54:39 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.698 17:54:39 -- accel/accel.sh@19 -- # IFS=: 00:07:50.698 17:54:39 -- accel/accel.sh@19 -- # read -r var val 00:07:50.698 17:54:39 -- accel/accel.sh@20 -- # val= 00:07:50.698 17:54:39 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.698 17:54:39 -- accel/accel.sh@19 -- # IFS=: 00:07:50.698 17:54:39 -- accel/accel.sh@19 -- # read -r var val 00:07:50.698 17:54:39 -- accel/accel.sh@20 -- # val= 00:07:50.698 17:54:39 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.698 17:54:39 -- accel/accel.sh@19 -- # IFS=: 00:07:50.698 17:54:39 -- accel/accel.sh@19 -- # read -r var val 00:07:50.698 17:54:39 -- accel/accel.sh@20 -- # val= 00:07:50.699 17:54:39 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.699 17:54:39 -- accel/accel.sh@19 -- # IFS=: 00:07:50.699 17:54:39 -- accel/accel.sh@19 -- # read -r var val 00:07:50.699 17:54:39 -- accel/accel.sh@20 -- # val= 00:07:50.699 17:54:39 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.699 17:54:39 -- accel/accel.sh@19 -- # IFS=: 00:07:50.699 17:54:39 -- accel/accel.sh@19 -- # read -r var val 00:07:50.699 17:54:39 -- accel/accel.sh@20 -- # val= 00:07:50.699 17:54:39 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.699 17:54:39 -- accel/accel.sh@19 -- # IFS=: 00:07:50.699 17:54:39 -- accel/accel.sh@19 -- # read -r var val 
00:07:50.699 17:54:39 -- accel/accel.sh@20 -- # val= 00:07:50.699 17:54:39 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.699 17:54:39 -- accel/accel.sh@19 -- # IFS=: 00:07:50.699 17:54:39 -- accel/accel.sh@19 -- # read -r var val 00:07:50.699 17:54:39 -- accel/accel.sh@20 -- # val= 00:07:50.699 17:54:39 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.699 17:54:39 -- accel/accel.sh@19 -- # IFS=: 00:07:50.699 17:54:39 -- accel/accel.sh@19 -- # read -r var val 00:07:50.699 17:54:39 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:50.699 17:54:39 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:50.699 17:54:39 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:50.699 00:07:50.699 real 0m1.440s 00:07:50.699 user 0m4.779s 00:07:50.699 sys 0m0.159s 00:07:50.699 17:54:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:50.699 17:54:39 -- common/autotest_common.sh@10 -- # set +x 00:07:50.699 ************************************ 00:07:50.699 END TEST accel_decomp_full_mcore 00:07:50.699 ************************************ 00:07:50.699 17:54:39 -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:50.699 17:54:39 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:50.699 17:54:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:50.699 17:54:39 -- common/autotest_common.sh@10 -- # set +x 00:07:50.699 ************************************ 00:07:50.699 START TEST accel_decomp_mthread 00:07:50.699 ************************************ 00:07:50.699 17:54:39 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:50.699 17:54:39 -- accel/accel.sh@16 -- # local accel_opc 00:07:50.699 17:54:39 -- accel/accel.sh@17 -- # local accel_module 00:07:50.699 17:54:39 -- accel/accel.sh@19 -- # IFS=: 00:07:50.699 17:54:39 -- accel/accel.sh@19 -- # read -r var val 00:07:50.699 17:54:39 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:50.699 17:54:39 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:50.699 17:54:39 -- accel/accel.sh@12 -- # build_accel_config 00:07:50.699 17:54:39 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:50.699 17:54:39 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:50.699 17:54:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:50.699 17:54:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:50.699 17:54:39 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:50.699 17:54:39 -- accel/accel.sh@40 -- # local IFS=, 00:07:50.699 17:54:39 -- accel/accel.sh@41 -- # jq -r . 00:07:50.699 [2024-04-15 17:54:39.409123] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
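accel_decomp_mthread swaps the core-mask variation for `-T 2`. The trace below records val=2 where earlier runs recorded val=1, yet the run stays on one core (-c 0x1, a single reactor), so -T plausibly sets the number of worker threads/channels per core rather than the core count -- an inference from the traced values, not from documented accel_perf semantics:

    # one core, two parallel workers (trace: val=2 instead of val=1)
    accel_perf -t 1 -w decompress -l ./test/accel/bib -y -T 2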
00:07:50.699 [2024-04-15 17:54:39.409186] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3212954 ] 00:07:50.699 EAL: No free 2048 kB hugepages reported on node 1 00:07:50.699 [2024-04-15 17:54:39.476335] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.699 [2024-04-15 17:54:39.570696] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.699 [2024-04-15 17:54:39.571394] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:07:50.699 17:54:39 -- accel/accel.sh@20 -- # val= 00:07:50.699 17:54:39 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.699 17:54:39 -- accel/accel.sh@19 -- # IFS=: 00:07:50.699 17:54:39 -- accel/accel.sh@19 -- # read -r var val 00:07:50.699 17:54:39 -- accel/accel.sh@20 -- # val= 00:07:50.699 17:54:39 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.699 17:54:39 -- accel/accel.sh@19 -- # IFS=: 00:07:50.699 17:54:39 -- accel/accel.sh@19 -- # read -r var val 00:07:50.699 17:54:39 -- accel/accel.sh@20 -- # val= 00:07:50.699 17:54:39 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.699 17:54:39 -- accel/accel.sh@19 -- # IFS=: 00:07:50.699 17:54:39 -- accel/accel.sh@19 -- # read -r var val 00:07:50.699 17:54:39 -- accel/accel.sh@20 -- # val=0x1 00:07:50.699 17:54:39 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.699 17:54:39 -- accel/accel.sh@19 -- # IFS=: 00:07:50.699 17:54:39 -- accel/accel.sh@19 -- # read -r var val 00:07:50.699 17:54:39 -- accel/accel.sh@20 -- # val= 00:07:50.699 17:54:39 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.699 17:54:39 -- accel/accel.sh@19 -- # IFS=: 00:07:50.699 17:54:39 -- accel/accel.sh@19 -- # read -r var val 00:07:50.699 17:54:39 -- accel/accel.sh@20 -- # val= 00:07:50.699 17:54:39 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.699 17:54:39 -- accel/accel.sh@19 -- # IFS=: 00:07:50.699 17:54:39 -- accel/accel.sh@19 -- # read -r var val 00:07:50.699 17:54:39 -- accel/accel.sh@20 -- # val=decompress 00:07:50.699 17:54:39 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.699 17:54:39 -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:50.699 17:54:39 -- accel/accel.sh@19 -- # IFS=: 00:07:50.699 17:54:39 -- accel/accel.sh@19 -- # read -r var val 00:07:50.699 17:54:39 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:50.699 17:54:39 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.699 17:54:39 -- accel/accel.sh@19 -- # IFS=: 00:07:50.699 17:54:39 -- accel/accel.sh@19 -- # read -r var val 00:07:50.699 17:54:39 -- accel/accel.sh@20 -- # val= 00:07:50.699 17:54:39 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.699 17:54:39 -- accel/accel.sh@19 -- # IFS=: 00:07:50.699 17:54:39 -- accel/accel.sh@19 -- # read -r var val 00:07:50.699 17:54:39 -- accel/accel.sh@20 -- # val=software 00:07:50.699 17:54:39 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.699 17:54:39 -- accel/accel.sh@22 -- # accel_module=software 00:07:50.699 17:54:39 -- accel/accel.sh@19 -- # IFS=: 00:07:50.699 17:54:39 -- accel/accel.sh@19 -- # read -r var val 00:07:50.699 17:54:39 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:50.699 17:54:39 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.699 17:54:39 -- accel/accel.sh@19 -- # IFS=: 00:07:50.699 17:54:39 -- accel/accel.sh@19 -- # read -r var val 00:07:50.699 17:54:39 -- accel/accel.sh@20 -- # val=32 00:07:50.699 17:54:39 -- 
accel/accel.sh@21 -- # case "$var" in 00:07:50.699 17:54:39 -- accel/accel.sh@19 -- # IFS=: 00:07:50.699 17:54:39 -- accel/accel.sh@19 -- # read -r var val 00:07:50.699 17:54:39 -- accel/accel.sh@20 -- # val=32 00:07:50.699 17:54:39 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.699 17:54:39 -- accel/accel.sh@19 -- # IFS=: 00:07:50.699 17:54:39 -- accel/accel.sh@19 -- # read -r var val 00:07:50.699 17:54:39 -- accel/accel.sh@20 -- # val=2 00:07:50.699 17:54:39 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.699 17:54:39 -- accel/accel.sh@19 -- # IFS=: 00:07:50.699 17:54:39 -- accel/accel.sh@19 -- # read -r var val 00:07:50.699 17:54:39 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:50.699 17:54:39 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.699 17:54:39 -- accel/accel.sh@19 -- # IFS=: 00:07:50.699 17:54:39 -- accel/accel.sh@19 -- # read -r var val 00:07:50.699 17:54:39 -- accel/accel.sh@20 -- # val=Yes 00:07:50.699 17:54:39 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.699 17:54:39 -- accel/accel.sh@19 -- # IFS=: 00:07:50.699 17:54:39 -- accel/accel.sh@19 -- # read -r var val 00:07:50.699 17:54:39 -- accel/accel.sh@20 -- # val= 00:07:50.699 17:54:39 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.699 17:54:39 -- accel/accel.sh@19 -- # IFS=: 00:07:50.699 17:54:39 -- accel/accel.sh@19 -- # read -r var val 00:07:50.699 17:54:39 -- accel/accel.sh@20 -- # val= 00:07:50.699 17:54:39 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.699 17:54:39 -- accel/accel.sh@19 -- # IFS=: 00:07:50.699 17:54:39 -- accel/accel.sh@19 -- # read -r var val 00:07:52.072 17:54:40 -- accel/accel.sh@20 -- # val= 00:07:52.072 17:54:40 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.072 17:54:40 -- accel/accel.sh@19 -- # IFS=: 00:07:52.072 17:54:40 -- accel/accel.sh@19 -- # read -r var val 00:07:52.072 17:54:40 -- accel/accel.sh@20 -- # val= 00:07:52.072 17:54:40 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.072 17:54:40 -- accel/accel.sh@19 -- # IFS=: 00:07:52.072 17:54:40 -- accel/accel.sh@19 -- # read -r var val 00:07:52.072 17:54:40 -- accel/accel.sh@20 -- # val= 00:07:52.072 17:54:40 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.072 17:54:40 -- accel/accel.sh@19 -- # IFS=: 00:07:52.072 17:54:40 -- accel/accel.sh@19 -- # read -r var val 00:07:52.072 17:54:40 -- accel/accel.sh@20 -- # val= 00:07:52.072 17:54:40 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.072 17:54:40 -- accel/accel.sh@19 -- # IFS=: 00:07:52.072 17:54:40 -- accel/accel.sh@19 -- # read -r var val 00:07:52.072 17:54:40 -- accel/accel.sh@20 -- # val= 00:07:52.072 17:54:40 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.072 17:54:40 -- accel/accel.sh@19 -- # IFS=: 00:07:52.072 17:54:40 -- accel/accel.sh@19 -- # read -r var val 00:07:52.072 17:54:40 -- accel/accel.sh@20 -- # val= 00:07:52.072 17:54:40 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.072 17:54:40 -- accel/accel.sh@19 -- # IFS=: 00:07:52.072 17:54:40 -- accel/accel.sh@19 -- # read -r var val 00:07:52.072 17:54:40 -- accel/accel.sh@20 -- # val= 00:07:52.072 17:54:40 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.072 17:54:40 -- accel/accel.sh@19 -- # IFS=: 00:07:52.072 17:54:40 -- accel/accel.sh@19 -- # read -r var val 00:07:52.072 17:54:40 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:52.072 17:54:40 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:52.072 17:54:40 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:52.072 00:07:52.072 real 0m1.429s 00:07:52.072 user 0m1.275s 00:07:52.072 sys 0m0.156s 00:07:52.072 17:54:40 -- 
common/autotest_common.sh@1112 -- # xtrace_disable 00:07:52.072 17:54:40 -- common/autotest_common.sh@10 -- # set +x 00:07:52.072 ************************************ 00:07:52.072 END TEST accel_decomp_mthread 00:07:52.072 ************************************ 00:07:52.072 17:54:40 -- accel/accel.sh@122 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:52.072 17:54:40 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:52.072 17:54:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:52.072 17:54:40 -- common/autotest_common.sh@10 -- # set +x 00:07:52.072 ************************************ 00:07:52.072 START TEST accel_deomp_full_mthread 00:07:52.072 ************************************ 00:07:52.072 17:54:40 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:52.072 17:54:40 -- accel/accel.sh@16 -- # local accel_opc 00:07:52.072 17:54:40 -- accel/accel.sh@17 -- # local accel_module 00:07:52.072 17:54:40 -- accel/accel.sh@19 -- # IFS=: 00:07:52.072 17:54:40 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:52.072 17:54:40 -- accel/accel.sh@19 -- # read -r var val 00:07:52.072 17:54:40 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:52.072 17:54:40 -- accel/accel.sh@12 -- # build_accel_config 00:07:52.072 17:54:40 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:52.072 17:54:40 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:52.072 17:54:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:52.072 17:54:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:52.072 17:54:40 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:52.073 17:54:40 -- accel/accel.sh@40 -- # local IFS=, 00:07:52.073 17:54:40 -- accel/accel.sh@41 -- # jq -r . 00:07:52.073 [2024-04-15 17:54:40.956729] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
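The last decompress variant, accel_deomp_full_mthread, combines the two knobs above: `-o 0 -T 2`, i.e. full-file transfers (val='111250 bytes' in the trace below) on two workers of a single core. Sketch (paths abbreviated):

    # full-buffer decompress on two threads of one core
    accel_perf -t 1 -w decompress -l ./test/accel/bib -y -o 0 -T 2

Its wall time (real 0m1.460s below) is marginally the longest of the software runs here, though all sit near the 1-second workload plus roughly 0.4s of setup.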
00:07:52.073 [2024-04-15 17:54:40.956794] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3213150 ] 00:07:52.073 EAL: No free 2048 kB hugepages reported on node 1 00:07:52.073 [2024-04-15 17:54:41.024258] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.331 [2024-04-15 17:54:41.118757] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.331 [2024-04-15 17:54:41.119470] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:07:52.331 17:54:41 -- accel/accel.sh@20 -- # val= 00:07:52.331 17:54:41 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.331 17:54:41 -- accel/accel.sh@19 -- # IFS=: 00:07:52.331 17:54:41 -- accel/accel.sh@19 -- # read -r var val 00:07:52.331 17:54:41 -- accel/accel.sh@20 -- # val= 00:07:52.331 17:54:41 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.331 17:54:41 -- accel/accel.sh@19 -- # IFS=: 00:07:52.331 17:54:41 -- accel/accel.sh@19 -- # read -r var val 00:07:52.331 17:54:41 -- accel/accel.sh@20 -- # val= 00:07:52.331 17:54:41 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.331 17:54:41 -- accel/accel.sh@19 -- # IFS=: 00:07:52.331 17:54:41 -- accel/accel.sh@19 -- # read -r var val 00:07:52.331 17:54:41 -- accel/accel.sh@20 -- # val=0x1 00:07:52.331 17:54:41 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.331 17:54:41 -- accel/accel.sh@19 -- # IFS=: 00:07:52.331 17:54:41 -- accel/accel.sh@19 -- # read -r var val 00:07:52.331 17:54:41 -- accel/accel.sh@20 -- # val= 00:07:52.331 17:54:41 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.331 17:54:41 -- accel/accel.sh@19 -- # IFS=: 00:07:52.331 17:54:41 -- accel/accel.sh@19 -- # read -r var val 00:07:52.331 17:54:41 -- accel/accel.sh@20 -- # val= 00:07:52.331 17:54:41 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.331 17:54:41 -- accel/accel.sh@19 -- # IFS=: 00:07:52.331 17:54:41 -- accel/accel.sh@19 -- # read -r var val 00:07:52.331 17:54:41 -- accel/accel.sh@20 -- # val=decompress 00:07:52.331 17:54:41 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.331 17:54:41 -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:52.331 17:54:41 -- accel/accel.sh@19 -- # IFS=: 00:07:52.331 17:54:41 -- accel/accel.sh@19 -- # read -r var val 00:07:52.331 17:54:41 -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:52.331 17:54:41 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.331 17:54:41 -- accel/accel.sh@19 -- # IFS=: 00:07:52.331 17:54:41 -- accel/accel.sh@19 -- # read -r var val 00:07:52.331 17:54:41 -- accel/accel.sh@20 -- # val= 00:07:52.331 17:54:41 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.331 17:54:41 -- accel/accel.sh@19 -- # IFS=: 00:07:52.331 17:54:41 -- accel/accel.sh@19 -- # read -r var val 00:07:52.331 17:54:41 -- accel/accel.sh@20 -- # val=software 00:07:52.331 17:54:41 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.331 17:54:41 -- accel/accel.sh@22 -- # accel_module=software 00:07:52.331 17:54:41 -- accel/accel.sh@19 -- # IFS=: 00:07:52.331 17:54:41 -- accel/accel.sh@19 -- # read -r var val 00:07:52.331 17:54:41 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:52.331 17:54:41 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.331 17:54:41 -- accel/accel.sh@19 -- # IFS=: 00:07:52.331 17:54:41 -- accel/accel.sh@19 -- # read -r var val 00:07:52.331 17:54:41 -- accel/accel.sh@20 -- # val=32 00:07:52.331 17:54:41 
-- accel/accel.sh@21 -- # case "$var" in 00:07:52.331 17:54:41 -- accel/accel.sh@19 -- # IFS=: 00:07:52.331 17:54:41 -- accel/accel.sh@19 -- # read -r var val 00:07:52.331 17:54:41 -- accel/accel.sh@20 -- # val=32 00:07:52.331 17:54:41 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.331 17:54:41 -- accel/accel.sh@19 -- # IFS=: 00:07:52.331 17:54:41 -- accel/accel.sh@19 -- # read -r var val 00:07:52.331 17:54:41 -- accel/accel.sh@20 -- # val=2 00:07:52.331 17:54:41 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.331 17:54:41 -- accel/accel.sh@19 -- # IFS=: 00:07:52.331 17:54:41 -- accel/accel.sh@19 -- # read -r var val 00:07:52.331 17:54:41 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:52.331 17:54:41 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.331 17:54:41 -- accel/accel.sh@19 -- # IFS=: 00:07:52.331 17:54:41 -- accel/accel.sh@19 -- # read -r var val 00:07:52.331 17:54:41 -- accel/accel.sh@20 -- # val=Yes 00:07:52.331 17:54:41 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.331 17:54:41 -- accel/accel.sh@19 -- # IFS=: 00:07:52.331 17:54:41 -- accel/accel.sh@19 -- # read -r var val 00:07:52.331 17:54:41 -- accel/accel.sh@20 -- # val= 00:07:52.331 17:54:41 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.331 17:54:41 -- accel/accel.sh@19 -- # IFS=: 00:07:52.331 17:54:41 -- accel/accel.sh@19 -- # read -r var val 00:07:52.331 17:54:41 -- accel/accel.sh@20 -- # val= 00:07:52.331 17:54:41 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.331 17:54:41 -- accel/accel.sh@19 -- # IFS=: 00:07:52.331 17:54:41 -- accel/accel.sh@19 -- # read -r var val 00:07:53.703 17:54:42 -- accel/accel.sh@20 -- # val= 00:07:53.703 17:54:42 -- accel/accel.sh@21 -- # case "$var" in 00:07:53.703 17:54:42 -- accel/accel.sh@19 -- # IFS=: 00:07:53.703 17:54:42 -- accel/accel.sh@19 -- # read -r var val 00:07:53.703 17:54:42 -- accel/accel.sh@20 -- # val= 00:07:53.703 17:54:42 -- accel/accel.sh@21 -- # case "$var" in 00:07:53.703 17:54:42 -- accel/accel.sh@19 -- # IFS=: 00:07:53.703 17:54:42 -- accel/accel.sh@19 -- # read -r var val 00:07:53.703 17:54:42 -- accel/accel.sh@20 -- # val= 00:07:53.703 17:54:42 -- accel/accel.sh@21 -- # case "$var" in 00:07:53.703 17:54:42 -- accel/accel.sh@19 -- # IFS=: 00:07:53.703 17:54:42 -- accel/accel.sh@19 -- # read -r var val 00:07:53.703 17:54:42 -- accel/accel.sh@20 -- # val= 00:07:53.703 17:54:42 -- accel/accel.sh@21 -- # case "$var" in 00:07:53.703 17:54:42 -- accel/accel.sh@19 -- # IFS=: 00:07:53.703 17:54:42 -- accel/accel.sh@19 -- # read -r var val 00:07:53.703 17:54:42 -- accel/accel.sh@20 -- # val= 00:07:53.703 17:54:42 -- accel/accel.sh@21 -- # case "$var" in 00:07:53.703 17:54:42 -- accel/accel.sh@19 -- # IFS=: 00:07:53.703 17:54:42 -- accel/accel.sh@19 -- # read -r var val 00:07:53.703 17:54:42 -- accel/accel.sh@20 -- # val= 00:07:53.703 17:54:42 -- accel/accel.sh@21 -- # case "$var" in 00:07:53.703 17:54:42 -- accel/accel.sh@19 -- # IFS=: 00:07:53.703 17:54:42 -- accel/accel.sh@19 -- # read -r var val 00:07:53.703 17:54:42 -- accel/accel.sh@20 -- # val= 00:07:53.703 17:54:42 -- accel/accel.sh@21 -- # case "$var" in 00:07:53.703 17:54:42 -- accel/accel.sh@19 -- # IFS=: 00:07:53.703 17:54:42 -- accel/accel.sh@19 -- # read -r var val 00:07:53.703 17:54:42 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:53.703 17:54:42 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:53.703 17:54:42 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:53.703 00:07:53.703 real 0m1.460s 00:07:53.703 user 0m1.316s 00:07:53.703 sys 0m0.146s 00:07:53.703 17:54:42 -- 
common/autotest_common.sh@1112 -- # xtrace_disable 00:07:53.703 17:54:42 -- common/autotest_common.sh@10 -- # set +x 00:07:53.703 ************************************ 00:07:53.703 END TEST accel_deomp_full_mthread 00:07:53.703 ************************************ 00:07:53.703 17:54:42 -- accel/accel.sh@124 -- # [[ n == y ]] 00:07:53.703 17:54:42 -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:53.703 17:54:42 -- accel/accel.sh@137 -- # build_accel_config 00:07:53.703 17:54:42 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:53.703 17:54:42 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:53.703 17:54:42 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:53.703 17:54:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:53.703 17:54:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:53.703 17:54:42 -- common/autotest_common.sh@10 -- # set +x 00:07:53.703 17:54:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:53.703 17:54:42 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:53.703 17:54:42 -- accel/accel.sh@40 -- # local IFS=, 00:07:53.703 17:54:42 -- accel/accel.sh@41 -- # jq -r . 00:07:53.703 ************************************ 00:07:53.703 START TEST accel_dif_functional_tests 00:07:53.703 ************************************ 00:07:53.703 17:54:42 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:53.703 [2024-04-15 17:54:42.557886] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:07:53.703 [2024-04-15 17:54:42.557960] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3213317 ] 00:07:53.703 EAL: No free 2048 kB hugepages reported on node 1 00:07:53.703 [2024-04-15 17:54:42.626334] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:53.960 [2024-04-15 17:54:42.725383] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:53.960 [2024-04-15 17:54:42.725438] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:53.960 [2024-04-15 17:54:42.725441] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.960 [2024-04-15 17:54:42.726178] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:07:53.960 00:07:53.960 00:07:53.960 CUnit - A unit testing framework for C - Version 2.1-3 00:07:53.960 http://cunit.sourceforge.net/ 00:07:53.960 00:07:53.960 00:07:53.960 Suite: accel_dif 00:07:53.961 Test: verify: DIF generated, GUARD check ...passed 00:07:53.961 Test: verify: DIF generated, APPTAG check ...passed 00:07:53.961 Test: verify: DIF generated, REFTAG check ...passed 00:07:53.961 Test: verify: DIF not generated, GUARD check ...[2024-04-15 17:54:42.821986] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:53.961 [2024-04-15 17:54:42.822053] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:53.961 passed 00:07:53.961 Test: verify: DIF not generated, APPTAG check ...[2024-04-15 17:54:42.822128] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:53.961 [2024-04-15 17:54:42.822169] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:53.961 
passed 00:07:53.961 Test: verify: DIF not generated, REFTAG check ...[2024-04-15 17:54:42.822206] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:53.961 [2024-04-15 17:54:42.822238] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:53.961 passed 00:07:53.961 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:53.961 Test: verify: APPTAG incorrect, APPTAG check ...[2024-04-15 17:54:42.822309] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:53.961 passed 00:07:53.961 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:53.961 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:53.961 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:53.961 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-04-15 17:54:42.822464] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:53.961 passed 00:07:53.961 Test: generate copy: DIF generated, GUARD check ...passed 00:07:53.961 Test: generate copy: DIF generated, APPTAG check ...passed 00:07:53.961 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:53.961 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:53.961 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:53.961 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:53.961 Test: generate copy: iovecs-len validate ...[2024-04-15 17:54:42.822718] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:07:53.961 passed 00:07:53.961 Test: generate copy: buffer alignment validate ...passed 00:07:53.961 00:07:53.961 Run Summary: Type Total Ran Passed Failed Inactive 00:07:53.961 suites 1 1 n/a 0 0 00:07:53.961 tests 20 20 20 0 0 00:07:53.961 asserts 204 204 204 0 n/a 00:07:53.961 00:07:53.961 Elapsed time = 0.003 seconds 00:07:54.223 00:07:54.223 real 0m0.529s 00:07:54.223 user 0m0.787s 00:07:54.223 sys 0m0.183s 00:07:54.223 17:54:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:54.223 17:54:43 -- common/autotest_common.sh@10 -- # set +x 00:07:54.223 ************************************ 00:07:54.223 END TEST accel_dif_functional_tests 00:07:54.223 ************************************ 00:07:54.223 00:07:54.223 real 0m34.564s 00:07:54.223 user 0m36.506s 00:07:54.223 sys 0m5.986s 00:07:54.223 17:54:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:54.223 17:54:43 -- common/autotest_common.sh@10 -- # set +x 00:07:54.223 ************************************ 00:07:54.223 END TEST accel 00:07:54.223 ************************************ 00:07:54.223 17:54:43 -- spdk/autotest.sh@180 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:54.223 17:54:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:54.223 17:54:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:54.223 17:54:43 -- common/autotest_common.sh@10 -- # set +x 00:07:54.526 ************************************ 00:07:54.526 START TEST accel_rpc 00:07:54.526 ************************************ 00:07:54.526 17:54:43 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:54.526 * Looking for test storage...
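The accel_rpc suite starting here drives opcode-to-module assignment over JSON-RPC against a target held in --wait-for-rpc mode: assignments are made first, the framework is initialized afterwards, and the result is read back. A minimal sketch of that flow, assuming the repo-relative build/bin and scripts paths used throughout this run and the default /var/tmp/spdk.sock socket:

build/bin/spdk_tgt --wait-for-rpc &
scripts/rpc.py rpc_get_methods > /dev/null              # block until the RPC server answers
scripts/rpc.py accel_assign_opc -o copy -m software     # must happen before framework init
scripts/rpc.py framework_start_init
scripts/rpc.py accel_get_opc_assignments | jq -r .copy  # expect: software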
00:07:54.526 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:54.526 17:54:43 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:54.526 17:54:43 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=3213513 00:07:54.526 17:54:43 -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:54.526 17:54:43 -- accel/accel_rpc.sh@15 -- # waitforlisten 3213513 00:07:54.526 17:54:43 -- common/autotest_common.sh@817 -- # '[' -z 3213513 ']' 00:07:54.526 17:54:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:54.526 17:54:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:54.526 17:54:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:54.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:54.526 17:54:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:54.526 17:54:43 -- common/autotest_common.sh@10 -- # set +x 00:07:54.526 [2024-04-15 17:54:43.371450] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:07:54.526 [2024-04-15 17:54:43.371653] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3213513 ] 00:07:54.526 EAL: No free 2048 kB hugepages reported on node 1 00:07:54.786 [2024-04-15 17:54:43.473539] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.786 [2024-04-15 17:54:43.570588] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.721 17:54:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:55.721 17:54:44 -- common/autotest_common.sh@850 -- # return 0 00:07:55.721 17:54:44 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:55.721 17:54:44 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:55.721 17:54:44 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:55.721 17:54:44 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:55.721 17:54:44 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:55.721 17:54:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:55.721 17:54:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:55.721 17:54:44 -- common/autotest_common.sh@10 -- # set +x 00:07:55.979 ************************************ 00:07:55.979 START TEST accel_assign_opcode 00:07:55.979 ************************************ 00:07:55.979 17:54:44 -- common/autotest_common.sh@1111 -- # accel_assign_opcode_test_suite 00:07:55.979 17:54:44 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:55.979 17:54:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:55.979 17:54:44 -- common/autotest_common.sh@10 -- # set +x 00:07:55.979 [2024-04-15 17:54:44.734081] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:55.979 17:54:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:55.979 17:54:44 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:55.979 17:54:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:55.979 17:54:44 -- common/autotest_common.sh@10 -- # set +x 00:07:55.979 [2024-04-15 17:54:44.742066] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: 
Operation copy will be assigned to module software 00:07:55.979 17:54:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:55.979 17:54:44 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:55.979 17:54:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:55.979 17:54:44 -- common/autotest_common.sh@10 -- # set +x 00:07:56.237 17:54:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:56.237 17:54:44 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:56.237 17:54:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:56.237 17:54:44 -- common/autotest_common.sh@10 -- # set +x 00:07:56.237 17:54:44 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:56.237 17:54:44 -- accel/accel_rpc.sh@42 -- # grep software 00:07:56.237 17:54:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:56.237 software 00:07:56.237 00:07:56.237 real 0m0.307s 00:07:56.237 user 0m0.048s 00:07:56.237 sys 0m0.004s 00:07:56.237 17:54:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:56.237 17:54:45 -- common/autotest_common.sh@10 -- # set +x 00:07:56.238 ************************************ 00:07:56.238 END TEST accel_assign_opcode 00:07:56.238 ************************************ 00:07:56.238 17:54:45 -- accel/accel_rpc.sh@55 -- # killprocess 3213513 00:07:56.238 17:54:45 -- common/autotest_common.sh@936 -- # '[' -z 3213513 ']' 00:07:56.238 17:54:45 -- common/autotest_common.sh@940 -- # kill -0 3213513 00:07:56.238 17:54:45 -- common/autotest_common.sh@941 -- # uname 00:07:56.238 17:54:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:56.238 17:54:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3213513 00:07:56.238 17:54:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:56.238 17:54:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:56.238 17:54:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3213513' 00:07:56.238 killing process with pid 3213513 00:07:56.238 17:54:45 -- common/autotest_common.sh@955 -- # kill 3213513 00:07:56.238 17:54:45 -- common/autotest_common.sh@960 -- # wait 3213513 00:07:56.805 00:07:56.805 real 0m2.322s 00:07:56.805 user 0m2.710s 00:07:56.805 sys 0m0.642s 00:07:56.805 17:54:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:56.805 17:54:45 -- common/autotest_common.sh@10 -- # set +x 00:07:56.805 ************************************ 00:07:56.805 END TEST accel_rpc 00:07:56.805 ************************************ 00:07:56.805 17:54:45 -- spdk/autotest.sh@181 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:56.805 17:54:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:56.805 17:54:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:56.805 17:54:45 -- common/autotest_common.sh@10 -- # set +x 00:07:56.805 ************************************ 00:07:56.805 START TEST app_cmdline 00:07:56.805 ************************************ 00:07:56.805 17:54:45 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:56.805 * Looking for test storage... 
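app_cmdline, starting here, verifies that a target launched with an RPC allowlist answers exactly the listed methods and rejects everything else. The same check, condensed to plain commands with the path assumptions above:

build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
scripts/rpc.py spdk_get_version | jq -r .version    # e.g. "SPDK v24.05-pre git sha1 26d44a121"
scripts/rpc.py env_dpdk_get_mem_stats               # not on the allowlist; fails with
                                                    # JSON-RPC error -32601 "Method not found"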
00:07:56.805 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:56.805 17:54:45 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:56.805 17:54:45 -- app/cmdline.sh@17 -- # spdk_tgt_pid=3213869 00:07:56.805 17:54:45 -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:56.805 17:54:45 -- app/cmdline.sh@18 -- # waitforlisten 3213869 00:07:56.805 17:54:45 -- common/autotest_common.sh@817 -- # '[' -z 3213869 ']' 00:07:56.805 17:54:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:56.805 17:54:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:56.805 17:54:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:56.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:56.805 17:54:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:56.805 17:54:45 -- common/autotest_common.sh@10 -- # set +x 00:07:57.063 [2024-04-15 17:54:45.774311] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:07:57.063 [2024-04-15 17:54:45.774399] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3213869 ] 00:07:57.063 EAL: No free 2048 kB hugepages reported on node 1 00:07:57.063 [2024-04-15 17:54:45.841828] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.063 [2024-04-15 17:54:45.934042] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.321 17:54:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:57.321 17:54:46 -- common/autotest_common.sh@850 -- # return 0 00:07:57.321 17:54:46 -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:57.580 { 00:07:57.581 "version": "SPDK v24.05-pre git sha1 26d44a121", 00:07:57.581 "fields": { 00:07:57.581 "major": 24, 00:07:57.581 "minor": 5, 00:07:57.581 "patch": 0, 00:07:57.581 "suffix": "-pre", 00:07:57.581 "commit": "26d44a121" 00:07:57.581 } 00:07:57.581 } 00:07:57.581 17:54:46 -- app/cmdline.sh@22 -- # expected_methods=() 00:07:57.581 17:54:46 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:57.581 17:54:46 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:57.581 17:54:46 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:57.581 17:54:46 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:57.581 17:54:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:57.581 17:54:46 -- common/autotest_common.sh@10 -- # set +x 00:07:57.581 17:54:46 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:57.581 17:54:46 -- app/cmdline.sh@26 -- # sort 00:07:57.581 17:54:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:57.581 17:54:46 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:57.581 17:54:46 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:57.581 17:54:46 -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:57.581 17:54:46 -- common/autotest_common.sh@638 -- # local es=0 00:07:57.581 17:54:46 -- 
common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:57.581 17:54:46 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:57.581 17:54:46 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:57.581 17:54:46 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:57.581 17:54:46 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:57.581 17:54:46 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:57.581 17:54:46 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:57.581 17:54:46 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:57.581 17:54:46 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:57.581 17:54:46 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:58.150 request: 00:07:58.150 { 00:07:58.150 "method": "env_dpdk_get_mem_stats", 00:07:58.150 "req_id": 1 00:07:58.150 } 00:07:58.150 Got JSON-RPC error response 00:07:58.150 response: 00:07:58.150 { 00:07:58.150 "code": -32601, 00:07:58.150 "message": "Method not found" 00:07:58.150 } 00:07:58.150 17:54:47 -- common/autotest_common.sh@641 -- # es=1 00:07:58.150 17:54:47 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:58.150 17:54:47 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:07:58.150 17:54:47 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:58.150 17:54:47 -- app/cmdline.sh@1 -- # killprocess 3213869 00:07:58.150 17:54:47 -- common/autotest_common.sh@936 -- # '[' -z 3213869 ']' 00:07:58.150 17:54:47 -- common/autotest_common.sh@940 -- # kill -0 3213869 00:07:58.150 17:54:47 -- common/autotest_common.sh@941 -- # uname 00:07:58.150 17:54:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:58.150 17:54:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3213869 00:07:58.150 17:54:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:58.150 17:54:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:58.150 17:54:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3213869' 00:07:58.150 killing process with pid 3213869 00:07:58.150 17:54:47 -- common/autotest_common.sh@955 -- # kill 3213869 00:07:58.150 17:54:47 -- common/autotest_common.sh@960 -- # wait 3213869 00:07:58.718 00:07:58.718 real 0m1.866s 00:07:58.718 user 0m2.529s 00:07:58.718 sys 0m0.538s 00:07:58.718 17:54:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:58.718 17:54:47 -- common/autotest_common.sh@10 -- # set +x 00:07:58.718 ************************************ 00:07:58.718 END TEST app_cmdline 00:07:58.718 ************************************ 00:07:58.718 17:54:47 -- spdk/autotest.sh@182 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:58.718 17:54:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:58.718 17:54:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:58.718 17:54:47 -- common/autotest_common.sh@10 -- # set +x 00:07:58.718 ************************************ 00:07:58.718 START TEST version 00:07:58.718 
************************************ 00:07:58.718 17:54:47 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:58.978 * Looking for test storage... 00:07:58.978 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:58.978 17:54:47 -- app/version.sh@17 -- # get_header_version major 00:07:58.978 17:54:47 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:58.978 17:54:47 -- app/version.sh@14 -- # cut -f2 00:07:58.978 17:54:47 -- app/version.sh@14 -- # tr -d '"' 00:07:58.978 17:54:47 -- app/version.sh@17 -- # major=24 00:07:58.978 17:54:47 -- app/version.sh@18 -- # get_header_version minor 00:07:58.978 17:54:47 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:58.978 17:54:47 -- app/version.sh@14 -- # cut -f2 00:07:58.978 17:54:47 -- app/version.sh@14 -- # tr -d '"' 00:07:58.978 17:54:47 -- app/version.sh@18 -- # minor=5 00:07:58.978 17:54:47 -- app/version.sh@19 -- # get_header_version patch 00:07:58.978 17:54:47 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:58.978 17:54:47 -- app/version.sh@14 -- # cut -f2 00:07:58.978 17:54:47 -- app/version.sh@14 -- # tr -d '"' 00:07:58.978 17:54:47 -- app/version.sh@19 -- # patch=0 00:07:58.978 17:54:47 -- app/version.sh@20 -- # get_header_version suffix 00:07:58.978 17:54:47 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:58.978 17:54:47 -- app/version.sh@14 -- # cut -f2 00:07:58.978 17:54:47 -- app/version.sh@14 -- # tr -d '"' 00:07:58.978 17:54:47 -- app/version.sh@20 -- # suffix=-pre 00:07:58.978 17:54:47 -- app/version.sh@22 -- # version=24.5 00:07:58.978 17:54:47 -- app/version.sh@25 -- # (( patch != 0 )) 00:07:58.978 17:54:47 -- app/version.sh@28 -- # version=24.5rc0 00:07:58.978 17:54:47 -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:58.978 17:54:47 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:58.978 17:54:47 -- app/version.sh@30 -- # py_version=24.5rc0 00:07:58.978 17:54:47 -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:07:58.978 00:07:58.978 real 0m0.162s 00:07:58.978 user 0m0.098s 00:07:58.978 sys 0m0.093s 00:07:58.978 17:54:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:58.978 17:54:47 -- common/autotest_common.sh@10 -- # set +x 00:07:58.978 ************************************ 00:07:58.978 END TEST version 00:07:58.978 ************************************ 00:07:58.978 17:54:47 -- spdk/autotest.sh@184 -- # '[' 0 -eq 1 ']' 00:07:58.978 17:54:47 -- spdk/autotest.sh@194 -- # uname -s 00:07:58.978 17:54:47 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:07:58.978 17:54:47 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:58.978 17:54:47 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:58.978 17:54:47 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:07:58.978 17:54:47 
-- spdk/autotest.sh@254 -- # '[' 0 -eq 1 ']' 00:07:58.978 17:54:47 -- spdk/autotest.sh@258 -- # timing_exit lib 00:07:58.978 17:54:47 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:58.978 17:54:47 -- common/autotest_common.sh@10 -- # set +x 00:07:58.978 17:54:47 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:07:58.978 17:54:47 -- spdk/autotest.sh@268 -- # '[' 0 -eq 1 ']' 00:07:58.978 17:54:47 -- spdk/autotest.sh@277 -- # '[' 1 -eq 1 ']' 00:07:58.978 17:54:47 -- spdk/autotest.sh@278 -- # export NET_TYPE 00:07:58.978 17:54:47 -- spdk/autotest.sh@281 -- # '[' tcp = rdma ']' 00:07:58.978 17:54:47 -- spdk/autotest.sh@284 -- # '[' tcp = tcp ']' 00:07:58.978 17:54:47 -- spdk/autotest.sh@285 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:58.978 17:54:47 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:58.978 17:54:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:58.978 17:54:47 -- common/autotest_common.sh@10 -- # set +x 00:07:59.238 ************************************ 00:07:59.238 START TEST nvmf_tcp 00:07:59.238 ************************************ 00:07:59.238 17:54:48 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:59.238 * Looking for test storage... 00:07:59.238 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:59.238 17:54:48 -- nvmf/nvmf.sh@10 -- # uname -s 00:07:59.238 17:54:48 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:59.238 17:54:48 -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:59.238 17:54:48 -- nvmf/common.sh@7 -- # uname -s 00:07:59.238 17:54:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:59.238 17:54:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:59.238 17:54:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:59.238 17:54:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:59.238 17:54:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:59.238 17:54:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:59.238 17:54:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:59.238 17:54:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:59.238 17:54:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:59.238 17:54:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:59.238 17:54:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:07:59.238 17:54:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:07:59.238 17:54:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:59.238 17:54:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:59.238 17:54:48 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:59.238 17:54:48 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:59.238 17:54:48 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:59.238 17:54:48 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:59.238 17:54:48 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:59.238 17:54:48 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:59.238 17:54:48 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.238 17:54:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.238 17:54:48 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.238 17:54:48 -- paths/export.sh@5 -- # export PATH 00:07:59.238 17:54:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.238 17:54:48 -- nvmf/common.sh@47 -- # : 0 00:07:59.238 17:54:48 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:59.238 17:54:48 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:59.238 17:54:48 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:59.238 17:54:48 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:59.238 17:54:48 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:59.238 17:54:48 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:59.238 17:54:48 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:59.238 17:54:48 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:59.238 17:54:48 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:59.238 17:54:48 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:59.238 17:54:48 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:59.238 17:54:48 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:59.238 17:54:48 -- common/autotest_common.sh@10 -- # set +x 00:07:59.238 17:54:48 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:59.238 17:54:48 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:59.238 17:54:48 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:59.238 17:54:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:59.238 17:54:48 -- common/autotest_common.sh@10 -- # set +x 00:07:59.497 ************************************ 00:07:59.497 START TEST nvmf_example 00:07:59.497 ************************************ 00:07:59.497 17:54:48 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:59.497 * Looking for test storage... 
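Every suite in this log runs under the run_test wrapper just invoked above, which prints the START TEST/END TEST banners and the real/user/sys timing blocks that bracket each test. A rough equivalent of the wrapper, reduced to only the behavior visible in this output:

run_test() {
  local name=$1; shift
  echo "START TEST $name"
  time "$@"                    # emits the real/user/sys lines seen after each suite
  echo "END TEST $name"
}
run_test nvmf_example test/nvmf/target/nvmf_example.sh --transport=tcp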
00:07:59.497 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:59.497 17:54:48 -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:59.497 17:54:48 -- nvmf/common.sh@7 -- # uname -s 00:07:59.497 17:54:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:59.497 17:54:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:59.497 17:54:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:59.497 17:54:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:59.497 17:54:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:59.497 17:54:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:59.497 17:54:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:59.497 17:54:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:59.497 17:54:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:59.497 17:54:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:59.497 17:54:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:07:59.497 17:54:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:07:59.497 17:54:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:59.497 17:54:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:59.497 17:54:48 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:59.497 17:54:48 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:59.498 17:54:48 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:59.498 17:54:48 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:59.498 17:54:48 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:59.498 17:54:48 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:59.498 17:54:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.498 17:54:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.498 17:54:48 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.498 17:54:48 -- paths/export.sh@5 -- # export PATH 00:07:59.498 17:54:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.498 17:54:48 -- nvmf/common.sh@47 -- # : 0 00:07:59.498 17:54:48 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:59.498 17:54:48 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:59.498 17:54:48 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:59.498 17:54:48 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:59.498 17:54:48 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:59.498 17:54:48 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:59.498 17:54:48 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:59.498 17:54:48 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:59.498 17:54:48 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:59.498 17:54:48 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:59.498 17:54:48 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:59.498 17:54:48 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:59.498 17:54:48 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:59.498 17:54:48 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:59.498 17:54:48 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:59.498 17:54:48 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:59.498 17:54:48 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:59.498 17:54:48 -- common/autotest_common.sh@10 -- # set +x 00:07:59.498 17:54:48 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:59.498 17:54:48 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:07:59.498 17:54:48 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:59.498 17:54:48 -- nvmf/common.sh@437 -- # prepare_net_devs 00:07:59.498 17:54:48 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:07:59.498 17:54:48 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:07:59.498 17:54:48 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:59.498 17:54:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:59.498 17:54:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:59.498 17:54:48 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:07:59.498 17:54:48 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:07:59.498 17:54:48 -- nvmf/common.sh@285 -- # xtrace_disable 00:07:59.498 17:54:48 -- 
common/autotest_common.sh@10 -- # set +x 00:08:02.029 17:54:50 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:02.029 17:54:50 -- nvmf/common.sh@291 -- # pci_devs=() 00:08:02.029 17:54:50 -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:02.029 17:54:50 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:02.029 17:54:50 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:02.029 17:54:50 -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:02.029 17:54:50 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:02.029 17:54:50 -- nvmf/common.sh@295 -- # net_devs=() 00:08:02.029 17:54:50 -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:02.029 17:54:50 -- nvmf/common.sh@296 -- # e810=() 00:08:02.029 17:54:50 -- nvmf/common.sh@296 -- # local -ga e810 00:08:02.029 17:54:50 -- nvmf/common.sh@297 -- # x722=() 00:08:02.029 17:54:50 -- nvmf/common.sh@297 -- # local -ga x722 00:08:02.029 17:54:50 -- nvmf/common.sh@298 -- # mlx=() 00:08:02.029 17:54:50 -- nvmf/common.sh@298 -- # local -ga mlx 00:08:02.029 17:54:50 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:02.029 17:54:50 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:02.029 17:54:50 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:02.029 17:54:50 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:02.029 17:54:50 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:02.029 17:54:50 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:02.029 17:54:50 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:02.029 17:54:50 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:02.029 17:54:50 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:02.029 17:54:50 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:02.029 17:54:50 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:02.029 17:54:50 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:02.029 17:54:50 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:02.029 17:54:50 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:02.029 17:54:50 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:02.029 17:54:50 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:02.029 17:54:50 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:02.029 17:54:50 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:02.029 17:54:50 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:08:02.029 Found 0000:84:00.0 (0x8086 - 0x159b) 00:08:02.029 17:54:50 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:02.029 17:54:50 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:02.029 17:54:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:02.029 17:54:50 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:02.029 17:54:50 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:02.029 17:54:50 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:02.029 17:54:50 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:08:02.029 Found 0000:84:00.1 (0x8086 - 0x159b) 00:08:02.029 17:54:50 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:02.029 17:54:50 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:02.029 17:54:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:02.029 17:54:50 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
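The discovery loop above matches each PCI function against known vendor:device id lists (0x8086:0x159b is an Intel E810-family part driven by ice) and resolves every hit to its kernel net device through sysfs, producing the "Found net devices under ..." lines that follow. The same lookup done by hand, as a sketch:

# List net devices backed by Intel E810 (8086:159b) functions.
for pci in $(lspci -Dn | awk '$3 == "8086:159b" {print $1}'); do
  echo "Found net devices under $pci: $(ls /sys/bus/pci/devices/$pci/net/)"
done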
00:08:02.029 17:54:50 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:02.029 17:54:50 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:02.029 17:54:50 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:02.029 17:54:50 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:02.029 17:54:50 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:02.029 17:54:50 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:02.029 17:54:50 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:02.029 17:54:50 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:02.029 17:54:50 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:08:02.029 Found net devices under 0000:84:00.0: cvl_0_0 00:08:02.029 17:54:50 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:02.029 17:54:50 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:02.029 17:54:50 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:02.029 17:54:50 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:02.029 17:54:50 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:02.029 17:54:50 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:08:02.029 Found net devices under 0000:84:00.1: cvl_0_1 00:08:02.029 17:54:50 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:02.029 17:54:50 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:08:02.029 17:54:50 -- nvmf/common.sh@403 -- # is_hw=yes 00:08:02.029 17:54:50 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:08:02.029 17:54:50 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:08:02.029 17:54:50 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:08:02.029 17:54:50 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:02.029 17:54:50 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:02.029 17:54:50 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:02.029 17:54:50 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:02.029 17:54:50 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:02.029 17:54:50 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:02.029 17:54:50 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:02.029 17:54:50 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:02.029 17:54:50 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:02.029 17:54:50 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:02.029 17:54:50 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:02.029 17:54:50 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:02.029 17:54:50 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:02.029 17:54:50 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:02.029 17:54:50 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:02.029 17:54:50 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:02.029 17:54:50 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:02.029 17:54:50 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:02.029 17:54:50 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:02.029 17:54:50 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:02.029 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:02.029 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:08:02.029 00:08:02.030 --- 10.0.0.2 ping statistics --- 00:08:02.030 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:02.030 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:08:02.030 17:54:50 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:02.030 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:02.030 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:08:02.030 00:08:02.030 --- 10.0.0.1 ping statistics --- 00:08:02.030 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:02.030 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:08:02.030 17:54:50 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:02.030 17:54:50 -- nvmf/common.sh@411 -- # return 0 00:08:02.030 17:54:50 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:02.030 17:54:50 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:02.030 17:54:50 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:08:02.030 17:54:50 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:08:02.030 17:54:50 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:02.030 17:54:50 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:08:02.030 17:54:50 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:08:02.030 17:54:50 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:08:02.030 17:54:50 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:08:02.030 17:54:50 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:02.030 17:54:50 -- common/autotest_common.sh@10 -- # set +x 00:08:02.030 17:54:50 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:08:02.030 17:54:50 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:08:02.030 17:54:50 -- target/nvmf_example.sh@34 -- # nvmfpid=3215937 00:08:02.030 17:54:50 -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:08:02.030 17:54:50 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:02.030 17:54:50 -- target/nvmf_example.sh@36 -- # waitforlisten 3215937 00:08:02.030 17:54:50 -- common/autotest_common.sh@817 -- # '[' -z 3215937 ']' 00:08:02.030 17:54:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:02.030 17:54:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:02.030 17:54:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:02.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
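Once the example target is up, the suite provisions it and then drives load: a TCP transport, a 64 MiB malloc bdev, a subsystem exposing that bdev as a namespace, a listener on the namespaced address, and finally a ten-second randrw run from spdk_nvme_perf. The same steps collected as explicit rpc.py calls (repo-relative paths assumed; the traced run uses the equivalent rpc_cmd helper):

rpc=scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512                     # 64 MiB bdev, 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
  -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'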
00:08:02.030 17:54:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:02.030 17:54:50 -- common/autotest_common.sh@10 -- # set +x 00:08:02.030 EAL: No free 2048 kB hugepages reported on node 1 00:08:02.288 17:54:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:02.288 17:54:51 -- common/autotest_common.sh@850 -- # return 0 00:08:02.288 17:54:51 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:08:02.288 17:54:51 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:02.288 17:54:51 -- common/autotest_common.sh@10 -- # set +x 00:08:02.288 17:54:51 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:02.288 17:54:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:02.288 17:54:51 -- common/autotest_common.sh@10 -- # set +x 00:08:02.288 17:54:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:02.288 17:54:51 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:08:02.288 17:54:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:02.288 17:54:51 -- common/autotest_common.sh@10 -- # set +x 00:08:02.288 17:54:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:02.288 17:54:51 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:08:02.288 17:54:51 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:02.288 17:54:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:02.288 17:54:51 -- common/autotest_common.sh@10 -- # set +x 00:08:02.288 17:54:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:02.288 17:54:51 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:08:02.288 17:54:51 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:02.288 17:54:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:02.288 17:54:51 -- common/autotest_common.sh@10 -- # set +x 00:08:02.288 17:54:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:02.288 17:54:51 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:02.288 17:54:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:02.288 17:54:51 -- common/autotest_common.sh@10 -- # set +x 00:08:02.288 17:54:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:02.288 17:54:51 -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:08:02.288 17:54:51 -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:08:02.288 EAL: No free 2048 kB hugepages reported on node 1 00:08:14.483 Initializing NVMe Controllers 00:08:14.483 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:14.483 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:14.483 Initialization complete. Launching workers. 
00:08:14.483 ======================================================== 00:08:14.483 Latency(us) 00:08:14.483 Device Information : IOPS MiB/s Average min max 00:08:14.483 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14876.31 58.11 4302.18 887.51 17457.28 00:08:14.483 ======================================================== 00:08:14.483 Total : 14876.31 58.11 4302.18 887.51 17457.28 00:08:14.483 00:08:14.483 17:55:01 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:08:14.483 17:55:01 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:08:14.483 17:55:01 -- nvmf/common.sh@477 -- # nvmfcleanup 00:08:14.483 17:55:01 -- nvmf/common.sh@117 -- # sync 00:08:14.483 17:55:01 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:14.483 17:55:01 -- nvmf/common.sh@120 -- # set +e 00:08:14.483 17:55:01 -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:14.483 17:55:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:14.483 rmmod nvme_tcp 00:08:14.483 rmmod nvme_fabrics 00:08:14.483 rmmod nvme_keyring 00:08:14.483 17:55:01 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:14.483 17:55:01 -- nvmf/common.sh@124 -- # set -e 00:08:14.483 17:55:01 -- nvmf/common.sh@125 -- # return 0 00:08:14.483 17:55:01 -- nvmf/common.sh@478 -- # '[' -n 3215937 ']' 00:08:14.483 17:55:01 -- nvmf/common.sh@479 -- # killprocess 3215937 00:08:14.483 17:55:01 -- common/autotest_common.sh@936 -- # '[' -z 3215937 ']' 00:08:14.483 17:55:01 -- common/autotest_common.sh@940 -- # kill -0 3215937 00:08:14.483 17:55:01 -- common/autotest_common.sh@941 -- # uname 00:08:14.483 17:55:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:14.483 17:55:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3215937 00:08:14.483 17:55:01 -- common/autotest_common.sh@942 -- # process_name=nvmf 00:08:14.483 17:55:01 -- common/autotest_common.sh@946 -- # '[' nvmf = sudo ']' 00:08:14.483 17:55:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3215937' 00:08:14.483 killing process with pid 3215937 00:08:14.483 17:55:01 -- common/autotest_common.sh@955 -- # kill 3215937 00:08:14.483 17:55:01 -- common/autotest_common.sh@960 -- # wait 3215937 00:08:14.483 nvmf threads initialize successfully 00:08:14.483 bdev subsystem init successfully 00:08:14.483 created a nvmf target service 00:08:14.483 create target's poll groups done 00:08:14.483 all subsystems of target started 00:08:14.483 nvmf target is running 00:08:14.483 all subsystems of target stopped 00:08:14.483 destroy target's poll groups done 00:08:14.483 destroyed the nvmf target service 00:08:14.483 bdev subsystem finish successfully 00:08:14.483 nvmf threads destroy successfully 00:08:14.483 17:55:01 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:08:14.483 17:55:01 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:08:14.483 17:55:01 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:08:14.483 17:55:01 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:14.483 17:55:01 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:14.483 17:55:01 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:14.483 17:55:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:14.483 17:55:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:15.048 17:55:03 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:15.048 17:55:03 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:08:15.048 17:55:03 --
common/autotest_common.sh@716 -- # xtrace_disable 00:08:15.048 17:55:03 -- common/autotest_common.sh@10 -- # set +x 00:08:15.048 00:08:15.048 real 0m15.655s 00:08:15.048 user 0m42.265s 00:08:15.048 sys 0m3.857s 00:08:15.048 17:55:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:15.048 17:55:03 -- common/autotest_common.sh@10 -- # set +x 00:08:15.048 ************************************ 00:08:15.048 END TEST nvmf_example 00:08:15.049 ************************************ 00:08:15.049 17:55:03 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:08:15.049 17:55:03 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:15.049 17:55:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:15.049 17:55:03 -- common/autotest_common.sh@10 -- # set +x 00:08:15.307 ************************************ 00:08:15.307 START TEST nvmf_filesystem 00:08:15.307 ************************************ 00:08:15.307 17:55:04 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:08:15.307 * Looking for test storage... 00:08:15.307 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:15.307 17:55:04 -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:08:15.307 17:55:04 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:08:15.307 17:55:04 -- common/autotest_common.sh@34 -- # set -e 00:08:15.307 17:55:04 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:08:15.307 17:55:04 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:08:15.307 17:55:04 -- common/autotest_common.sh@38 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:08:15.307 17:55:04 -- common/autotest_common.sh@43 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:08:15.307 17:55:04 -- common/autotest_common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:08:15.307 17:55:04 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:08:15.307 17:55:04 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:08:15.307 17:55:04 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:08:15.307 17:55:04 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:08:15.307 17:55:04 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:08:15.307 17:55:04 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:08:15.307 17:55:04 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:08:15.307 17:55:04 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:08:15.307 17:55:04 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:08:15.307 17:55:04 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:08:15.307 17:55:04 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:08:15.307 17:55:04 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:08:15.307 17:55:04 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:08:15.307 17:55:04 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:08:15.307 17:55:04 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:15.307 17:55:04 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:08:15.307 17:55:04 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:08:15.307 17:55:04 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:15.307 17:55:04 -- 
common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:08:15.307 17:55:04 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:08:15.307 17:55:04 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:08:15.307 17:55:04 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:08:15.307 17:55:04 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:15.307 17:55:04 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:08:15.307 17:55:04 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:08:15.307 17:55:04 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:08:15.307 17:55:04 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:08:15.307 17:55:04 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:08:15.307 17:55:04 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:08:15.308 17:55:04 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:08:15.308 17:55:04 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:08:15.308 17:55:04 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:08:15.308 17:55:04 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:08:15.308 17:55:04 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:08:15.308 17:55:04 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:08:15.308 17:55:04 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:08:15.308 17:55:04 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:08:15.308 17:55:04 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:08:15.308 17:55:04 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:08:15.308 17:55:04 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:08:15.308 17:55:04 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:08:15.308 17:55:04 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:08:15.308 17:55:04 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:08:15.308 17:55:04 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:15.308 17:55:04 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:08:15.308 17:55:04 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:08:15.308 17:55:04 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:08:15.308 17:55:04 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:15.308 17:55:04 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:08:15.308 17:55:04 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:08:15.308 17:55:04 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=y 00:08:15.308 17:55:04 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:08:15.308 17:55:04 -- common/build_config.sh@53 -- # CONFIG_HAVE_EVP_MAC=y 00:08:15.308 17:55:04 -- common/build_config.sh@54 -- # CONFIG_URING_ZNS=n 00:08:15.308 17:55:04 -- common/build_config.sh@55 -- # CONFIG_WERROR=y 00:08:15.308 17:55:04 -- common/build_config.sh@56 -- # CONFIG_HAVE_LIBBSD=n 00:08:15.308 17:55:04 -- common/build_config.sh@57 -- # CONFIG_UBSAN=y 00:08:15.308 17:55:04 -- common/build_config.sh@58 -- # CONFIG_IPSEC_MB_DIR= 00:08:15.308 17:55:04 -- common/build_config.sh@59 -- # CONFIG_GOLANG=n 00:08:15.308 17:55:04 -- common/build_config.sh@60 -- # CONFIG_ISAL=y 00:08:15.308 17:55:04 -- common/build_config.sh@61 -- # CONFIG_IDXD_KERNEL=n 00:08:15.308 17:55:04 -- common/build_config.sh@62 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:08:15.308 17:55:04 -- common/build_config.sh@63 
-- # CONFIG_RDMA_PROV=verbs 00:08:15.308 17:55:04 -- common/build_config.sh@64 -- # CONFIG_APPS=y 00:08:15.308 17:55:04 -- common/build_config.sh@65 -- # CONFIG_SHARED=y 00:08:15.308 17:55:04 -- common/build_config.sh@66 -- # CONFIG_HAVE_KEYUTILS=n 00:08:15.308 17:55:04 -- common/build_config.sh@67 -- # CONFIG_FC_PATH= 00:08:15.308 17:55:04 -- common/build_config.sh@68 -- # CONFIG_DPDK_PKG_CONFIG=n 00:08:15.308 17:55:04 -- common/build_config.sh@69 -- # CONFIG_FC=n 00:08:15.308 17:55:04 -- common/build_config.sh@70 -- # CONFIG_AVAHI=n 00:08:15.308 17:55:04 -- common/build_config.sh@71 -- # CONFIG_FIO_PLUGIN=y 00:08:15.308 17:55:04 -- common/build_config.sh@72 -- # CONFIG_RAID5F=n 00:08:15.308 17:55:04 -- common/build_config.sh@73 -- # CONFIG_EXAMPLES=y 00:08:15.308 17:55:04 -- common/build_config.sh@74 -- # CONFIG_TESTS=y 00:08:15.308 17:55:04 -- common/build_config.sh@75 -- # CONFIG_CRYPTO_MLX5=n 00:08:15.308 17:55:04 -- common/build_config.sh@76 -- # CONFIG_MAX_LCORES= 00:08:15.308 17:55:04 -- common/build_config.sh@77 -- # CONFIG_IPSEC_MB=n 00:08:15.308 17:55:04 -- common/build_config.sh@78 -- # CONFIG_PGO_DIR= 00:08:15.308 17:55:04 -- common/build_config.sh@79 -- # CONFIG_DEBUG=y 00:08:15.308 17:55:04 -- common/build_config.sh@80 -- # CONFIG_DPDK_COMPRESSDEV=n 00:08:15.308 17:55:04 -- common/build_config.sh@81 -- # CONFIG_CROSS_PREFIX= 00:08:15.308 17:55:04 -- common/build_config.sh@82 -- # CONFIG_URING=n 00:08:15.308 17:55:04 -- common/autotest_common.sh@53 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:08:15.308 17:55:04 -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:08:15.308 17:55:04 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:08:15.308 17:55:04 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:08:15.308 17:55:04 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:15.308 17:55:04 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:15.308 17:55:04 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:08:15.308 17:55:04 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:15.308 17:55:04 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:08:15.308 17:55:04 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:08:15.308 17:55:04 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:08:15.308 17:55:04 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:08:15.308 17:55:04 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:08:15.308 17:55:04 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:08:15.308 17:55:04 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:08:15.308 17:55:04 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:08:15.308 #define SPDK_CONFIG_H 00:08:15.308 #define SPDK_CONFIG_APPS 1 00:08:15.308 #define SPDK_CONFIG_ARCH native 00:08:15.308 #undef SPDK_CONFIG_ASAN 00:08:15.308 #undef SPDK_CONFIG_AVAHI 00:08:15.308 #undef SPDK_CONFIG_CET 00:08:15.308 #define SPDK_CONFIG_COVERAGE 1 00:08:15.308 #define 
SPDK_CONFIG_CROSS_PREFIX 00:08:15.308 #undef SPDK_CONFIG_CRYPTO 00:08:15.308 #undef SPDK_CONFIG_CRYPTO_MLX5 00:08:15.308 #undef SPDK_CONFIG_CUSTOMOCF 00:08:15.308 #undef SPDK_CONFIG_DAOS 00:08:15.308 #define SPDK_CONFIG_DAOS_DIR 00:08:15.308 #define SPDK_CONFIG_DEBUG 1 00:08:15.308 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:08:15.308 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:08:15.308 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:08:15.308 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:08:15.308 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:08:15.308 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:08:15.308 #define SPDK_CONFIG_EXAMPLES 1 00:08:15.308 #undef SPDK_CONFIG_FC 00:08:15.308 #define SPDK_CONFIG_FC_PATH 00:08:15.308 #define SPDK_CONFIG_FIO_PLUGIN 1 00:08:15.308 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:08:15.308 #undef SPDK_CONFIG_FUSE 00:08:15.308 #undef SPDK_CONFIG_FUZZER 00:08:15.308 #define SPDK_CONFIG_FUZZER_LIB 00:08:15.308 #undef SPDK_CONFIG_GOLANG 00:08:15.308 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:08:15.308 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:08:15.308 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:08:15.308 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:08:15.308 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:08:15.308 #undef SPDK_CONFIG_HAVE_LIBBSD 00:08:15.308 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:08:15.308 #define SPDK_CONFIG_IDXD 1 00:08:15.308 #undef SPDK_CONFIG_IDXD_KERNEL 00:08:15.308 #undef SPDK_CONFIG_IPSEC_MB 00:08:15.308 #define SPDK_CONFIG_IPSEC_MB_DIR 00:08:15.308 #define SPDK_CONFIG_ISAL 1 00:08:15.308 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:08:15.308 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:08:15.308 #define SPDK_CONFIG_LIBDIR 00:08:15.308 #undef SPDK_CONFIG_LTO 00:08:15.308 #define SPDK_CONFIG_MAX_LCORES 00:08:15.308 #define SPDK_CONFIG_NVME_CUSE 1 00:08:15.308 #undef SPDK_CONFIG_OCF 00:08:15.308 #define SPDK_CONFIG_OCF_PATH 00:08:15.308 #define SPDK_CONFIG_OPENSSL_PATH 00:08:15.308 #undef SPDK_CONFIG_PGO_CAPTURE 00:08:15.308 #define SPDK_CONFIG_PGO_DIR 00:08:15.308 #undef SPDK_CONFIG_PGO_USE 00:08:15.308 #define SPDK_CONFIG_PREFIX /usr/local 00:08:15.308 #undef SPDK_CONFIG_RAID5F 00:08:15.308 #undef SPDK_CONFIG_RBD 00:08:15.308 #define SPDK_CONFIG_RDMA 1 00:08:15.308 #define SPDK_CONFIG_RDMA_PROV verbs 00:08:15.308 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:08:15.308 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:08:15.308 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:08:15.308 #define SPDK_CONFIG_SHARED 1 00:08:15.308 #undef SPDK_CONFIG_SMA 00:08:15.308 #define SPDK_CONFIG_TESTS 1 00:08:15.308 #undef SPDK_CONFIG_TSAN 00:08:15.308 #define SPDK_CONFIG_UBLK 1 00:08:15.308 #define SPDK_CONFIG_UBSAN 1 00:08:15.308 #undef SPDK_CONFIG_UNIT_TESTS 00:08:15.308 #undef SPDK_CONFIG_URING 00:08:15.308 #define SPDK_CONFIG_URING_PATH 00:08:15.308 #undef SPDK_CONFIG_URING_ZNS 00:08:15.308 #undef SPDK_CONFIG_USDT 00:08:15.308 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:08:15.308 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:08:15.308 #define SPDK_CONFIG_VFIO_USER 1 00:08:15.308 #define SPDK_CONFIG_VFIO_USER_DIR 00:08:15.308 #define SPDK_CONFIG_VHOST 1 00:08:15.308 #define SPDK_CONFIG_VIRTIO 1 00:08:15.308 #undef SPDK_CONFIG_VTUNE 00:08:15.308 #define SPDK_CONFIG_VTUNE_DIR 00:08:15.308 #define SPDK_CONFIG_WERROR 1 00:08:15.308 #define SPDK_CONFIG_WPDK_DIR 00:08:15.308 #undef SPDK_CONFIG_XNVME 00:08:15.308 #endif /* 
SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:08:15.308 17:55:04 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:08:15.308 17:55:04 -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:15.308 17:55:04 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:15.308 17:55:04 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:15.308 17:55:04 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:15.308 17:55:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.308 17:55:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.308 17:55:04 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.308 17:55:04 -- paths/export.sh@5 -- # export PATH 00:08:15.308 17:55:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.308 17:55:04 -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:08:15.308 17:55:04 -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:08:15.308 17:55:04 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:08:15.308 17:55:04 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:08:15.308 17:55:04 -- pm/common@7 -- # readlink -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:08:15.308 17:55:04 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:15.308 17:55:04 -- pm/common@67 -- # TEST_TAG=N/A 00:08:15.308 17:55:04 -- pm/common@68 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:08:15.308 17:55:04 -- pm/common@70 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:08:15.308 17:55:04 -- pm/common@71 -- # uname -s 00:08:15.308 17:55:04 -- pm/common@71 -- # PM_OS=Linux 00:08:15.308 17:55:04 -- pm/common@73 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:08:15.308 17:55:04 -- pm/common@74 -- # [[ Linux == FreeBSD ]] 00:08:15.308 17:55:04 -- pm/common@76 -- # [[ Linux == Linux ]] 00:08:15.308 17:55:04 -- pm/common@76 -- # [[ ............................... != QEMU ]] 00:08:15.308 17:55:04 -- pm/common@76 -- # [[ ! -e /.dockerenv ]] 00:08:15.308 17:55:04 -- pm/common@79 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:08:15.308 17:55:04 -- pm/common@80 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:08:15.308 17:55:04 -- pm/common@83 -- # MONITOR_RESOURCES_PIDS=() 00:08:15.308 17:55:04 -- pm/common@83 -- # declare -A MONITOR_RESOURCES_PIDS 00:08:15.308 17:55:04 -- pm/common@85 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:08:15.308 17:55:04 -- common/autotest_common.sh@57 -- # : 1 00:08:15.308 17:55:04 -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:08:15.308 17:55:04 -- common/autotest_common.sh@61 -- # : 0 00:08:15.308 17:55:04 -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:08:15.308 17:55:04 -- common/autotest_common.sh@63 -- # : 0 00:08:15.308 17:55:04 -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:08:15.308 17:55:04 -- common/autotest_common.sh@65 -- # : 1 00:08:15.308 17:55:04 -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:08:15.308 17:55:04 -- common/autotest_common.sh@67 -- # : 0 00:08:15.308 17:55:04 -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:08:15.308 17:55:04 -- common/autotest_common.sh@69 -- # : 00:08:15.308 17:55:04 -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:08:15.308 17:55:04 -- common/autotest_common.sh@71 -- # : 0 00:08:15.308 17:55:04 -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:08:15.308 17:55:04 -- common/autotest_common.sh@73 -- # : 0 00:08:15.308 17:55:04 -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:08:15.308 17:55:04 -- common/autotest_common.sh@75 -- # : 0 00:08:15.308 17:55:04 -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:08:15.308 17:55:04 -- common/autotest_common.sh@77 -- # : 0 00:08:15.308 17:55:04 -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:08:15.308 17:55:04 -- common/autotest_common.sh@79 -- # : 0 00:08:15.308 17:55:04 -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:08:15.308 17:55:04 -- common/autotest_common.sh@81 -- # : 0 00:08:15.308 17:55:04 -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:08:15.308 17:55:04 -- common/autotest_common.sh@83 -- # : 0 00:08:15.308 17:55:04 -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:08:15.309 17:55:04 -- common/autotest_common.sh@85 -- # : 1 00:08:15.309 17:55:04 -- common/autotest_common.sh@86 -- # export SPDK_TEST_NVME_CLI 00:08:15.309 17:55:04 -- common/autotest_common.sh@87 -- # : 0 00:08:15.309 17:55:04 -- 
common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:08:15.309 17:55:04 -- common/autotest_common.sh@89 -- # : 0 00:08:15.309 17:55:04 -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:08:15.309 17:55:04 -- common/autotest_common.sh@91 -- # : 1 00:08:15.309 17:55:04 -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:08:15.309 17:55:04 -- common/autotest_common.sh@93 -- # : 1 00:08:15.309 17:55:04 -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:08:15.309 17:55:04 -- common/autotest_common.sh@95 -- # : 0 00:08:15.309 17:55:04 -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:08:15.309 17:55:04 -- common/autotest_common.sh@97 -- # : 0 00:08:15.309 17:55:04 -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:08:15.309 17:55:04 -- common/autotest_common.sh@99 -- # : 0 00:08:15.309 17:55:04 -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:08:15.309 17:55:04 -- common/autotest_common.sh@101 -- # : tcp 00:08:15.309 17:55:04 -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:08:15.309 17:55:04 -- common/autotest_common.sh@103 -- # : 0 00:08:15.309 17:55:04 -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:08:15.309 17:55:04 -- common/autotest_common.sh@105 -- # : 0 00:08:15.309 17:55:04 -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:08:15.309 17:55:04 -- common/autotest_common.sh@107 -- # : 0 00:08:15.309 17:55:04 -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:08:15.309 17:55:04 -- common/autotest_common.sh@109 -- # : 0 00:08:15.309 17:55:04 -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:08:15.309 17:55:04 -- common/autotest_common.sh@111 -- # : 0 00:08:15.309 17:55:04 -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:08:15.309 17:55:04 -- common/autotest_common.sh@113 -- # : 0 00:08:15.309 17:55:04 -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:08:15.309 17:55:04 -- common/autotest_common.sh@115 -- # : 0 00:08:15.309 17:55:04 -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:08:15.309 17:55:04 -- common/autotest_common.sh@117 -- # : 0 00:08:15.309 17:55:04 -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:08:15.309 17:55:04 -- common/autotest_common.sh@119 -- # : 0 00:08:15.309 17:55:04 -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:08:15.309 17:55:04 -- common/autotest_common.sh@121 -- # : 1 00:08:15.309 17:55:04 -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:08:15.309 17:55:04 -- common/autotest_common.sh@123 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:08:15.309 17:55:04 -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:08:15.309 17:55:04 -- common/autotest_common.sh@125 -- # : 0 00:08:15.309 17:55:04 -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:08:15.309 17:55:04 -- common/autotest_common.sh@127 -- # : 0 00:08:15.309 17:55:04 -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:08:15.309 17:55:04 -- common/autotest_common.sh@129 -- # : 0 00:08:15.309 17:55:04 -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:08:15.309 17:55:04 -- common/autotest_common.sh@131 -- # : 0 00:08:15.309 17:55:04 -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:08:15.309 17:55:04 -- common/autotest_common.sh@133 -- # : 0 00:08:15.309 17:55:04 -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:08:15.309 17:55:04 
-- common/autotest_common.sh@135 -- # : 0 00:08:15.309 17:55:04 -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:08:15.309 17:55:04 -- common/autotest_common.sh@137 -- # : v22.11.4 00:08:15.309 17:55:04 -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:08:15.309 17:55:04 -- common/autotest_common.sh@139 -- # : true 00:08:15.309 17:55:04 -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:08:15.309 17:55:04 -- common/autotest_common.sh@141 -- # : 0 00:08:15.309 17:55:04 -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:08:15.309 17:55:04 -- common/autotest_common.sh@143 -- # : 0 00:08:15.309 17:55:04 -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:08:15.309 17:55:04 -- common/autotest_common.sh@145 -- # : 0 00:08:15.309 17:55:04 -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:08:15.309 17:55:04 -- common/autotest_common.sh@147 -- # : 0 00:08:15.309 17:55:04 -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:08:15.309 17:55:04 -- common/autotest_common.sh@149 -- # : 0 00:08:15.309 17:55:04 -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:08:15.309 17:55:04 -- common/autotest_common.sh@151 -- # : 0 00:08:15.309 17:55:04 -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:08:15.309 17:55:04 -- common/autotest_common.sh@153 -- # : e810 00:08:15.309 17:55:04 -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:08:15.309 17:55:04 -- common/autotest_common.sh@155 -- # : 0 00:08:15.309 17:55:04 -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:08:15.309 17:55:04 -- common/autotest_common.sh@157 -- # : 0 00:08:15.309 17:55:04 -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:08:15.309 17:55:04 -- common/autotest_common.sh@159 -- # : 0 00:08:15.309 17:55:04 -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:08:15.309 17:55:04 -- common/autotest_common.sh@161 -- # : 0 00:08:15.309 17:55:04 -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:08:15.309 17:55:04 -- common/autotest_common.sh@163 -- # : 0 00:08:15.309 17:55:04 -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:08:15.309 17:55:04 -- common/autotest_common.sh@166 -- # : 00:08:15.309 17:55:04 -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:08:15.309 17:55:04 -- common/autotest_common.sh@168 -- # : 0 00:08:15.309 17:55:04 -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:08:15.309 17:55:04 -- common/autotest_common.sh@170 -- # : 0 00:08:15.309 17:55:04 -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:08:15.309 17:55:04 -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:08:15.309 17:55:04 -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:08:15.309 17:55:04 -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:08:15.309 17:55:04 -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:08:15.309 17:55:04 -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:15.309 17:55:04 -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 
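The long run of ': 0' / 'export SPDK_TEST_*' pairs above is autotest_common.sh defaulting each test flag and then exporting it, so values injected by the job (RUN_NIGHTLY=1, SPDK_TEST_NVMF=1, SPDK_TEST_NVMF_TRANSPORT=tcp, SPDK_TEST_NVMF_NICS=e810) survive while everything else falls back to 0. A minimal sketch of the idiom, with two illustrative flags rather than the real list:

    # ':' is a no-op, but its arguments are still expanded, so the
    # assignment fires only when the variable is unset; under 'set -x'
    # the expansion is exactly what prints as ': 0' or ': 1' in this log.
    : "${SPDK_TEST_NVMF:=0}"
    : "${SPDK_TEST_NVMF_TRANSPORT:=tcp}"
    export SPDK_TEST_NVMF SPDK_TEST_NVMF_TRANSPORT

The SPDK/DPDK/VFIO library directories exported just above then get concatenated into LD_LIBRARY_PATH; this file is sourced once per nested test script in a run, and each pass prepends the same triple again, which is presumably why the path that follows repeats itself.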
00:08:15.309 17:55:04 -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:15.309 17:55:04 -- common/autotest_common.sh@177 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:15.309 17:55:04 -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:08:15.309 17:55:04 -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:08:15.309 17:55:04 -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:15.309 17:55:04 -- common/autotest_common.sh@184 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:15.309 17:55:04 -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:08:15.309 17:55:04 -- common/autotest_common.sh@188 -- # 
PYTHONDONTWRITEBYTECODE=1 00:08:15.309 17:55:04 -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:15.309 17:55:04 -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:15.309 17:55:04 -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:15.309 17:55:04 -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:15.309 17:55:04 -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:08:15.309 17:55:04 -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:08:15.309 17:55:04 -- common/autotest_common.sh@199 -- # cat 00:08:15.309 17:55:04 -- common/autotest_common.sh@225 -- # echo leak:libfuse3.so 00:08:15.309 17:55:04 -- common/autotest_common.sh@227 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:15.309 17:55:04 -- common/autotest_common.sh@227 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:15.309 17:55:04 -- common/autotest_common.sh@229 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:15.309 17:55:04 -- common/autotest_common.sh@229 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:15.309 17:55:04 -- common/autotest_common.sh@231 -- # '[' -z /var/spdk/dependencies ']' 00:08:15.309 17:55:04 -- common/autotest_common.sh@234 -- # export DEPENDENCY_DIR 00:08:15.309 17:55:04 -- common/autotest_common.sh@238 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:15.309 17:55:04 -- common/autotest_common.sh@238 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:15.309 17:55:04 -- common/autotest_common.sh@239 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:15.309 17:55:04 -- common/autotest_common.sh@239 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:15.309 17:55:04 -- common/autotest_common.sh@242 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:15.309 17:55:04 -- common/autotest_common.sh@242 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:15.309 17:55:04 -- common/autotest_common.sh@243 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:15.309 17:55:04 -- common/autotest_common.sh@243 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:15.309 17:55:04 -- common/autotest_common.sh@245 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:15.309 17:55:04 -- common/autotest_common.sh@245 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:15.309 17:55:04 -- common/autotest_common.sh@248 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:15.309 17:55:04 -- common/autotest_common.sh@248 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:15.309 17:55:04 -- common/autotest_common.sh@251 -- # '[' 0 -eq 0 ']' 00:08:15.309 17:55:04 -- common/autotest_common.sh@252 -- # export valgrind= 00:08:15.309 17:55:04 -- common/autotest_common.sh@252 -- # valgrind= 00:08:15.309 17:55:04 -- common/autotest_common.sh@258 -- # uname -s 00:08:15.309 17:55:04 -- common/autotest_common.sh@258 -- # '[' 
Linux = Linux ']' 00:08:15.309 17:55:04 -- common/autotest_common.sh@259 -- # HUGEMEM=4096 00:08:15.309 17:55:04 -- common/autotest_common.sh@260 -- # export CLEAR_HUGE=yes 00:08:15.309 17:55:04 -- common/autotest_common.sh@260 -- # CLEAR_HUGE=yes 00:08:15.309 17:55:04 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:08:15.309 17:55:04 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:08:15.309 17:55:04 -- common/autotest_common.sh@268 -- # MAKE=make 00:08:15.309 17:55:04 -- common/autotest_common.sh@269 -- # MAKEFLAGS=-j48 00:08:15.309 17:55:04 -- common/autotest_common.sh@285 -- # export HUGEMEM=4096 00:08:15.309 17:55:04 -- common/autotest_common.sh@285 -- # HUGEMEM=4096 00:08:15.309 17:55:04 -- common/autotest_common.sh@287 -- # NO_HUGE=() 00:08:15.309 17:55:04 -- common/autotest_common.sh@288 -- # TEST_MODE= 00:08:15.309 17:55:04 -- common/autotest_common.sh@289 -- # for i in "$@" 00:08:15.309 17:55:04 -- common/autotest_common.sh@290 -- # case "$i" in 00:08:15.309 17:55:04 -- common/autotest_common.sh@295 -- # TEST_TRANSPORT=tcp 00:08:15.309 17:55:04 -- common/autotest_common.sh@307 -- # [[ -z 3217638 ]] 00:08:15.309 17:55:04 -- common/autotest_common.sh@307 -- # kill -0 3217638 00:08:15.309 17:55:04 -- common/autotest_common.sh@1666 -- # set_test_storage 2147483648 00:08:15.309 17:55:04 -- common/autotest_common.sh@317 -- # [[ -v testdir ]] 00:08:15.309 17:55:04 -- common/autotest_common.sh@319 -- # local requested_size=2147483648 00:08:15.309 17:55:04 -- common/autotest_common.sh@320 -- # local mount target_dir 00:08:15.309 17:55:04 -- common/autotest_common.sh@322 -- # local -A mounts fss sizes avails uses 00:08:15.309 17:55:04 -- common/autotest_common.sh@323 -- # local source fs size avail mount use 00:08:15.309 17:55:04 -- common/autotest_common.sh@325 -- # local storage_fallback storage_candidates 00:08:15.309 17:55:04 -- common/autotest_common.sh@327 -- # mktemp -udt spdk.XXXXXX 00:08:15.309 17:55:04 -- common/autotest_common.sh@327 -- # storage_fallback=/tmp/spdk.vhjfrg 00:08:15.309 17:55:04 -- common/autotest_common.sh@332 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:08:15.309 17:55:04 -- common/autotest_common.sh@334 -- # [[ -n '' ]] 00:08:15.309 17:55:04 -- common/autotest_common.sh@339 -- # [[ -n '' ]] 00:08:15.309 17:55:04 -- common/autotest_common.sh@344 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.vhjfrg/tests/target /tmp/spdk.vhjfrg 00:08:15.309 17:55:04 -- common/autotest_common.sh@347 -- # requested_size=2214592512 00:08:15.309 17:55:04 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:08:15.309 17:55:04 -- common/autotest_common.sh@316 -- # df -T 00:08:15.309 17:55:04 -- common/autotest_common.sh@316 -- # grep -v Filesystem 00:08:15.309 17:55:04 -- common/autotest_common.sh@350 -- # mounts["$mount"]=spdk_devtmpfs 00:08:15.309 17:55:04 -- common/autotest_common.sh@350 -- # fss["$mount"]=devtmpfs 00:08:15.309 17:55:04 -- common/autotest_common.sh@351 -- # avails["$mount"]=67108864 00:08:15.310 17:55:04 -- common/autotest_common.sh@351 -- # sizes["$mount"]=67108864 00:08:15.310 17:55:04 -- common/autotest_common.sh@352 -- # uses["$mount"]=0 00:08:15.310 17:55:04 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:08:15.310 17:55:04 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/pmem0 00:08:15.310 17:55:04 -- common/autotest_common.sh@350 -- # fss["$mount"]=ext2 00:08:15.310 17:55:04 
-- common/autotest_common.sh@351 -- # avails["$mount"]=996237312 00:08:15.310 17:55:04 -- common/autotest_common.sh@351 -- # sizes["$mount"]=5284429824 00:08:15.310 17:55:04 -- common/autotest_common.sh@352 -- # uses["$mount"]=4288192512 00:08:15.310 17:55:04 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:08:15.310 17:55:04 -- common/autotest_common.sh@350 -- # mounts["$mount"]=spdk_root 00:08:15.310 17:55:04 -- common/autotest_common.sh@350 -- # fss["$mount"]=overlay 00:08:15.310 17:55:04 -- common/autotest_common.sh@351 -- # avails["$mount"]=34140340224 00:08:15.310 17:55:04 -- common/autotest_common.sh@351 -- # sizes["$mount"]=45083308032 00:08:15.310 17:55:04 -- common/autotest_common.sh@352 -- # uses["$mount"]=10942967808 00:08:15.310 17:55:04 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:08:15.310 17:55:04 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:08:15.310 17:55:04 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:08:15.310 17:55:04 -- common/autotest_common.sh@351 -- # avails["$mount"]=22540374016 00:08:15.310 17:55:04 -- common/autotest_common.sh@351 -- # sizes["$mount"]=22541651968 00:08:15.310 17:55:04 -- common/autotest_common.sh@352 -- # uses["$mount"]=1277952 00:08:15.310 17:55:04 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:08:15.310 17:55:04 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:08:15.310 17:55:04 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:08:15.310 17:55:04 -- common/autotest_common.sh@351 -- # avails["$mount"]=9007857664 00:08:15.310 17:55:04 -- common/autotest_common.sh@351 -- # sizes["$mount"]=9016664064 00:08:15.310 17:55:04 -- common/autotest_common.sh@352 -- # uses["$mount"]=8806400 00:08:15.310 17:55:04 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:08:15.310 17:55:04 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:08:15.310 17:55:04 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:08:15.310 17:55:04 -- common/autotest_common.sh@351 -- # avails["$mount"]=22541201408 00:08:15.310 17:55:04 -- common/autotest_common.sh@351 -- # sizes["$mount"]=22541656064 00:08:15.310 17:55:04 -- common/autotest_common.sh@352 -- # uses["$mount"]=454656 00:08:15.310 17:55:04 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:08:15.310 17:55:04 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:08:15.310 17:55:04 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:08:15.310 17:55:04 -- common/autotest_common.sh@351 -- # avails["$mount"]=4508323840 00:08:15.310 17:55:04 -- common/autotest_common.sh@351 -- # sizes["$mount"]=4508327936 00:08:15.310 17:55:04 -- common/autotest_common.sh@352 -- # uses["$mount"]=4096 00:08:15.310 17:55:04 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:08:15.310 17:55:04 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:08:15.310 17:55:04 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:08:15.310 17:55:04 -- common/autotest_common.sh@351 -- # avails["$mount"]=4508323840 00:08:15.310 17:55:04 -- common/autotest_common.sh@351 -- # sizes["$mount"]=4508327936 00:08:15.310 17:55:04 -- common/autotest_common.sh@352 -- # uses["$mount"]=4096 00:08:15.310 17:55:04 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:08:15.310 17:55:04 -- common/autotest_common.sh@355 -- # printf '* 
Looking for test storage...\n' 00:08:15.310 * Looking for test storage... 00:08:15.310 17:55:04 -- common/autotest_common.sh@357 -- # local target_space new_size 00:08:15.310 17:55:04 -- common/autotest_common.sh@358 -- # for target_dir in "${storage_candidates[@]}" 00:08:15.310 17:55:04 -- common/autotest_common.sh@361 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:15.310 17:55:04 -- common/autotest_common.sh@361 -- # awk '$1 !~ /Filesystem/{print $6}' 00:08:15.310 17:55:04 -- common/autotest_common.sh@361 -- # mount=/ 00:08:15.310 17:55:04 -- common/autotest_common.sh@363 -- # target_space=34140340224 00:08:15.310 17:55:04 -- common/autotest_common.sh@364 -- # (( target_space == 0 || target_space < requested_size )) 00:08:15.310 17:55:04 -- common/autotest_common.sh@367 -- # (( target_space >= requested_size )) 00:08:15.310 17:55:04 -- common/autotest_common.sh@369 -- # [[ overlay == tmpfs ]] 00:08:15.310 17:55:04 -- common/autotest_common.sh@369 -- # [[ overlay == ramfs ]] 00:08:15.310 17:55:04 -- common/autotest_common.sh@369 -- # [[ / == / ]] 00:08:15.310 17:55:04 -- common/autotest_common.sh@370 -- # new_size=13157560320 00:08:15.310 17:55:04 -- common/autotest_common.sh@371 -- # (( new_size * 100 / sizes[/] > 95 )) 00:08:15.310 17:55:04 -- common/autotest_common.sh@376 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:15.310 17:55:04 -- common/autotest_common.sh@376 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:15.310 17:55:04 -- common/autotest_common.sh@377 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:15.310 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:15.310 17:55:04 -- common/autotest_common.sh@378 -- # return 0 00:08:15.310 17:55:04 -- common/autotest_common.sh@1668 -- # set -o errtrace 00:08:15.310 17:55:04 -- common/autotest_common.sh@1669 -- # shopt -s extdebug 00:08:15.310 17:55:04 -- common/autotest_common.sh@1670 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:08:15.310 17:55:04 -- common/autotest_common.sh@1672 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:08:15.310 17:55:04 -- common/autotest_common.sh@1673 -- # true 00:08:15.310 17:55:04 -- common/autotest_common.sh@1675 -- # xtrace_fd 00:08:15.310 17:55:04 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:08:15.310 17:55:04 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:08:15.310 17:55:04 -- common/autotest_common.sh@27 -- # exec 00:08:15.310 17:55:04 -- common/autotest_common.sh@29 -- # exec 00:08:15.310 17:55:04 -- common/autotest_common.sh@31 -- # xtrace_restore 00:08:15.310 17:55:04 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:08:15.310 17:55:04 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:08:15.310 17:55:04 -- common/autotest_common.sh@18 -- # set -x 00:08:15.310 17:55:04 -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:15.310 17:55:04 -- nvmf/common.sh@7 -- # uname -s 00:08:15.310 17:55:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:15.310 17:55:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:15.310 17:55:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:15.310 17:55:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:15.310 17:55:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:15.310 17:55:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:15.310 17:55:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:15.310 17:55:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:15.310 17:55:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:15.310 17:55:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:15.310 17:55:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:08:15.310 17:55:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:08:15.310 17:55:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:15.310 17:55:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:15.310 17:55:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:15.310 17:55:04 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:15.310 17:55:04 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:15.310 17:55:04 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:15.310 17:55:04 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:15.310 17:55:04 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:15.310 17:55:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.310 17:55:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.310 17:55:04 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.310 17:55:04 -- paths/export.sh@5 -- # export PATH 00:08:15.310 17:55:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.310 17:55:04 -- nvmf/common.sh@47 -- # : 0 00:08:15.310 17:55:04 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:15.310 17:55:04 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:15.310 17:55:04 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:15.310 17:55:04 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:15.310 17:55:04 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:15.310 17:55:04 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:15.310 17:55:04 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:15.310 17:55:04 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:15.310 17:55:04 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:08:15.310 17:55:04 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:08:15.310 17:55:04 -- target/filesystem.sh@15 -- # nvmftestinit 00:08:15.310 17:55:04 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:08:15.310 17:55:04 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:15.310 17:55:04 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:15.310 17:55:04 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:08:15.310 17:55:04 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:15.310 17:55:04 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:15.310 17:55:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:15.310 17:55:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:15.310 17:55:04 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:08:15.310 17:55:04 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:08:15.310 17:55:04 -- nvmf/common.sh@285 -- # xtrace_disable 00:08:15.310 17:55:04 -- common/autotest_common.sh@10 -- # set +x 00:08:17.838 17:55:06 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:17.838 17:55:06 -- nvmf/common.sh@291 -- # pci_devs=() 00:08:17.838 17:55:06 -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:17.838 17:55:06 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:17.838 17:55:06 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:17.838 17:55:06 -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:17.838 17:55:06 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:17.838 17:55:06 -- 
nvmf/common.sh@295 -- # net_devs=() 00:08:17.838 17:55:06 -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:17.838 17:55:06 -- nvmf/common.sh@296 -- # e810=() 00:08:17.838 17:55:06 -- nvmf/common.sh@296 -- # local -ga e810 00:08:17.838 17:55:06 -- nvmf/common.sh@297 -- # x722=() 00:08:17.838 17:55:06 -- nvmf/common.sh@297 -- # local -ga x722 00:08:17.838 17:55:06 -- nvmf/common.sh@298 -- # mlx=() 00:08:17.838 17:55:06 -- nvmf/common.sh@298 -- # local -ga mlx 00:08:17.838 17:55:06 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:17.838 17:55:06 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:17.838 17:55:06 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:17.838 17:55:06 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:17.838 17:55:06 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:17.838 17:55:06 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:17.838 17:55:06 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:17.838 17:55:06 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:17.838 17:55:06 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:17.838 17:55:06 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:17.838 17:55:06 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:17.838 17:55:06 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:17.838 17:55:06 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:17.838 17:55:06 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:17.838 17:55:06 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:17.838 17:55:06 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:17.838 17:55:06 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:17.838 17:55:06 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:17.838 17:55:06 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:08:17.838 Found 0000:84:00.0 (0x8086 - 0x159b) 00:08:17.838 17:55:06 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:17.838 17:55:06 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:17.838 17:55:06 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:17.838 17:55:06 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:17.838 17:55:06 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:17.838 17:55:06 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:17.838 17:55:06 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:08:17.838 Found 0000:84:00.1 (0x8086 - 0x159b) 00:08:17.838 17:55:06 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:17.838 17:55:06 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:17.838 17:55:06 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:17.838 17:55:06 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:17.838 17:55:06 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:17.838 17:55:06 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:17.838 17:55:06 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:17.838 17:55:06 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:17.838 17:55:06 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:17.838 17:55:06 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:17.838 17:55:06 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:17.838 17:55:06 -- nvmf/common.sh@388 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:17.838 17:55:06 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:08:17.838 Found net devices under 0000:84:00.0: cvl_0_0 00:08:17.838 17:55:06 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:17.838 17:55:06 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:17.838 17:55:06 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:17.838 17:55:06 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:17.838 17:55:06 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:17.838 17:55:06 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:08:17.838 Found net devices under 0000:84:00.1: cvl_0_1 00:08:17.838 17:55:06 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:17.838 17:55:06 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:08:17.838 17:55:06 -- nvmf/common.sh@403 -- # is_hw=yes 00:08:17.838 17:55:06 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:08:17.838 17:55:06 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:08:17.838 17:55:06 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:08:17.838 17:55:06 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:17.838 17:55:06 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:17.838 17:55:06 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:17.838 17:55:06 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:17.838 17:55:06 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:17.838 17:55:06 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:17.838 17:55:06 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:17.838 17:55:06 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:17.838 17:55:06 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:17.838 17:55:06 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:17.838 17:55:06 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:17.838 17:55:06 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:17.838 17:55:06 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:17.838 17:55:06 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:17.838 17:55:06 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:17.838 17:55:06 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:17.838 17:55:06 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:17.838 17:55:06 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:17.838 17:55:06 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:17.838 17:55:06 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:17.838 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:17.838 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.254 ms 00:08:17.838 00:08:17.838 --- 10.0.0.2 ping statistics --- 00:08:17.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:17.838 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:08:17.838 17:55:06 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:17.838 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:17.838 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:08:17.838 00:08:17.838 --- 10.0.0.1 ping statistics --- 00:08:17.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:17.838 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:08:17.838 17:55:06 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:17.838 17:55:06 -- nvmf/common.sh@411 -- # return 0 00:08:17.838 17:55:06 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:17.838 17:55:06 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:17.838 17:55:06 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:08:17.838 17:55:06 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:08:17.838 17:55:06 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:17.838 17:55:06 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:08:17.838 17:55:06 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:08:17.838 17:55:06 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:08:17.838 17:55:06 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:17.838 17:55:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:17.838 17:55:06 -- common/autotest_common.sh@10 -- # set +x 00:08:17.838 ************************************ 00:08:17.838 START TEST nvmf_filesystem_no_in_capsule 00:08:17.838 ************************************ 00:08:17.838 17:55:06 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_part 0 00:08:17.838 17:55:06 -- target/filesystem.sh@47 -- # in_capsule=0 00:08:17.838 17:55:06 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:17.838 17:55:06 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:17.838 17:55:06 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:17.838 17:55:06 -- common/autotest_common.sh@10 -- # set +x 00:08:17.838 17:55:06 -- nvmf/common.sh@470 -- # nvmfpid=3219296 00:08:17.838 17:55:06 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:17.838 17:55:06 -- nvmf/common.sh@471 -- # waitforlisten 3219296 00:08:17.838 17:55:06 -- common/autotest_common.sh@817 -- # '[' -z 3219296 ']' 00:08:17.839 17:55:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:17.839 17:55:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:17.839 17:55:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:17.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:17.839 17:55:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:17.839 17:55:06 -- common/autotest_common.sh@10 -- # set +x 00:08:17.839 [2024-04-15 17:55:06.784193] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:08:17.839 [2024-04-15 17:55:06.784358] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:18.097 EAL: No free 2048 kB hugepages reported on node 1 00:08:18.097 [2024-04-15 17:55:06.882289] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:18.097 [2024-04-15 17:55:06.977427] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
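While the target comes up, it helps to condense the nvmf_tcp_init sequence traced above: the two ice ports found earlier are split so that cvl_0_0 becomes the target side inside a private network namespace while cvl_0_1 stays in the root namespace as the initiator. Stripped of the xtrace decoration (the initial 'ip -4 addr flush' calls are omitted here):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                    # reachability check

Both pings return with 0% packet loss, NVMF_APP is prefixed with 'ip netns exec cvl_0_0_ns_spdk', and nvmfappstart launches nvmf_tgt inside that namespace; the app.c and reactor.c notices around this point are that target instance starting up.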
00:08:18.097 [2024-04-15 17:55:06.977493] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:18.097 [2024-04-15 17:55:06.977510] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:18.097 [2024-04-15 17:55:06.977525] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:18.097 [2024-04-15 17:55:06.977537] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:18.097 [2024-04-15 17:55:06.977616] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:18.097 [2024-04-15 17:55:06.977669] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:18.097 [2024-04-15 17:55:06.977738] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:18.097 [2024-04-15 17:55:06.977741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.051 17:55:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:19.051 17:55:07 -- common/autotest_common.sh@850 -- # return 0 00:08:19.051 17:55:07 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:19.051 17:55:07 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:19.051 17:55:07 -- common/autotest_common.sh@10 -- # set +x 00:08:19.051 17:55:07 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:19.051 17:55:07 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:19.051 17:55:07 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:19.051 17:55:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:19.051 17:55:07 -- common/autotest_common.sh@10 -- # set +x 00:08:19.051 [2024-04-15 17:55:07.911595] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:19.051 17:55:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:19.051 17:55:07 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:19.051 17:55:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:19.051 17:55:07 -- common/autotest_common.sh@10 -- # set +x 00:08:19.328 Malloc1 00:08:19.328 17:55:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:19.328 17:55:08 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:19.328 17:55:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:19.328 17:55:08 -- common/autotest_common.sh@10 -- # set +x 00:08:19.328 17:55:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:19.328 17:55:08 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:19.328 17:55:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:19.328 17:55:08 -- common/autotest_common.sh@10 -- # set +x 00:08:19.328 17:55:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:19.328 17:55:08 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:19.328 17:55:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:19.328 17:55:08 -- common/autotest_common.sh@10 -- # set +x 00:08:19.328 [2024-04-15 17:55:08.094888] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:19.328 17:55:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:19.328 17:55:08 -- target/filesystem.sh@58 -- # get_bdev_size 
Malloc1 00:08:19.328 17:55:08 -- common/autotest_common.sh@1364 -- # local bdev_name=Malloc1 00:08:19.328 17:55:08 -- common/autotest_common.sh@1365 -- # local bdev_info 00:08:19.328 17:55:08 -- common/autotest_common.sh@1366 -- # local bs 00:08:19.328 17:55:08 -- common/autotest_common.sh@1367 -- # local nb 00:08:19.328 17:55:08 -- common/autotest_common.sh@1368 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:19.328 17:55:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:19.328 17:55:08 -- common/autotest_common.sh@10 -- # set +x 00:08:19.328 17:55:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:19.328 17:55:08 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:08:19.328 { 00:08:19.328 "name": "Malloc1", 00:08:19.328 "aliases": [ 00:08:19.328 "a5cfcc25-83d6-4e0d-8075-49394c51c530" 00:08:19.328 ], 00:08:19.328 "product_name": "Malloc disk", 00:08:19.328 "block_size": 512, 00:08:19.328 "num_blocks": 1048576, 00:08:19.328 "uuid": "a5cfcc25-83d6-4e0d-8075-49394c51c530", 00:08:19.328 "assigned_rate_limits": { 00:08:19.328 "rw_ios_per_sec": 0, 00:08:19.328 "rw_mbytes_per_sec": 0, 00:08:19.328 "r_mbytes_per_sec": 0, 00:08:19.328 "w_mbytes_per_sec": 0 00:08:19.328 }, 00:08:19.328 "claimed": true, 00:08:19.328 "claim_type": "exclusive_write", 00:08:19.328 "zoned": false, 00:08:19.328 "supported_io_types": { 00:08:19.328 "read": true, 00:08:19.328 "write": true, 00:08:19.328 "unmap": true, 00:08:19.328 "write_zeroes": true, 00:08:19.328 "flush": true, 00:08:19.328 "reset": true, 00:08:19.328 "compare": false, 00:08:19.328 "compare_and_write": false, 00:08:19.328 "abort": true, 00:08:19.328 "nvme_admin": false, 00:08:19.328 "nvme_io": false 00:08:19.328 }, 00:08:19.328 "memory_domains": [ 00:08:19.328 { 00:08:19.328 "dma_device_id": "system", 00:08:19.328 "dma_device_type": 1 00:08:19.328 }, 00:08:19.328 { 00:08:19.328 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:19.328 "dma_device_type": 2 00:08:19.328 } 00:08:19.328 ], 00:08:19.328 "driver_specific": {} 00:08:19.328 } 00:08:19.328 ]' 00:08:19.328 17:55:08 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:08:19.328 17:55:08 -- common/autotest_common.sh@1369 -- # bs=512 00:08:19.328 17:55:08 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:08:19.328 17:55:08 -- common/autotest_common.sh@1370 -- # nb=1048576 00:08:19.328 17:55:08 -- common/autotest_common.sh@1373 -- # bdev_size=512 00:08:19.328 17:55:08 -- common/autotest_common.sh@1374 -- # echo 512 00:08:19.328 17:55:08 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:19.329 17:55:08 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:19.895 17:55:08 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:19.895 17:55:08 -- common/autotest_common.sh@1184 -- # local i=0 00:08:19.895 17:55:08 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:08:19.895 17:55:08 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:08:19.895 17:55:08 -- common/autotest_common.sh@1191 -- # sleep 2 00:08:22.431 17:55:10 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:08:22.431 17:55:10 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:08:22.431 17:55:10 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:08:22.431 17:55:10 -- common/autotest_common.sh@1193 -- # nvme_devices=1 
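get_bdev_size above derives the namespace capacity from the RPC view of the bdev: block_size times num_blocks, here 512 * 1048576 = 536870912 bytes (512 MiB), which the test then compares against what the host sees after connecting. A condensed sketch of that arithmetic plus the connect-and-wait loop whose last iteration is traced above and below; the rpc.py invocation and the jq '.[0]' addressing are assumptions, the nvme connect flags are as in the trace:

# Sketch: size from the bdev's RPC description, then connect and wait for the namespace.
info=$(./scripts/rpc.py bdev_get_bdevs -b Malloc1)
bs=$(jq '.[0].block_size' <<< "$info")     # 512
nb=$(jq '.[0].num_blocks' <<< "$info")     # 1048576
malloc_size=$((bs * nb))                   # 536870912 bytes
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
    --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
i=0
until [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -ge 1 ]; do
    (( ++i > 15 )) && { echo 'namespace never appeared' >&2; exit 1; }
    sleep 2
done
# /sys/block/<dev>/size counts 512-byte sectors regardless of the LBA format.
nvme_size=$(( $(cat /sys/block/nvme0n1/size) * 512 ))
(( nvme_size == malloc_size ))             # must match before partitioning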
00:08:22.431 17:55:10 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:08:22.431 17:55:10 -- common/autotest_common.sh@1194 -- # return 0 00:08:22.431 17:55:10 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:22.431 17:55:10 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:22.431 17:55:10 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:22.431 17:55:10 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:22.431 17:55:10 -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:22.431 17:55:10 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:22.431 17:55:10 -- setup/common.sh@80 -- # echo 536870912 00:08:22.431 17:55:10 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:22.432 17:55:10 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:22.432 17:55:10 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:22.432 17:55:10 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:22.432 17:55:11 -- target/filesystem.sh@69 -- # partprobe 00:08:22.690 17:55:11 -- target/filesystem.sh@70 -- # sleep 1 00:08:23.625 17:55:12 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:08:23.625 17:55:12 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:23.625 17:55:12 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:23.625 17:55:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:23.625 17:55:12 -- common/autotest_common.sh@10 -- # set +x 00:08:23.625 ************************************ 00:08:23.625 START TEST filesystem_ext4 ************************************ 00:08:23.625 17:55:12 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:23.625 17:55:12 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:23.625 17:55:12 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:23.625 17:55:12 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:23.625 17:55:12 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:08:23.625 17:55:12 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:23.625 17:55:12 -- common/autotest_common.sh@914 -- # local i=0 00:08:23.884 17:55:12 -- common/autotest_common.sh@915 -- # local force 00:08:23.884 17:55:12 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:08:23.884 17:55:12 -- common/autotest_common.sh@918 -- # force=-F 00:08:23.884 17:55:12 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:23.884 mke2fs 1.46.5 (30-Dec-2021) 00:08:23.884 Discarding device blocks: 0/522240 done 00:08:23.884 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:23.884 Filesystem UUID: c184211d-442b-4454-a3ba-244631ae0a14 00:08:23.884 Superblock backups stored on blocks: 00:08:23.884 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:23.884 00:08:23.884 Allocating group tables: 0/64 done 00:08:23.884 Writing inode tables: 0/64 done 00:08:26.673 Creating journal (8192 blocks): done 00:08:27.497 Writing superblocks and filesystem accounting information: 0/64 done 00:08:27.497 00:08:27.497 17:55:16 -- common/autotest_common.sh@931 -- # return 0 00:08:27.497 17:55:16 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:27.756 17:55:16 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:27.756 17:55:16 -- target/filesystem.sh@25 -- # sync 00:08:27.756 17:55:16 -- target/filesystem.sh@26 --
# rm /mnt/device/aaa 00:08:27.756 17:55:16 -- target/filesystem.sh@27 -- # sync 00:08:27.756 17:55:16 -- target/filesystem.sh@29 -- # i=0 00:08:27.756 17:55:16 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:27.756 17:55:16 -- target/filesystem.sh@37 -- # kill -0 3219296 00:08:27.756 17:55:16 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:27.756 17:55:16 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:27.756 17:55:16 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:27.756 17:55:16 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:27.756 00:08:27.756 real 0m3.961s 00:08:27.756 user 0m0.020s 00:08:27.756 sys 0m0.030s 00:08:27.756 17:55:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:27.756 17:55:16 -- common/autotest_common.sh@10 -- # set +x 00:08:27.756 ************************************ 00:08:27.756 END TEST filesystem_ext4 00:08:27.756 ************************************ 00:08:27.756 17:55:16 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:27.756 17:55:16 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:27.756 17:55:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:27.756 17:55:16 -- common/autotest_common.sh@10 -- # set +x 00:08:27.756 ************************************ 00:08:27.756 START TEST filesystem_btrfs 00:08:27.756 ************************************ 00:08:27.756 17:55:16 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:27.756 17:55:16 -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:27.756 17:55:16 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:27.756 17:55:16 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:27.756 17:55:16 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:08:27.756 17:55:16 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:27.756 17:55:16 -- common/autotest_common.sh@914 -- # local i=0 00:08:27.756 17:55:16 -- common/autotest_common.sh@915 -- # local force 00:08:27.756 17:55:16 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:08:27.756 17:55:16 -- common/autotest_common.sh@920 -- # force=-f 00:08:27.756 17:55:16 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:28.014 btrfs-progs v6.6.2 00:08:28.014 See https://btrfs.readthedocs.io for more information. 00:08:28.014 00:08:28.014 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
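Every filesystem pass in this file reduces to the same smoke test once mkfs succeeds, as the ext4 trace above shows: mount the partition, write and delete a file with a sync on either side, unmount, then confirm the target process is still alive and the controller and partition are still enumerated. A sketch of that core, with the target pid parameterized:

# Sketch of the per-filesystem smoke test the traces repeat for ext4, btrfs and xfs.
mount /dev/nvme0n1p1 /mnt/device
touch /mnt/device/aaa
sync
rm /mnt/device/aaa
sync
umount /mnt/device
kill -0 "$nvmfpid"                        # target survived the I/O
lsblk -l -o NAME | grep -q -w nvme0n1     # controller still visible
lsblk -l -o NAME | grep -q -w nvme0n1p1   # partition still visible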
00:08:28.014 NOTE: several default settings have changed in version 5.15, please make sure 00:08:28.014 this does not affect your deployments: 00:08:28.014 - DUP for metadata (-m dup) 00:08:28.014 - enabled no-holes (-O no-holes) 00:08:28.014 - enabled free-space-tree (-R free-space-tree) 00:08:28.014 00:08:28.014 Label: (null) 00:08:28.014 UUID: ff06a577-9a8a-4171-b847-53932e921236 00:08:28.014 Node size: 16384 00:08:28.014 Sector size: 4096 00:08:28.014 Filesystem size: 510.00MiB 00:08:28.015 Block group profiles: 00:08:28.015 Data: single 8.00MiB 00:08:28.015 Metadata: DUP 32.00MiB 00:08:28.015 System: DUP 8.00MiB 00:08:28.015 SSD detected: yes 00:08:28.015 Zoned device: no 00:08:28.015 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:28.015 Runtime features: free-space-tree 00:08:28.015 Checksum: crc32c 00:08:28.015 Number of devices: 1 00:08:28.015 Devices: 00:08:28.015 ID SIZE PATH 00:08:28.015 1 510.00MiB /dev/nvme0n1p1 00:08:28.015 00:08:28.015 17:55:16 -- common/autotest_common.sh@931 -- # return 0 00:08:28.015 17:55:16 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:28.274 17:55:17 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:28.274 17:55:17 -- target/filesystem.sh@25 -- # sync 00:08:28.274 17:55:17 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:28.274 17:55:17 -- target/filesystem.sh@27 -- # sync 00:08:28.274 17:55:17 -- target/filesystem.sh@29 -- # i=0 00:08:28.274 17:55:17 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:28.274 17:55:17 -- target/filesystem.sh@37 -- # kill -0 3219296 00:08:28.274 17:55:17 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:28.274 17:55:17 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:28.274 17:55:17 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:28.274 17:55:17 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:28.274 00:08:28.274 real 0m0.459s 00:08:28.274 user 0m0.014s 00:08:28.274 sys 0m0.040s 00:08:28.274 17:55:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:28.274 17:55:17 -- common/autotest_common.sh@10 -- # set +x 00:08:28.274 ************************************ 00:08:28.274 END TEST filesystem_btrfs 00:08:28.274 ************************************ 00:08:28.274 17:55:17 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:08:28.274 17:55:17 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:28.274 17:55:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:28.274 17:55:17 -- common/autotest_common.sh@10 -- # set +x 00:08:28.533 ************************************ 00:08:28.533 START TEST filesystem_xfs 00:08:28.533 ************************************ 00:08:28.533 17:55:17 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create xfs nvme0n1 00:08:28.533 17:55:17 -- target/filesystem.sh@18 -- # fstype=xfs 00:08:28.533 17:55:17 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:28.533 17:55:17 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:28.533 17:55:17 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:08:28.533 17:55:17 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:28.533 17:55:17 -- common/autotest_common.sh@914 -- # local i=0 00:08:28.533 17:55:17 -- common/autotest_common.sh@915 -- # local force 00:08:28.533 17:55:17 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:08:28.533 17:55:17 -- common/autotest_common.sh@920 -- # force=-f 00:08:28.533 17:55:17 -- 
common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:28.533 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:28.533 = sectsz=512 attr=2, projid32bit=1 00:08:28.533 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:28.533 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:28.533 data = bsize=4096 blocks=130560, imaxpct=25 00:08:28.533 = sunit=0 swidth=0 blks 00:08:28.533 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:28.533 log =internal log bsize=4096 blocks=16384, version=2 00:08:28.533 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:28.533 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:29.472 Discarding blocks...Done. 00:08:29.472 17:55:18 -- common/autotest_common.sh@931 -- # return 0 00:08:29.472 17:55:18 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:32.009 17:55:20 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:32.009 17:55:20 -- target/filesystem.sh@25 -- # sync 00:08:32.009 17:55:20 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:32.009 17:55:20 -- target/filesystem.sh@27 -- # sync 00:08:32.009 17:55:20 -- target/filesystem.sh@29 -- # i=0 00:08:32.009 17:55:20 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:32.009 17:55:20 -- target/filesystem.sh@37 -- # kill -0 3219296 00:08:32.009 17:55:20 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:32.009 17:55:20 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:32.009 17:55:20 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:32.009 17:55:20 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:32.009 00:08:32.009 real 0m3.496s 00:08:32.009 user 0m0.013s 00:08:32.009 sys 0m0.035s 00:08:32.009 17:55:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:32.009 17:55:20 -- common/autotest_common.sh@10 -- # set +x 00:08:32.009 ************************************ 00:08:32.009 END TEST filesystem_xfs 00:08:32.009 ************************************ 00:08:32.009 17:55:20 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:32.268 17:55:21 -- target/filesystem.sh@93 -- # sync 00:08:32.268 17:55:21 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:32.268 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:32.268 17:55:21 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:32.268 17:55:21 -- common/autotest_common.sh@1205 -- # local i=0 00:08:32.268 17:55:21 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:08:32.268 17:55:21 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:32.268 17:55:21 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:08:32.268 17:55:21 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:32.268 17:55:21 -- common/autotest_common.sh@1217 -- # return 0 00:08:32.268 17:55:21 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:32.268 17:55:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:32.268 17:55:21 -- common/autotest_common.sh@10 -- # set +x 00:08:32.268 17:55:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:32.268 17:55:21 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:32.268 17:55:21 -- target/filesystem.sh@101 -- # killprocess 3219296 00:08:32.268 17:55:21 -- common/autotest_common.sh@936 -- # '[' -z 3219296 ']' 00:08:32.268 17:55:21 -- common/autotest_common.sh@940 -- # kill -0 3219296 00:08:32.268 17:55:21 -- 
common/autotest_common.sh@941 -- # uname 00:08:32.268 17:55:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:32.268 17:55:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3219296 00:08:32.268 17:55:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:32.268 17:55:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:32.268 17:55:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3219296' 00:08:32.268 killing process with pid 3219296 00:08:32.268 17:55:21 -- common/autotest_common.sh@955 -- # kill 3219296 00:08:32.268 17:55:21 -- common/autotest_common.sh@960 -- # wait 3219296 00:08:32.836 17:55:21 -- target/filesystem.sh@102 -- # nvmfpid= 00:08:32.836 00:08:32.836 real 0m14.921s 00:08:32.836 user 0m57.664s 00:08:32.836 sys 0m2.193s 00:08:32.836 17:55:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:32.836 17:55:21 -- common/autotest_common.sh@10 -- # set +x 00:08:32.836 ************************************ 00:08:32.836 END TEST nvmf_filesystem_no_in_capsule 00:08:32.836 ************************************ 00:08:32.836 17:55:21 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:08:32.836 17:55:21 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:32.836 17:55:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:32.836 17:55:21 -- common/autotest_common.sh@10 -- # set +x 00:08:32.836 ************************************ 00:08:32.836 START TEST nvmf_filesystem_in_capsule 00:08:32.836 ************************************ 00:08:32.836 17:55:21 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_part 4096 00:08:32.836 17:55:21 -- target/filesystem.sh@47 -- # in_capsule=4096 00:08:32.836 17:55:21 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:32.836 17:55:21 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:32.836 17:55:21 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:32.836 17:55:21 -- common/autotest_common.sh@10 -- # set +x 00:08:33.095 17:55:21 -- nvmf/common.sh@470 -- # nvmfpid=3221310 00:08:33.095 17:55:21 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:33.095 17:55:21 -- nvmf/common.sh@471 -- # waitforlisten 3221310 00:08:33.095 17:55:21 -- common/autotest_common.sh@817 -- # '[' -z 3221310 ']' 00:08:33.095 17:55:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:33.095 17:55:21 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:33.095 17:55:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:33.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:33.095 17:55:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:33.095 17:55:21 -- common/autotest_common.sh@10 -- # set +x 00:08:33.095 [2024-04-15 17:55:21.838139] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
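The test half that starts here differs from nvmf_filesystem_no_in_capsule in exactly one parameter: the TCP transport's in-capsule data size. With -c 0 every write payload has to be solicited by the target before it is sent; with -c 4096 writes up to 4 KiB travel inside the command capsule itself. Sketched as the two RPC calls, with the flags exactly as in the traces and only the rpc.py path assumed:

# nvmf_filesystem_no_in_capsule: transport accepts no in-capsule data.
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
# nvmf_filesystem_in_capsule: up to 4096 bytes ride along with the command.
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096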
00:08:33.095 [2024-04-15 17:55:21.838231] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:33.095 EAL: No free 2048 kB hugepages reported on node 1 00:08:33.095 [2024-04-15 17:55:21.915515] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:33.095 [2024-04-15 17:55:22.016291] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:33.095 [2024-04-15 17:55:22.016367] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:33.095 [2024-04-15 17:55:22.016392] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:33.095 [2024-04-15 17:55:22.016407] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:33.095 [2024-04-15 17:55:22.016421] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:33.095 [2024-04-15 17:55:22.016481] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:33.095 [2024-04-15 17:55:22.016538] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:33.095 [2024-04-15 17:55:22.016595] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.095 [2024-04-15 17:55:22.016592] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:33.354 17:55:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:33.354 17:55:22 -- common/autotest_common.sh@850 -- # return 0 00:08:33.354 17:55:22 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:33.354 17:55:22 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:33.354 17:55:22 -- common/autotest_common.sh@10 -- # set +x 00:08:33.354 17:55:22 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:33.354 17:55:22 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:33.354 17:55:22 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:08:33.354 17:55:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:33.354 17:55:22 -- common/autotest_common.sh@10 -- # set +x 00:08:33.354 [2024-04-15 17:55:22.287464] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:33.354 17:55:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:33.354 17:55:22 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:33.354 17:55:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:33.354 17:55:22 -- common/autotest_common.sh@10 -- # set +x 00:08:33.611 Malloc1 00:08:33.611 17:55:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:33.611 17:55:22 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:33.611 17:55:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:33.611 17:55:22 -- common/autotest_common.sh@10 -- # set +x 00:08:33.611 17:55:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:33.611 17:55:22 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:33.611 17:55:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:33.611 17:55:22 -- common/autotest_common.sh@10 -- # set +x 00:08:33.611 17:55:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:33.611 17:55:22 
-- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:33.611 17:55:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:33.611 17:55:22 -- common/autotest_common.sh@10 -- # set +x 00:08:33.611 [2024-04-15 17:55:22.482166] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:33.611 17:55:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:33.611 17:55:22 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:33.611 17:55:22 -- common/autotest_common.sh@1364 -- # local bdev_name=Malloc1 00:08:33.611 17:55:22 -- common/autotest_common.sh@1365 -- # local bdev_info 00:08:33.611 17:55:22 -- common/autotest_common.sh@1366 -- # local bs 00:08:33.611 17:55:22 -- common/autotest_common.sh@1367 -- # local nb 00:08:33.612 17:55:22 -- common/autotest_common.sh@1368 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:33.612 17:55:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:33.612 17:55:22 -- common/autotest_common.sh@10 -- # set +x 00:08:33.612 17:55:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:33.612 17:55:22 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:08:33.612 { 00:08:33.612 "name": "Malloc1", 00:08:33.612 "aliases": [ 00:08:33.612 "35cf3733-d857-42e7-8dd9-f373926f8d3a" 00:08:33.612 ], 00:08:33.612 "product_name": "Malloc disk", 00:08:33.612 "block_size": 512, 00:08:33.612 "num_blocks": 1048576, 00:08:33.612 "uuid": "35cf3733-d857-42e7-8dd9-f373926f8d3a", 00:08:33.612 "assigned_rate_limits": { 00:08:33.612 "rw_ios_per_sec": 0, 00:08:33.612 "rw_mbytes_per_sec": 0, 00:08:33.612 "r_mbytes_per_sec": 0, 00:08:33.612 "w_mbytes_per_sec": 0 00:08:33.612 }, 00:08:33.612 "claimed": true, 00:08:33.612 "claim_type": "exclusive_write", 00:08:33.612 "zoned": false, 00:08:33.612 "supported_io_types": { 00:08:33.612 "read": true, 00:08:33.612 "write": true, 00:08:33.612 "unmap": true, 00:08:33.612 "write_zeroes": true, 00:08:33.612 "flush": true, 00:08:33.612 "reset": true, 00:08:33.612 "compare": false, 00:08:33.612 "compare_and_write": false, 00:08:33.612 "abort": true, 00:08:33.612 "nvme_admin": false, 00:08:33.612 "nvme_io": false 00:08:33.612 }, 00:08:33.612 "memory_domains": [ 00:08:33.612 { 00:08:33.612 "dma_device_id": "system", 00:08:33.612 "dma_device_type": 1 00:08:33.612 }, 00:08:33.612 { 00:08:33.612 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:33.612 "dma_device_type": 2 00:08:33.612 } 00:08:33.612 ], 00:08:33.612 "driver_specific": {} 00:08:33.612 } 00:08:33.612 ]' 00:08:33.612 17:55:22 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:08:33.612 17:55:22 -- common/autotest_common.sh@1369 -- # bs=512 00:08:33.612 17:55:22 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:08:33.870 17:55:22 -- common/autotest_common.sh@1370 -- # nb=1048576 00:08:33.870 17:55:22 -- common/autotest_common.sh@1373 -- # bdev_size=512 00:08:33.870 17:55:22 -- common/autotest_common.sh@1374 -- # echo 512 00:08:33.870 17:55:22 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:33.870 17:55:22 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:34.439 17:55:23 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:34.439 17:55:23 -- common/autotest_common.sh@1184 -- # local i=0 00:08:34.439 17:55:23 -- 
common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:08:34.439 17:55:23 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:08:34.439 17:55:23 -- common/autotest_common.sh@1191 -- # sleep 2 00:08:36.343 17:55:25 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:08:36.343 17:55:25 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:08:36.343 17:55:25 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:08:36.343 17:55:25 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:08:36.343 17:55:25 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:08:36.343 17:55:25 -- common/autotest_common.sh@1194 -- # return 0 00:08:36.343 17:55:25 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:36.343 17:55:25 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:36.343 17:55:25 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:36.343 17:55:25 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:36.343 17:55:25 -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:36.343 17:55:25 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:36.343 17:55:25 -- setup/common.sh@80 -- # echo 536870912 00:08:36.343 17:55:25 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:36.343 17:55:25 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:36.343 17:55:25 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:36.343 17:55:25 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:36.601 17:55:25 -- target/filesystem.sh@69 -- # partprobe 00:08:36.890 17:55:25 -- target/filesystem.sh@70 -- # sleep 1 00:08:38.269 17:55:26 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:08:38.269 17:55:26 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:38.269 17:55:26 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:38.269 17:55:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:38.269 17:55:26 -- common/autotest_common.sh@10 -- # set +x 00:08:38.269 ************************************ 00:08:38.269 START TEST filesystem_in_capsule_ext4 00:08:38.269 ************************************ 00:08:38.269 17:55:26 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:38.269 17:55:26 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:38.269 17:55:26 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:38.269 17:55:26 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:38.269 17:55:26 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:08:38.269 17:55:26 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:38.269 17:55:26 -- common/autotest_common.sh@914 -- # local i=0 00:08:38.269 17:55:26 -- common/autotest_common.sh@915 -- # local force 00:08:38.269 17:55:26 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:08:38.269 17:55:26 -- common/autotest_common.sh@918 -- # force=-F 00:08:38.269 17:55:26 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:38.269 mke2fs 1.46.5 (30-Dec-2021) 00:08:38.269 Discarding device blocks: 0/522240 done 00:08:38.269 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:38.270 Filesystem UUID: d06c38d0-3b88-4870-9d0c-e2f4102245dd 00:08:38.270 Superblock backups stored on blocks: 00:08:38.270 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:38.270 00:08:38.270 
Allocating group tables: 0/64 done 00:08:38.270 Writing inode tables: 0/64 done 00:08:38.836 Creating journal (8192 blocks): done 00:08:38.836 Writing superblocks and filesystem accounting information: 0/64 done 00:08:38.836 00:08:38.836 17:55:27 -- common/autotest_common.sh@931 -- # return 0 00:08:38.836 17:55:27 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:39.771 17:55:28 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:39.771 17:55:28 -- target/filesystem.sh@25 -- # sync 00:08:39.771 17:55:28 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:39.771 17:55:28 -- target/filesystem.sh@27 -- # sync 00:08:39.771 17:55:28 -- target/filesystem.sh@29 -- # i=0 00:08:39.771 17:55:28 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:39.771 17:55:28 -- target/filesystem.sh@37 -- # kill -0 3221310 00:08:39.771 17:55:28 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:39.771 17:55:28 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:39.771 17:55:28 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:39.771 17:55:28 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:39.771 00:08:39.771 real 0m1.567s 00:08:39.771 user 0m0.019s 00:08:39.771 sys 0m0.041s 00:08:39.771 17:55:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:39.771 17:55:28 -- common/autotest_common.sh@10 -- # set +x 00:08:39.771 ************************************ 00:08:39.771 END TEST filesystem_in_capsule_ext4 00:08:39.771 ************************************ 00:08:39.771 17:55:28 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:39.771 17:55:28 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:39.771 17:55:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:39.771 17:55:28 -- common/autotest_common.sh@10 -- # set +x 00:08:39.771 ************************************ 00:08:39.771 START TEST filesystem_in_capsule_btrfs 00:08:39.771 ************************************ 00:08:39.771 17:55:28 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:39.771 17:55:28 -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:39.771 17:55:28 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:39.771 17:55:28 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:39.771 17:55:28 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:08:39.771 17:55:28 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:39.771 17:55:28 -- common/autotest_common.sh@914 -- # local i=0 00:08:39.771 17:55:28 -- common/autotest_common.sh@915 -- # local force 00:08:39.771 17:55:28 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:08:39.771 17:55:28 -- common/autotest_common.sh@920 -- # force=-f 00:08:39.771 17:55:28 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:40.339 btrfs-progs v6.6.2 00:08:40.339 See https://btrfs.readthedocs.io for more information. 00:08:40.339 00:08:40.339 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:40.339 NOTE: several default settings have changed in version 5.15, please make sure 00:08:40.339 this does not affect your deployments: 00:08:40.339 - DUP for metadata (-m dup) 00:08:40.339 - enabled no-holes (-O no-holes) 00:08:40.339 - enabled free-space-tree (-R free-space-tree) 00:08:40.339 00:08:40.339 Label: (null) 00:08:40.339 UUID: 919935ef-1c66-450c-bacc-ea447d093f75 00:08:40.339 Node size: 16384 00:08:40.339 Sector size: 4096 00:08:40.339 Filesystem size: 510.00MiB 00:08:40.339 Block group profiles: 00:08:40.339 Data: single 8.00MiB 00:08:40.339 Metadata: DUP 32.00MiB 00:08:40.339 System: DUP 8.00MiB 00:08:40.339 SSD detected: yes 00:08:40.339 Zoned device: no 00:08:40.339 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:40.339 Runtime features: free-space-tree 00:08:40.339 Checksum: crc32c 00:08:40.339 Number of devices: 1 00:08:40.339 Devices: 00:08:40.339 ID SIZE PATH 00:08:40.339 1 510.00MiB /dev/nvme0n1p1 00:08:40.340 00:08:40.340 17:55:29 -- common/autotest_common.sh@931 -- # return 0 00:08:40.340 17:55:29 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:40.340 17:55:29 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:40.340 17:55:29 -- target/filesystem.sh@25 -- # sync 00:08:40.340 17:55:29 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:40.340 17:55:29 -- target/filesystem.sh@27 -- # sync 00:08:40.340 17:55:29 -- target/filesystem.sh@29 -- # i=0 00:08:40.340 17:55:29 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:40.340 17:55:29 -- target/filesystem.sh@37 -- # kill -0 3221310 00:08:40.340 17:55:29 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:40.340 17:55:29 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:40.598 17:55:29 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:40.598 17:55:29 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:40.598 00:08:40.598 real 0m0.613s 00:08:40.598 user 0m0.014s 00:08:40.598 sys 0m0.047s 00:08:40.598 17:55:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:40.598 17:55:29 -- common/autotest_common.sh@10 -- # set +x 00:08:40.598 ************************************ 00:08:40.598 END TEST filesystem_in_capsule_btrfs 00:08:40.598 ************************************ 00:08:40.598 17:55:29 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:08:40.598 17:55:29 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:40.598 17:55:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:40.598 17:55:29 -- common/autotest_common.sh@10 -- # set +x 00:08:40.598 ************************************ 00:08:40.598 START TEST filesystem_in_capsule_xfs 00:08:40.598 ************************************ 00:08:40.598 17:55:29 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create xfs nvme0n1 00:08:40.598 17:55:29 -- target/filesystem.sh@18 -- # fstype=xfs 00:08:40.598 17:55:29 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:40.598 17:55:29 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:40.598 17:55:29 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:08:40.598 17:55:29 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:40.598 17:55:29 -- common/autotest_common.sh@914 -- # local i=0 00:08:40.598 17:55:29 -- common/autotest_common.sh@915 -- # local force 00:08:40.598 17:55:29 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:08:40.598 17:55:29 -- common/autotest_common.sh@920 -- # force=-f 
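The make_filesystem traces, here and in the earlier passes, show a small force-flag dispatch: ext4's mkfs wants -F to overwrite an existing signature, while btrfs and xfs take -f. A sketch of the helper as the xtrace implies it; the retry behavior suggested by the i local is elided as an assumption:

# Sketch reconstructed from the xtrace; retries on transient mkfs failure omitted.
make_filesystem() {
    local fstype=$1 dev_name=$2 force
    if [ "$fstype" = ext4 ]; then
        force=-F
    else
        force=-f
    fi
    mkfs."$fstype" "$force" "$dev_name"
}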
00:08:40.598 17:55:29 -- common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:40.857 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:40.858 = sectsz=512 attr=2, projid32bit=1 00:08:40.858 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:40.858 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:40.858 data = bsize=4096 blocks=130560, imaxpct=25 00:08:40.858 = sunit=0 swidth=0 blks 00:08:40.858 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:40.858 log =internal log bsize=4096 blocks=16384, version=2 00:08:40.858 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:40.858 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:41.795 Discarding blocks...Done. 00:08:41.795 17:55:30 -- common/autotest_common.sh@931 -- # return 0 00:08:41.795 17:55:30 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:44.327 17:55:32 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:44.327 17:55:32 -- target/filesystem.sh@25 -- # sync 00:08:44.327 17:55:33 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:44.327 17:55:33 -- target/filesystem.sh@27 -- # sync 00:08:44.327 17:55:33 -- target/filesystem.sh@29 -- # i=0 00:08:44.327 17:55:33 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:44.327 17:55:33 -- target/filesystem.sh@37 -- # kill -0 3221310 00:08:44.327 17:55:33 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:44.327 17:55:33 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:44.327 17:55:33 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:44.327 17:55:33 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:44.327 00:08:44.327 real 0m3.588s 00:08:44.327 user 0m0.018s 00:08:44.327 sys 0m0.046s 00:08:44.327 17:55:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:44.327 17:55:33 -- common/autotest_common.sh@10 -- # set +x 00:08:44.327 ************************************ 00:08:44.327 END TEST filesystem_in_capsule_xfs 00:08:44.327 ************************************ 00:08:44.327 17:55:33 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:44.327 17:55:33 -- target/filesystem.sh@93 -- # sync 00:08:44.327 17:55:33 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:44.327 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:44.327 17:55:33 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:44.327 17:55:33 -- common/autotest_common.sh@1205 -- # local i=0 00:08:44.327 17:55:33 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:08:44.327 17:55:33 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:44.327 17:55:33 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:08:44.327 17:55:33 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:44.327 17:55:33 -- common/autotest_common.sh@1217 -- # return 0 00:08:44.327 17:55:33 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:44.327 17:55:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:44.327 17:55:33 -- common/autotest_common.sh@10 -- # set +x 00:08:44.327 17:55:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:44.327 17:55:33 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:44.327 17:55:33 -- target/filesystem.sh@101 -- # killprocess 3221310 00:08:44.327 17:55:33 -- common/autotest_common.sh@936 -- # '[' -z 3221310 ']' 00:08:44.327 17:55:33 -- common/autotest_common.sh@940 -- # kill -0 3221310 
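Teardown runs the setup in reverse, as the records around here show: delete the test partition under flock so parted and concurrent device users don't race, sync, disconnect the host, wait for the serial to drop out of lsblk, delete the subsystem over RPC, and only then kill the target. Condensed, with the rpc.py path assumed and the rest mirroring the trace:

# Sketch of the teardown sequence seen in the traces.
flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
sync
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
while lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do
    sleep 1                               # wait for the namespace to disappear
done
./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
kill "$nvmfpid" && wait "$nvmfpid"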
00:08:44.327 17:55:33 -- common/autotest_common.sh@941 -- # uname 00:08:44.327 17:55:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:44.327 17:55:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3221310 00:08:44.586 17:55:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:44.586 17:55:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:44.586 17:55:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3221310' 00:08:44.586 killing process with pid 3221310 00:08:44.586 17:55:33 -- common/autotest_common.sh@955 -- # kill 3221310 00:08:44.586 17:55:33 -- common/autotest_common.sh@960 -- # wait 3221310 00:08:44.844 17:55:33 -- target/filesystem.sh@102 -- # nvmfpid= 00:08:44.844 00:08:44.844 real 0m11.960s 00:08:44.844 user 0m46.105s 00:08:44.844 sys 0m1.965s 00:08:44.844 17:55:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:44.844 17:55:33 -- common/autotest_common.sh@10 -- # set +x 00:08:44.844 ************************************ 00:08:44.844 END TEST nvmf_filesystem_in_capsule 00:08:44.844 ************************************ 00:08:44.844 17:55:33 -- target/filesystem.sh@108 -- # nvmftestfini 00:08:44.844 17:55:33 -- nvmf/common.sh@477 -- # nvmfcleanup 00:08:44.844 17:55:33 -- nvmf/common.sh@117 -- # sync 00:08:44.844 17:55:33 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:44.844 17:55:33 -- nvmf/common.sh@120 -- # set +e 00:08:44.844 17:55:33 -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:44.844 17:55:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:44.844 rmmod nvme_tcp 00:08:44.844 rmmod nvme_fabrics 00:08:45.104 rmmod nvme_keyring 00:08:45.104 17:55:33 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:45.104 17:55:33 -- nvmf/common.sh@124 -- # set -e 00:08:45.104 17:55:33 -- nvmf/common.sh@125 -- # return 0 00:08:45.104 17:55:33 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:08:45.104 17:55:33 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:08:45.104 17:55:33 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:08:45.104 17:55:33 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:08:45.104 17:55:33 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:45.104 17:55:33 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:45.104 17:55:33 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:45.104 17:55:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:45.104 17:55:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:47.006 17:55:35 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:47.006 00:08:47.006 real 0m31.821s 00:08:47.006 user 1m44.764s 00:08:47.006 sys 0m6.080s 00:08:47.006 17:55:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:47.006 17:55:35 -- common/autotest_common.sh@10 -- # set +x 00:08:47.006 ************************************ 00:08:47.006 END TEST nvmf_filesystem 00:08:47.006 ************************************ 00:08:47.006 17:55:35 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:47.006 17:55:35 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:47.006 17:55:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:47.006 17:55:35 -- common/autotest_common.sh@10 -- # set +x 00:08:47.264 ************************************ 00:08:47.264 START TEST nvmf_discovery 00:08:47.264 ************************************ 00:08:47.264 
17:55:36 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:47.264 * Looking for test storage... 00:08:47.264 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:47.264 17:55:36 -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:47.264 17:55:36 -- nvmf/common.sh@7 -- # uname -s 00:08:47.264 17:55:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:47.264 17:55:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:47.264 17:55:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:47.264 17:55:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:47.264 17:55:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:47.264 17:55:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:47.264 17:55:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:47.264 17:55:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:47.264 17:55:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:47.264 17:55:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:47.264 17:55:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:08:47.264 17:55:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:08:47.264 17:55:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:47.264 17:55:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:47.264 17:55:36 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:47.264 17:55:36 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:47.264 17:55:36 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:47.264 17:55:36 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:47.264 17:55:36 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:47.264 17:55:36 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:47.264 17:55:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.264 17:55:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.265 17:55:36 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.265 17:55:36 -- paths/export.sh@5 -- # export PATH 00:08:47.265 17:55:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.265 17:55:36 -- nvmf/common.sh@47 -- # : 0 00:08:47.265 17:55:36 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:47.265 17:55:36 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:47.265 17:55:36 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:47.265 17:55:36 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:47.265 17:55:36 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:47.265 17:55:36 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:47.265 17:55:36 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:47.265 17:55:36 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:47.265 17:55:36 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:47.265 17:55:36 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:47.265 17:55:36 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:47.265 17:55:36 -- target/discovery.sh@15 -- # hash nvme 00:08:47.265 17:55:36 -- target/discovery.sh@20 -- # nvmftestinit 00:08:47.265 17:55:36 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:08:47.265 17:55:36 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:47.265 17:55:36 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:47.265 17:55:36 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:08:47.265 17:55:36 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:47.265 17:55:36 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:47.265 17:55:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:47.265 17:55:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:47.265 17:55:36 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:08:47.265 17:55:36 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:08:47.265 17:55:36 -- nvmf/common.sh@285 -- # xtrace_disable 00:08:47.265 17:55:36 -- common/autotest_common.sh@10 -- # set +x 00:08:49.801 17:55:38 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:49.801 17:55:38 -- nvmf/common.sh@291 -- # pci_devs=() 00:08:49.801 17:55:38 -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:49.801 17:55:38 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:49.801 17:55:38 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:49.801 17:55:38 -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:49.801 17:55:38 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:49.801 17:55:38 -- 
nvmf/common.sh@295 -- # net_devs=() 00:08:49.801 17:55:38 -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:49.801 17:55:38 -- nvmf/common.sh@296 -- # e810=() 00:08:49.801 17:55:38 -- nvmf/common.sh@296 -- # local -ga e810 00:08:49.801 17:55:38 -- nvmf/common.sh@297 -- # x722=() 00:08:49.801 17:55:38 -- nvmf/common.sh@297 -- # local -ga x722 00:08:49.801 17:55:38 -- nvmf/common.sh@298 -- # mlx=() 00:08:49.801 17:55:38 -- nvmf/common.sh@298 -- # local -ga mlx 00:08:49.801 17:55:38 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:49.801 17:55:38 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:49.801 17:55:38 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:49.801 17:55:38 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:49.801 17:55:38 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:49.801 17:55:38 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:49.801 17:55:38 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:49.801 17:55:38 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:49.801 17:55:38 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:49.801 17:55:38 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:49.801 17:55:38 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:49.801 17:55:38 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:49.801 17:55:38 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:49.801 17:55:38 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:49.801 17:55:38 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:49.801 17:55:38 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:49.801 17:55:38 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:49.801 17:55:38 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:49.801 17:55:38 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:08:49.801 Found 0000:84:00.0 (0x8086 - 0x159b) 00:08:49.801 17:55:38 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:49.801 17:55:38 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:49.801 17:55:38 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:49.801 17:55:38 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:49.801 17:55:38 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:49.801 17:55:38 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:49.801 17:55:38 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:08:49.801 Found 0000:84:00.1 (0x8086 - 0x159b) 00:08:49.801 17:55:38 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:49.801 17:55:38 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:49.801 17:55:38 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:49.801 17:55:38 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:49.801 17:55:38 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:49.801 17:55:38 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:49.801 17:55:38 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:49.801 17:55:38 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:49.801 17:55:38 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:49.802 17:55:38 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:49.802 17:55:38 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:49.802 17:55:38 -- nvmf/common.sh@388 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:49.802 17:55:38 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:08:49.802 Found net devices under 0000:84:00.0: cvl_0_0 00:08:49.802 17:55:38 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:49.802 17:55:38 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:49.802 17:55:38 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:49.802 17:55:38 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:49.802 17:55:38 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:49.802 17:55:38 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:08:49.802 Found net devices under 0000:84:00.1: cvl_0_1 00:08:49.802 17:55:38 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:49.802 17:55:38 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:08:49.802 17:55:38 -- nvmf/common.sh@403 -- # is_hw=yes 00:08:49.802 17:55:38 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:08:49.802 17:55:38 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:08:49.802 17:55:38 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:08:49.802 17:55:38 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:49.802 17:55:38 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:49.802 17:55:38 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:49.802 17:55:38 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:49.802 17:55:38 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:49.802 17:55:38 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:49.802 17:55:38 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:49.802 17:55:38 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:49.802 17:55:38 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:49.802 17:55:38 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:49.802 17:55:38 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:49.802 17:55:38 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:49.802 17:55:38 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:49.802 17:55:38 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:49.802 17:55:38 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:49.802 17:55:38 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:49.802 17:55:38 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:49.802 17:55:38 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:49.802 17:55:38 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:49.802 17:55:38 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:49.802 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:49.802 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.311 ms 00:08:49.802 00:08:49.802 --- 10.0.0.2 ping statistics --- 00:08:49.802 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:49.802 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:08:49.802 17:55:38 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:49.802 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:49.802 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:08:49.802 00:08:49.802 --- 10.0.0.1 ping statistics --- 00:08:49.802 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:49.802 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:08:49.802 17:55:38 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:49.802 17:55:38 -- nvmf/common.sh@411 -- # return 0 00:08:49.802 17:55:38 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:49.802 17:55:38 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:49.802 17:55:38 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:08:49.802 17:55:38 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:08:49.802 17:55:38 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:49.802 17:55:38 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:08:49.802 17:55:38 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:08:49.802 17:55:38 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:49.802 17:55:38 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:49.802 17:55:38 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:49.802 17:55:38 -- common/autotest_common.sh@10 -- # set +x 00:08:49.802 17:55:38 -- nvmf/common.sh@470 -- # nvmfpid=3224924 00:08:49.802 17:55:38 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:49.802 17:55:38 -- nvmf/common.sh@471 -- # waitforlisten 3224924 00:08:49.802 17:55:38 -- common/autotest_common.sh@817 -- # '[' -z 3224924 ']' 00:08:49.802 17:55:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:49.802 17:55:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:49.802 17:55:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:49.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:49.802 17:55:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:49.802 17:55:38 -- common/autotest_common.sh@10 -- # set +x 00:08:49.802 [2024-04-15 17:55:38.637046] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:08:49.802 [2024-04-15 17:55:38.637160] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:49.802 EAL: No free 2048 kB hugepages reported on node 1 00:08:49.802 [2024-04-15 17:55:38.722734] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:50.063 [2024-04-15 17:55:38.819346] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:50.063 [2024-04-15 17:55:38.819412] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:50.063 [2024-04-15 17:55:38.819429] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:50.063 [2024-04-15 17:55:38.819444] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:50.063 [2024-04-15 17:55:38.819456] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
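The ip/iptables sequence traced above is the heart of nvmf/common.sh's TCP setup: one port of the detected E810 pair is moved into a private network namespace to act as the target, while its sibling stays in the root namespace as the initiator, so a single host can exercise a real NIC-to-NIC NVMe/TCP path. A minimal standalone sketch of the same plumbing, reusing the interface names and addresses from this run (this is an illustration, not the harness itself):

ip -4 addr flush cvl_0_0                           # start from clean addressing
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk                       # private namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move one physical port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator port stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
ping -c 1 10.0.0.2                                 # root ns -> target ns sanity check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # and back

The two pings are the same reachability checks whose output appears above; only once both succeed does the harness go on to start the target.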
00:08:50.063 [2024-04-15 17:55:38.819552] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:50.063 [2024-04-15 17:55:38.819610] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:50.063 [2024-04-15 17:55:38.819662] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:50.063 [2024-04-15 17:55:38.819665] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.323 17:55:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:50.323 17:55:39 -- common/autotest_common.sh@850 -- # return 0 00:08:50.323 17:55:39 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:50.323 17:55:39 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:50.323 17:55:39 -- common/autotest_common.sh@10 -- # set +x 00:08:50.323 17:55:39 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:50.323 17:55:39 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:50.323 17:55:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:50.323 17:55:39 -- common/autotest_common.sh@10 -- # set +x 00:08:50.323 [2024-04-15 17:55:39.117477] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:50.323 17:55:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:50.323 17:55:39 -- target/discovery.sh@26 -- # seq 1 4 00:08:50.323 17:55:39 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:50.323 17:55:39 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:08:50.323 17:55:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:50.323 17:55:39 -- common/autotest_common.sh@10 -- # set +x 00:08:50.323 Null1 00:08:50.323 17:55:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:50.323 17:55:39 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:50.323 17:55:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:50.323 17:55:39 -- common/autotest_common.sh@10 -- # set +x 00:08:50.323 17:55:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:50.323 17:55:39 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:50.323 17:55:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:50.323 17:55:39 -- common/autotest_common.sh@10 -- # set +x 00:08:50.323 17:55:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:50.323 17:55:39 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:50.323 17:55:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:50.323 17:55:39 -- common/autotest_common.sh@10 -- # set +x 00:08:50.323 [2024-04-15 17:55:39.157770] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:50.323 17:55:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:50.323 17:55:39 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:50.323 17:55:39 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:50.323 17:55:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:50.323 17:55:39 -- common/autotest_common.sh@10 -- # set +x 00:08:50.323 Null2 00:08:50.323 17:55:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:50.323 17:55:39 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:50.323 17:55:39 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:08:50.323 17:55:39 -- common/autotest_common.sh@10 -- # set +x 00:08:50.323 17:55:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:50.323 17:55:39 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:50.323 17:55:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:50.324 17:55:39 -- common/autotest_common.sh@10 -- # set +x 00:08:50.324 17:55:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:50.324 17:55:39 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:50.324 17:55:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:50.324 17:55:39 -- common/autotest_common.sh@10 -- # set +x 00:08:50.324 17:55:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:50.324 17:55:39 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:50.324 17:55:39 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:50.324 17:55:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:50.324 17:55:39 -- common/autotest_common.sh@10 -- # set +x 00:08:50.324 Null3 00:08:50.324 17:55:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:50.324 17:55:39 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:50.324 17:55:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:50.324 17:55:39 -- common/autotest_common.sh@10 -- # set +x 00:08:50.324 17:55:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:50.324 17:55:39 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:50.324 17:55:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:50.324 17:55:39 -- common/autotest_common.sh@10 -- # set +x 00:08:50.324 17:55:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:50.324 17:55:39 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:08:50.324 17:55:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:50.324 17:55:39 -- common/autotest_common.sh@10 -- # set +x 00:08:50.324 17:55:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:50.324 17:55:39 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:50.324 17:55:39 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:50.324 17:55:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:50.324 17:55:39 -- common/autotest_common.sh@10 -- # set +x 00:08:50.324 Null4 00:08:50.324 17:55:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:50.324 17:55:39 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:50.324 17:55:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:50.324 17:55:39 -- common/autotest_common.sh@10 -- # set +x 00:08:50.324 17:55:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:50.324 17:55:39 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:50.324 17:55:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:50.324 17:55:39 -- common/autotest_common.sh@10 -- # set +x 00:08:50.324 17:55:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:50.324 17:55:39 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:08:50.324 
17:55:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:50.324 17:55:39 -- common/autotest_common.sh@10 -- # set +x 00:08:50.324 17:55:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:50.324 17:55:39 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:50.324 17:55:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:50.324 17:55:39 -- common/autotest_common.sh@10 -- # set +x 00:08:50.324 17:55:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:50.324 17:55:39 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:08:50.324 17:55:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:50.324 17:55:39 -- common/autotest_common.sh@10 -- # set +x 00:08:50.324 17:55:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:50.324 17:55:39 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 4420 00:08:50.583 00:08:50.583 Discovery Log Number of Records 6, Generation counter 6 00:08:50.583 =====Discovery Log Entry 0====== 00:08:50.583 trtype: tcp 00:08:50.583 adrfam: ipv4 00:08:50.583 subtype: current discovery subsystem 00:08:50.583 treq: not required 00:08:50.583 portid: 0 00:08:50.583 trsvcid: 4420 00:08:50.583 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:50.583 traddr: 10.0.0.2 00:08:50.583 eflags: explicit discovery connections, duplicate discovery information 00:08:50.583 sectype: none 00:08:50.583 =====Discovery Log Entry 1====== 00:08:50.583 trtype: tcp 00:08:50.583 adrfam: ipv4 00:08:50.583 subtype: nvme subsystem 00:08:50.583 treq: not required 00:08:50.583 portid: 0 00:08:50.583 trsvcid: 4420 00:08:50.583 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:50.583 traddr: 10.0.0.2 00:08:50.583 eflags: none 00:08:50.583 sectype: none 00:08:50.583 =====Discovery Log Entry 2====== 00:08:50.583 trtype: tcp 00:08:50.583 adrfam: ipv4 00:08:50.583 subtype: nvme subsystem 00:08:50.583 treq: not required 00:08:50.583 portid: 0 00:08:50.583 trsvcid: 4420 00:08:50.583 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:50.583 traddr: 10.0.0.2 00:08:50.583 eflags: none 00:08:50.583 sectype: none 00:08:50.583 =====Discovery Log Entry 3====== 00:08:50.583 trtype: tcp 00:08:50.583 adrfam: ipv4 00:08:50.583 subtype: nvme subsystem 00:08:50.583 treq: not required 00:08:50.583 portid: 0 00:08:50.583 trsvcid: 4420 00:08:50.583 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:50.583 traddr: 10.0.0.2 00:08:50.583 eflags: none 00:08:50.583 sectype: none 00:08:50.583 =====Discovery Log Entry 4====== 00:08:50.583 trtype: tcp 00:08:50.583 adrfam: ipv4 00:08:50.583 subtype: nvme subsystem 00:08:50.583 treq: not required 00:08:50.583 portid: 0 00:08:50.583 trsvcid: 4420 00:08:50.583 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:50.583 traddr: 10.0.0.2 00:08:50.583 eflags: none 00:08:50.583 sectype: none 00:08:50.583 =====Discovery Log Entry 5====== 00:08:50.583 trtype: tcp 00:08:50.583 adrfam: ipv4 00:08:50.583 subtype: discovery subsystem referral 00:08:50.583 treq: not required 00:08:50.583 portid: 0 00:08:50.583 trsvcid: 4430 00:08:50.583 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:50.583 traddr: 10.0.0.2 00:08:50.583 eflags: none 00:08:50.583 sectype: none 00:08:50.583 17:55:39 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:50.583 Perform nvmf subsystem discovery via RPC 00:08:50.583 17:55:39 -- 
target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:50.583 17:55:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:50.583 17:55:39 -- common/autotest_common.sh@10 -- # set +x 00:08:50.583 [2024-04-15 17:55:39.418390] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:08:50.583 [ 00:08:50.583 { 00:08:50.583 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:50.583 "subtype": "Discovery", 00:08:50.583 "listen_addresses": [ 00:08:50.583 { 00:08:50.583 "transport": "TCP", 00:08:50.583 "trtype": "TCP", 00:08:50.583 "adrfam": "IPv4", 00:08:50.583 "traddr": "10.0.0.2", 00:08:50.583 "trsvcid": "4420" 00:08:50.583 } 00:08:50.583 ], 00:08:50.583 "allow_any_host": true, 00:08:50.583 "hosts": [] 00:08:50.583 }, 00:08:50.583 { 00:08:50.583 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:50.583 "subtype": "NVMe", 00:08:50.583 "listen_addresses": [ 00:08:50.583 { 00:08:50.583 "transport": "TCP", 00:08:50.583 "trtype": "TCP", 00:08:50.583 "adrfam": "IPv4", 00:08:50.583 "traddr": "10.0.0.2", 00:08:50.583 "trsvcid": "4420" 00:08:50.583 } 00:08:50.583 ], 00:08:50.583 "allow_any_host": true, 00:08:50.583 "hosts": [], 00:08:50.583 "serial_number": "SPDK00000000000001", 00:08:50.583 "model_number": "SPDK bdev Controller", 00:08:50.583 "max_namespaces": 32, 00:08:50.583 "min_cntlid": 1, 00:08:50.583 "max_cntlid": 65519, 00:08:50.583 "namespaces": [ 00:08:50.583 { 00:08:50.583 "nsid": 1, 00:08:50.583 "bdev_name": "Null1", 00:08:50.583 "name": "Null1", 00:08:50.583 "nguid": "56194DC02A104D1DAADDD0A5503B6E61", 00:08:50.583 "uuid": "56194dc0-2a10-4d1d-aadd-d0a5503b6e61" 00:08:50.583 } 00:08:50.583 ] 00:08:50.583 }, 00:08:50.583 { 00:08:50.583 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:50.583 "subtype": "NVMe", 00:08:50.583 "listen_addresses": [ 00:08:50.583 { 00:08:50.583 "transport": "TCP", 00:08:50.583 "trtype": "TCP", 00:08:50.583 "adrfam": "IPv4", 00:08:50.583 "traddr": "10.0.0.2", 00:08:50.583 "trsvcid": "4420" 00:08:50.583 } 00:08:50.583 ], 00:08:50.583 "allow_any_host": true, 00:08:50.583 "hosts": [], 00:08:50.583 "serial_number": "SPDK00000000000002", 00:08:50.583 "model_number": "SPDK bdev Controller", 00:08:50.583 "max_namespaces": 32, 00:08:50.583 "min_cntlid": 1, 00:08:50.583 "max_cntlid": 65519, 00:08:50.583 "namespaces": [ 00:08:50.583 { 00:08:50.583 "nsid": 1, 00:08:50.583 "bdev_name": "Null2", 00:08:50.583 "name": "Null2", 00:08:50.583 "nguid": "E2A5033B4D5742AAA06C7ACA2EAF7D33", 00:08:50.583 "uuid": "e2a5033b-4d57-42aa-a06c-7aca2eaf7d33" 00:08:50.583 } 00:08:50.583 ] 00:08:50.583 }, 00:08:50.583 { 00:08:50.583 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:50.583 "subtype": "NVMe", 00:08:50.583 "listen_addresses": [ 00:08:50.583 { 00:08:50.583 "transport": "TCP", 00:08:50.583 "trtype": "TCP", 00:08:50.583 "adrfam": "IPv4", 00:08:50.583 "traddr": "10.0.0.2", 00:08:50.583 "trsvcid": "4420" 00:08:50.583 } 00:08:50.583 ], 00:08:50.583 "allow_any_host": true, 00:08:50.583 "hosts": [], 00:08:50.583 "serial_number": "SPDK00000000000003", 00:08:50.583 "model_number": "SPDK bdev Controller", 00:08:50.583 "max_namespaces": 32, 00:08:50.583 "min_cntlid": 1, 00:08:50.583 "max_cntlid": 65519, 00:08:50.583 "namespaces": [ 00:08:50.583 { 00:08:50.583 "nsid": 1, 00:08:50.583 "bdev_name": "Null3", 00:08:50.583 "name": "Null3", 00:08:50.583 "nguid": "3198AE0679A44E35A7FCEE4BB9296F93", 00:08:50.583 "uuid": "3198ae06-79a4-4e35-a7fc-ee4bb9296f93" 00:08:50.583 } 00:08:50.583 ] 
00:08:50.583 }, 00:08:50.583 { 00:08:50.583 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:50.583 "subtype": "NVMe", 00:08:50.583 "listen_addresses": [ 00:08:50.583 { 00:08:50.583 "transport": "TCP", 00:08:50.583 "trtype": "TCP", 00:08:50.583 "adrfam": "IPv4", 00:08:50.583 "traddr": "10.0.0.2", 00:08:50.583 "trsvcid": "4420" 00:08:50.583 } 00:08:50.583 ], 00:08:50.583 "allow_any_host": true, 00:08:50.583 "hosts": [], 00:08:50.583 "serial_number": "SPDK00000000000004", 00:08:50.583 "model_number": "SPDK bdev Controller", 00:08:50.583 "max_namespaces": 32, 00:08:50.583 "min_cntlid": 1, 00:08:50.583 "max_cntlid": 65519, 00:08:50.583 "namespaces": [ 00:08:50.583 { 00:08:50.583 "nsid": 1, 00:08:50.583 "bdev_name": "Null4", 00:08:50.583 "name": "Null4", 00:08:50.583 "nguid": "ED34DA52FA5740108C0A3F832953D7CA", 00:08:50.583 "uuid": "ed34da52-fa57-4010-8c0a-3f832953d7ca" 00:08:50.583 } 00:08:50.583 ] 00:08:50.583 } 00:08:50.584 ] 00:08:50.584 17:55:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:50.584 17:55:39 -- target/discovery.sh@42 -- # seq 1 4 00:08:50.584 17:55:39 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:50.584 17:55:39 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:50.584 17:55:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:50.584 17:55:39 -- common/autotest_common.sh@10 -- # set +x 00:08:50.584 17:55:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:50.584 17:55:39 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:50.584 17:55:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:50.584 17:55:39 -- common/autotest_common.sh@10 -- # set +x 00:08:50.584 17:55:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:50.584 17:55:39 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:50.584 17:55:39 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:50.584 17:55:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:50.584 17:55:39 -- common/autotest_common.sh@10 -- # set +x 00:08:50.584 17:55:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:50.584 17:55:39 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:50.584 17:55:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:50.584 17:55:39 -- common/autotest_common.sh@10 -- # set +x 00:08:50.584 17:55:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:50.584 17:55:39 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:50.584 17:55:39 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:50.584 17:55:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:50.584 17:55:39 -- common/autotest_common.sh@10 -- # set +x 00:08:50.584 17:55:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:50.584 17:55:39 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:50.584 17:55:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:50.584 17:55:39 -- common/autotest_common.sh@10 -- # set +x 00:08:50.584 17:55:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:50.584 17:55:39 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:50.584 17:55:39 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:50.584 17:55:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:50.584 17:55:39 -- common/autotest_common.sh@10 -- # set +x 00:08:50.584 17:55:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
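Stripped of the xtrace and RPC-wrapper noise, the discovery test traced here is a short create/verify/teardown cycle against the target's RPC socket. A condensed sketch of that sequence, with rpc.py standing in for the rpc_cmd wrapper used in the trace and the NQNs, ports, and sizes taken from this run:

rpc.py nvmf_create_transport -t tcp -o -u 8192         # transport options as used by the test
for i in 1 2 3 4; do
    rpc.py bdev_null_create Null$i 102400 512          # name, size (MB), block size
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
done
rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
nvme discover -t tcp -a 10.0.0.2 -s 4420               # on-the-wire view (--hostnqn/--hostid elided)
rpc.py nvmf_get_subsystems                             # same view via RPC
for i in 1 2 3 4; do                                   # teardown, as in the trace
    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode$i
    rpc.py bdev_null_delete Null$i
done
rpc.py nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430

The six discovery-log records reported above line up with what was created: one current discovery subsystem, four NVMe subsystems (cnode1 through cnode4), and one referral on port 4430.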
00:08:50.584 17:55:39 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:50.584 17:55:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:50.584 17:55:39 -- common/autotest_common.sh@10 -- # set +x 00:08:50.584 17:55:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:50.584 17:55:39 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:08:50.584 17:55:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:50.584 17:55:39 -- common/autotest_common.sh@10 -- # set +x 00:08:50.584 17:55:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:50.584 17:55:39 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:50.584 17:55:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:50.584 17:55:39 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:50.584 17:55:39 -- common/autotest_common.sh@10 -- # set +x 00:08:50.584 17:55:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:50.843 17:55:39 -- target/discovery.sh@49 -- # check_bdevs= 00:08:50.843 17:55:39 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:50.843 17:55:39 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:50.843 17:55:39 -- target/discovery.sh@57 -- # nvmftestfini 00:08:50.843 17:55:39 -- nvmf/common.sh@477 -- # nvmfcleanup 00:08:50.843 17:55:39 -- nvmf/common.sh@117 -- # sync 00:08:50.843 17:55:39 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:50.843 17:55:39 -- nvmf/common.sh@120 -- # set +e 00:08:50.843 17:55:39 -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:50.843 17:55:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:50.843 rmmod nvme_tcp 00:08:50.843 rmmod nvme_fabrics 00:08:50.843 rmmod nvme_keyring 00:08:50.843 17:55:39 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:50.843 17:55:39 -- nvmf/common.sh@124 -- # set -e 00:08:50.843 17:55:39 -- nvmf/common.sh@125 -- # return 0 00:08:50.843 17:55:39 -- nvmf/common.sh@478 -- # '[' -n 3224924 ']' 00:08:50.843 17:55:39 -- nvmf/common.sh@479 -- # killprocess 3224924 00:08:50.843 17:55:39 -- common/autotest_common.sh@936 -- # '[' -z 3224924 ']' 00:08:50.843 17:55:39 -- common/autotest_common.sh@940 -- # kill -0 3224924 00:08:50.843 17:55:39 -- common/autotest_common.sh@941 -- # uname 00:08:50.843 17:55:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:50.843 17:55:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3224924 00:08:50.843 17:55:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:50.843 17:55:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:50.843 17:55:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3224924' 00:08:50.843 killing process with pid 3224924 00:08:50.843 17:55:39 -- common/autotest_common.sh@955 -- # kill 3224924 00:08:50.843 [2024-04-15 17:55:39.642359] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:08:50.843 17:55:39 -- common/autotest_common.sh@960 -- # wait 3224924 00:08:51.103 17:55:39 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:08:51.103 17:55:39 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:08:51.103 17:55:39 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:08:51.103 17:55:39 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:51.103 17:55:39 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:51.103 17:55:39 -- nvmf/common.sh@617 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:08:51.103 17:55:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:51.103 17:55:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:53.007 17:55:41 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:53.007 00:08:53.007 real 0m5.887s 00:08:53.007 user 0m5.176s 00:08:53.007 sys 0m2.211s 00:08:53.007 17:55:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:53.007 17:55:41 -- common/autotest_common.sh@10 -- # set +x 00:08:53.007 ************************************ 00:08:53.007 END TEST nvmf_discovery 00:08:53.007 ************************************ 00:08:53.008 17:55:41 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:53.008 17:55:41 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:53.008 17:55:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:53.008 17:55:41 -- common/autotest_common.sh@10 -- # set +x 00:08:53.267 ************************************ 00:08:53.267 START TEST nvmf_referrals 00:08:53.267 ************************************ 00:08:53.267 17:55:42 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:53.267 * Looking for test storage... 00:08:53.267 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:53.267 17:55:42 -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:53.267 17:55:42 -- nvmf/common.sh@7 -- # uname -s 00:08:53.267 17:55:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:53.267 17:55:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:53.267 17:55:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:53.267 17:55:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:53.267 17:55:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:53.267 17:55:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:53.267 17:55:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:53.267 17:55:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:53.267 17:55:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:53.267 17:55:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:53.267 17:55:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:08:53.267 17:55:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:08:53.267 17:55:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:53.267 17:55:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:53.267 17:55:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:53.267 17:55:42 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:53.267 17:55:42 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:53.267 17:55:42 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:53.267 17:55:42 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:53.267 17:55:42 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:53.267 17:55:42 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.267 17:55:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.267 17:55:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.267 17:55:42 -- paths/export.sh@5 -- # export PATH 00:08:53.267 17:55:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.267 17:55:42 -- nvmf/common.sh@47 -- # : 0 00:08:53.267 17:55:42 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:53.267 17:55:42 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:53.267 17:55:42 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:53.267 17:55:42 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:53.267 17:55:42 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:53.267 17:55:42 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:53.267 17:55:42 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:53.267 17:55:42 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:53.267 17:55:42 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:53.267 17:55:42 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:53.267 17:55:42 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:08:53.267 17:55:42 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:53.267 17:55:42 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:53.267 17:55:42 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:53.267 17:55:42 -- target/referrals.sh@37 -- # nvmftestinit 00:08:53.267 17:55:42 -- nvmf/common.sh@430 -- # '[' 
-z tcp ']' 00:08:53.267 17:55:42 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:53.267 17:55:42 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:53.267 17:55:42 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:08:53.267 17:55:42 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:53.267 17:55:42 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:53.267 17:55:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:53.267 17:55:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:53.267 17:55:42 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:08:53.267 17:55:42 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:08:53.267 17:55:42 -- nvmf/common.sh@285 -- # xtrace_disable 00:08:53.267 17:55:42 -- common/autotest_common.sh@10 -- # set +x 00:08:55.823 17:55:44 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:55.823 17:55:44 -- nvmf/common.sh@291 -- # pci_devs=() 00:08:55.823 17:55:44 -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:55.823 17:55:44 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:55.823 17:55:44 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:55.823 17:55:44 -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:55.823 17:55:44 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:55.823 17:55:44 -- nvmf/common.sh@295 -- # net_devs=() 00:08:55.823 17:55:44 -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:55.823 17:55:44 -- nvmf/common.sh@296 -- # e810=() 00:08:55.823 17:55:44 -- nvmf/common.sh@296 -- # local -ga e810 00:08:55.823 17:55:44 -- nvmf/common.sh@297 -- # x722=() 00:08:55.823 17:55:44 -- nvmf/common.sh@297 -- # local -ga x722 00:08:55.823 17:55:44 -- nvmf/common.sh@298 -- # mlx=() 00:08:55.823 17:55:44 -- nvmf/common.sh@298 -- # local -ga mlx 00:08:55.823 17:55:44 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:55.823 17:55:44 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:55.823 17:55:44 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:55.823 17:55:44 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:55.823 17:55:44 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:55.823 17:55:44 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:55.823 17:55:44 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:55.823 17:55:44 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:55.823 17:55:44 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:55.823 17:55:44 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:55.823 17:55:44 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:55.823 17:55:44 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:55.823 17:55:44 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:55.823 17:55:44 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:55.823 17:55:44 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:55.823 17:55:44 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:55.823 17:55:44 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:55.823 17:55:44 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:55.823 17:55:44 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:08:55.823 Found 0000:84:00.0 (0x8086 - 0x159b) 00:08:55.823 17:55:44 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:55.823 17:55:44 -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:55.823 17:55:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:55.823 17:55:44 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:55.823 17:55:44 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:55.823 17:55:44 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:55.823 17:55:44 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:08:55.823 Found 0000:84:00.1 (0x8086 - 0x159b) 00:08:55.823 17:55:44 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:55.823 17:55:44 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:55.823 17:55:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:55.823 17:55:44 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:55.823 17:55:44 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:55.823 17:55:44 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:55.823 17:55:44 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:55.823 17:55:44 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:55.823 17:55:44 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:55.823 17:55:44 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:55.823 17:55:44 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:55.823 17:55:44 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:55.823 17:55:44 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:08:55.823 Found net devices under 0000:84:00.0: cvl_0_0 00:08:55.823 17:55:44 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:55.823 17:55:44 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:55.823 17:55:44 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:55.823 17:55:44 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:55.823 17:55:44 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:55.823 17:55:44 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:08:55.823 Found net devices under 0000:84:00.1: cvl_0_1 00:08:55.823 17:55:44 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:55.823 17:55:44 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:08:55.823 17:55:44 -- nvmf/common.sh@403 -- # is_hw=yes 00:08:55.823 17:55:44 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:08:55.823 17:55:44 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:08:55.823 17:55:44 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:08:55.823 17:55:44 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:55.823 17:55:44 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:55.823 17:55:44 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:55.823 17:55:44 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:55.823 17:55:44 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:55.823 17:55:44 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:55.823 17:55:44 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:55.823 17:55:44 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:55.823 17:55:44 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:55.823 17:55:44 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:55.823 17:55:44 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:55.823 17:55:44 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:55.823 17:55:44 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:08:55.824 17:55:44 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:55.824 17:55:44 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:55.824 17:55:44 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:55.824 17:55:44 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:55.824 17:55:44 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:55.824 17:55:44 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:55.824 17:55:44 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:55.824 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:55.824 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.253 ms 00:08:55.824 00:08:55.824 --- 10.0.0.2 ping statistics --- 00:08:55.824 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:55.824 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:08:55.824 17:55:44 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:55.824 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:55.824 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:08:55.824 00:08:55.824 --- 10.0.0.1 ping statistics --- 00:08:55.824 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:55.824 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:08:55.824 17:55:44 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:55.824 17:55:44 -- nvmf/common.sh@411 -- # return 0 00:08:55.824 17:55:44 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:55.824 17:55:44 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:55.824 17:55:44 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:08:55.824 17:55:44 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:08:55.824 17:55:44 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:55.824 17:55:44 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:08:55.824 17:55:44 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:08:55.824 17:55:44 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:55.824 17:55:44 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:55.824 17:55:44 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:55.824 17:55:44 -- common/autotest_common.sh@10 -- # set +x 00:08:55.824 17:55:44 -- nvmf/common.sh@470 -- # nvmfpid=3227050 00:08:55.824 17:55:44 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:55.824 17:55:44 -- nvmf/common.sh@471 -- # waitforlisten 3227050 00:08:55.824 17:55:44 -- common/autotest_common.sh@817 -- # '[' -z 3227050 ']' 00:08:55.824 17:55:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:55.824 17:55:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:55.824 17:55:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:55.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:55.824 17:55:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:55.824 17:55:44 -- common/autotest_common.sh@10 -- # set +x 00:08:55.824 [2024-04-15 17:55:44.683743] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
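As in the discovery test, nvmfappstart launches the target inside the namespace and blocks until its RPC socket answers. A rough sketch of what that amounts to; the polling loop only illustrates the effect of waitforlisten, not its actual implementation, and SPDK_BIN is a placeholder for the workspace path shown in the trace:

SPDK_BIN=/path/to/spdk/build/bin                   # placeholder; the log uses the jenkins workspace path
ip netns exec cvl_0_0_ns_spdk "$SPDK_BIN/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# poll until the app is up and listening on the default RPC socket
until rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done

Here -m 0xF pins the app to four cores, which is why four "Reactor started on core N" notices follow, and -e 0xFFFF enables all tracepoint groups, producing the spdk_trace notices seen at startup.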
00:08:55.824 [2024-04-15 17:55:44.683834] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:55.824 EAL: No free 2048 kB hugepages reported on node 1 00:08:55.824 [2024-04-15 17:55:44.760793] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:56.083 [2024-04-15 17:55:44.858921] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:56.083 [2024-04-15 17:55:44.858990] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:56.083 [2024-04-15 17:55:44.859007] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:56.083 [2024-04-15 17:55:44.859022] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:56.083 [2024-04-15 17:55:44.859035] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:56.083 [2024-04-15 17:55:44.859111] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:56.083 [2024-04-15 17:55:44.859166] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:56.083 [2024-04-15 17:55:44.859217] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:56.083 [2024-04-15 17:55:44.859220] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.083 17:55:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:56.083 17:55:45 -- common/autotest_common.sh@850 -- # return 0 00:08:56.083 17:55:45 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:56.083 17:55:45 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:56.083 17:55:45 -- common/autotest_common.sh@10 -- # set +x 00:08:56.083 17:55:45 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:56.083 17:55:45 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:56.083 17:55:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:56.083 17:55:45 -- common/autotest_common.sh@10 -- # set +x 00:08:56.083 [2024-04-15 17:55:45.030018] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:56.343 17:55:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:56.343 17:55:45 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:56.343 17:55:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:56.343 17:55:45 -- common/autotest_common.sh@10 -- # set +x 00:08:56.343 [2024-04-15 17:55:45.042241] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:08:56.343 17:55:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:56.343 17:55:45 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:56.343 17:55:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:56.343 17:55:45 -- common/autotest_common.sh@10 -- # set +x 00:08:56.343 17:55:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:56.343 17:55:45 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:56.343 17:55:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:56.343 17:55:45 -- common/autotest_common.sh@10 -- # set +x 00:08:56.343 17:55:45 -- common/autotest_common.sh@577 -- # 
[[ 0 == 0 ]] 00:08:56.343 17:55:45 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:08:56.343 17:55:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:56.343 17:55:45 -- common/autotest_common.sh@10 -- # set +x 00:08:56.343 17:55:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:56.343 17:55:45 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:56.343 17:55:45 -- target/referrals.sh@48 -- # jq length 00:08:56.343 17:55:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:56.343 17:55:45 -- common/autotest_common.sh@10 -- # set +x 00:08:56.343 17:55:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:56.343 17:55:45 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:56.343 17:55:45 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:56.343 17:55:45 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:56.343 17:55:45 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:56.343 17:55:45 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:56.343 17:55:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:56.343 17:55:45 -- target/referrals.sh@21 -- # sort 00:08:56.343 17:55:45 -- common/autotest_common.sh@10 -- # set +x 00:08:56.343 17:55:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:56.343 17:55:45 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:56.343 17:55:45 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:56.343 17:55:45 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:56.343 17:55:45 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:56.343 17:55:45 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:56.343 17:55:45 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:56.343 17:55:45 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:56.343 17:55:45 -- target/referrals.sh@26 -- # sort 00:08:56.602 17:55:45 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:56.602 17:55:45 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:56.602 17:55:45 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:56.602 17:55:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:56.602 17:55:45 -- common/autotest_common.sh@10 -- # set +x 00:08:56.602 17:55:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:56.602 17:55:45 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:56.602 17:55:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:56.602 17:55:45 -- common/autotest_common.sh@10 -- # set +x 00:08:56.602 17:55:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:56.602 17:55:45 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:56.602 17:55:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:56.602 17:55:45 -- common/autotest_common.sh@10 -- # set +x 00:08:56.602 17:55:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:56.602 17:55:45 -- target/referrals.sh@56 -- # rpc_cmd 
nvmf_discovery_get_referrals 00:08:56.602 17:55:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:56.602 17:55:45 -- target/referrals.sh@56 -- # jq length 00:08:56.602 17:55:45 -- common/autotest_common.sh@10 -- # set +x 00:08:56.602 17:55:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:56.602 17:55:45 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:56.602 17:55:45 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:56.602 17:55:45 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:56.602 17:55:45 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:56.602 17:55:45 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:56.602 17:55:45 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:56.602 17:55:45 -- target/referrals.sh@26 -- # sort 00:08:56.602 17:55:45 -- target/referrals.sh@26 -- # echo 00:08:56.602 17:55:45 -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:56.602 17:55:45 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:08:56.602 17:55:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:56.602 17:55:45 -- common/autotest_common.sh@10 -- # set +x 00:08:56.602 17:55:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:56.602 17:55:45 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:56.602 17:55:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:56.602 17:55:45 -- common/autotest_common.sh@10 -- # set +x 00:08:56.602 17:55:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:56.602 17:55:45 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:56.602 17:55:45 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:56.602 17:55:45 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:56.602 17:55:45 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:56.602 17:55:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:56.602 17:55:45 -- target/referrals.sh@21 -- # sort 00:08:56.602 17:55:45 -- common/autotest_common.sh@10 -- # set +x 00:08:56.602 17:55:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:56.602 17:55:45 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:56.602 17:55:45 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:56.602 17:55:45 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:56.602 17:55:45 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:56.603 17:55:45 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:56.603 17:55:45 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:56.603 17:55:45 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:56.603 17:55:45 -- target/referrals.sh@26 -- # sort 00:08:56.861 17:55:45 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:56.861 17:55:45 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:56.861 17:55:45 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme 
subsystem' 00:08:56.861 17:55:45 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:56.861 17:55:45 -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:56.861 17:55:45 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:56.861 17:55:45 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:56.861 17:55:45 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:56.861 17:55:45 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:56.861 17:55:45 -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:56.861 17:55:45 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:56.861 17:55:45 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:56.861 17:55:45 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:57.120 17:55:45 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:57.120 17:55:45 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:57.120 17:55:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:57.120 17:55:45 -- common/autotest_common.sh@10 -- # set +x 00:08:57.120 17:55:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:57.120 17:55:45 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:57.120 17:55:45 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:57.120 17:55:45 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:57.120 17:55:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:57.120 17:55:45 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:57.120 17:55:45 -- common/autotest_common.sh@10 -- # set +x 00:08:57.120 17:55:45 -- target/referrals.sh@21 -- # sort 00:08:57.120 17:55:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:57.120 17:55:45 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:57.120 17:55:45 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:57.120 17:55:45 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:57.120 17:55:45 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:57.120 17:55:45 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:57.120 17:55:45 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:57.120 17:55:45 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:57.120 17:55:45 -- target/referrals.sh@26 -- # sort 00:08:57.120 17:55:46 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:57.120 17:55:46 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:57.120 17:55:46 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:57.120 17:55:46 -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:57.120 17:55:46 -- target/referrals.sh@31 -- # 
local 'subtype=nvme subsystem' 00:08:57.120 17:55:46 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:57.120 17:55:46 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:57.379 17:55:46 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:57.379 17:55:46 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:57.379 17:55:46 -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:57.379 17:55:46 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:57.379 17:55:46 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:57.379 17:55:46 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:57.379 17:55:46 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:57.379 17:55:46 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:57.379 17:55:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:57.379 17:55:46 -- common/autotest_common.sh@10 -- # set +x 00:08:57.379 17:55:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:57.379 17:55:46 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:57.379 17:55:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:57.379 17:55:46 -- target/referrals.sh@82 -- # jq length 00:08:57.379 17:55:46 -- common/autotest_common.sh@10 -- # set +x 00:08:57.379 17:55:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:57.639 17:55:46 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:57.639 17:55:46 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:57.639 17:55:46 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:57.639 17:55:46 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:57.639 17:55:46 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:57.639 17:55:46 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:57.639 17:55:46 -- target/referrals.sh@26 -- # sort 00:08:57.639 17:55:46 -- target/referrals.sh@26 -- # echo 00:08:57.639 17:55:46 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:57.639 17:55:46 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:57.639 17:55:46 -- target/referrals.sh@86 -- # nvmftestfini 00:08:57.639 17:55:46 -- nvmf/common.sh@477 -- # nvmfcleanup 00:08:57.639 17:55:46 -- nvmf/common.sh@117 -- # sync 00:08:57.639 17:55:46 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:57.639 17:55:46 -- nvmf/common.sh@120 -- # set +e 00:08:57.639 17:55:46 -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:57.639 17:55:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:57.639 rmmod nvme_tcp 00:08:57.639 rmmod nvme_fabrics 00:08:57.639 rmmod nvme_keyring 00:08:57.639 17:55:46 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:57.639 17:55:46 -- nvmf/common.sh@124 -- # set -e 
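The referrals test that is cleaning up here exercised one property: that nvmf_discovery_add_referral and nvmf_discovery_remove_referral keep the RPC view and the on-the-wire discovery log in agreement. A minimal sketch of that round trip, using the addresses and jq filters from this run (the --hostnqn/--hostid arguments passed in the trace are elided):

rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
    rpc.py nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
done
rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr'      # RPC view
nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
    | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'  # wire view
for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do                           # then tear down
    rpc.py nvmf_discovery_remove_referral -t tcp -a "$ip" -s 4430
done

The second half of the test adds referrals with an explicit subsystem NQN: pointing one at nqn.2016-06.io.spdk:cnode1 makes it surface as an "nvme subsystem" discovery record, while -n nqn.2014-08.org.nvmexpress.discovery keeps it a "discovery subsystem referral", which is exactly what the two jq select() checks in the trace distinguish.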
00:08:57.639 17:55:46 -- nvmf/common.sh@125 -- # return 0 00:08:57.639 17:55:46 -- nvmf/common.sh@478 -- # '[' -n 3227050 ']' 00:08:57.639 17:55:46 -- nvmf/common.sh@479 -- # killprocess 3227050 00:08:57.639 17:55:46 -- common/autotest_common.sh@936 -- # '[' -z 3227050 ']' 00:08:57.639 17:55:46 -- common/autotest_common.sh@940 -- # kill -0 3227050 00:08:57.639 17:55:46 -- common/autotest_common.sh@941 -- # uname 00:08:57.639 17:55:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:57.639 17:55:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3227050 00:08:57.639 17:55:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:57.639 17:55:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:57.639 17:55:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3227050' 00:08:57.639 killing process with pid 3227050 00:08:57.639 17:55:46 -- common/autotest_common.sh@955 -- # kill 3227050 00:08:57.639 17:55:46 -- common/autotest_common.sh@960 -- # wait 3227050 00:08:57.899 17:55:46 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:08:57.899 17:55:46 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:08:57.899 17:55:46 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:08:57.899 17:55:46 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:57.899 17:55:46 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:57.899 17:55:46 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:57.899 17:55:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:57.899 17:55:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:00.437 17:55:48 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:00.437 00:09:00.437 real 0m6.789s 00:09:00.437 user 0m9.367s 00:09:00.437 sys 0m2.287s 00:09:00.437 17:55:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:00.437 17:55:48 -- common/autotest_common.sh@10 -- # set +x 00:09:00.437 ************************************ 00:09:00.437 END TEST nvmf_referrals 00:09:00.437 ************************************ 00:09:00.437 17:55:48 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:09:00.437 17:55:48 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:00.437 17:55:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:00.437 17:55:48 -- common/autotest_common.sh@10 -- # set +x 00:09:00.437 ************************************ 00:09:00.437 START TEST nvmf_connect_disconnect 00:09:00.437 ************************************ 00:09:00.437 17:55:49 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:09:00.437 * Looking for test storage... 
00:09:00.437 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:00.437 17:55:49 -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:00.437 17:55:49 -- nvmf/common.sh@7 -- # uname -s 00:09:00.438 17:55:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:00.438 17:55:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:00.438 17:55:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:00.438 17:55:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:00.438 17:55:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:00.438 17:55:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:00.438 17:55:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:00.438 17:55:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:00.438 17:55:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:00.438 17:55:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:00.438 17:55:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:09:00.438 17:55:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:09:00.438 17:55:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:00.438 17:55:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:00.438 17:55:49 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:00.438 17:55:49 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:00.438 17:55:49 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:00.438 17:55:49 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:00.438 17:55:49 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:00.438 17:55:49 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:00.438 17:55:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.438 17:55:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.438 17:55:49 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.438 17:55:49 -- paths/export.sh@5 -- # export PATH 00:09:00.438 17:55:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.438 17:55:49 -- nvmf/common.sh@47 -- # : 0 00:09:00.438 17:55:49 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:00.438 17:55:49 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:00.438 17:55:49 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:00.438 17:55:49 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:00.438 17:55:49 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:00.438 17:55:49 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:00.438 17:55:49 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:00.438 17:55:49 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:00.438 17:55:49 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:00.438 17:55:49 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:00.438 17:55:49 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:09:00.438 17:55:49 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:09:00.438 17:55:49 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:00.438 17:55:49 -- nvmf/common.sh@437 -- # prepare_net_devs 00:09:00.438 17:55:49 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:09:00.438 17:55:49 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:09:00.438 17:55:49 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:00.438 17:55:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:00.438 17:55:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:00.438 17:55:49 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:09:00.438 17:55:49 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:09:00.438 17:55:49 -- nvmf/common.sh@285 -- # xtrace_disable 00:09:00.438 17:55:49 -- common/autotest_common.sh@10 -- # set +x 00:09:02.974 17:55:51 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:09:02.974 17:55:51 -- nvmf/common.sh@291 -- # pci_devs=() 00:09:02.974 17:55:51 -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:02.974 17:55:51 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:02.974 17:55:51 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:02.974 17:55:51 -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:02.975 17:55:51 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:02.975 17:55:51 -- nvmf/common.sh@295 -- # net_devs=() 00:09:02.975 17:55:51 -- nvmf/common.sh@295 -- # local -ga net_devs 
00:09:02.975 17:55:51 -- nvmf/common.sh@296 -- # e810=() 00:09:02.975 17:55:51 -- nvmf/common.sh@296 -- # local -ga e810 00:09:02.975 17:55:51 -- nvmf/common.sh@297 -- # x722=() 00:09:02.975 17:55:51 -- nvmf/common.sh@297 -- # local -ga x722 00:09:02.975 17:55:51 -- nvmf/common.sh@298 -- # mlx=() 00:09:02.975 17:55:51 -- nvmf/common.sh@298 -- # local -ga mlx 00:09:02.975 17:55:51 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:02.975 17:55:51 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:02.975 17:55:51 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:02.975 17:55:51 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:02.975 17:55:51 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:02.975 17:55:51 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:02.975 17:55:51 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:02.975 17:55:51 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:02.975 17:55:51 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:02.975 17:55:51 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:02.975 17:55:51 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:02.975 17:55:51 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:02.975 17:55:51 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:02.975 17:55:51 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:02.975 17:55:51 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:02.975 17:55:51 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:02.975 17:55:51 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:02.975 17:55:51 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:02.975 17:55:51 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:09:02.975 Found 0000:84:00.0 (0x8086 - 0x159b) 00:09:02.975 17:55:51 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:02.975 17:55:51 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:02.975 17:55:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:02.975 17:55:51 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:02.975 17:55:51 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:02.975 17:55:51 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:02.975 17:55:51 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:09:02.975 Found 0000:84:00.1 (0x8086 - 0x159b) 00:09:02.975 17:55:51 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:02.975 17:55:51 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:02.975 17:55:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:02.975 17:55:51 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:02.975 17:55:51 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:02.975 17:55:51 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:02.975 17:55:51 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:02.975 17:55:51 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:02.975 17:55:51 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:02.975 17:55:51 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:02.975 17:55:51 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:02.975 17:55:51 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:02.975 17:55:51 -- nvmf/common.sh@389 -- # echo 'Found net devices 
under 0000:84:00.0: cvl_0_0' 00:09:02.975 Found net devices under 0000:84:00.0: cvl_0_0 00:09:02.975 17:55:51 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:02.975 17:55:51 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:02.975 17:55:51 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:02.975 17:55:51 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:02.975 17:55:51 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:02.975 17:55:51 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:09:02.975 Found net devices under 0000:84:00.1: cvl_0_1 00:09:02.975 17:55:51 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:02.975 17:55:51 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:09:02.975 17:55:51 -- nvmf/common.sh@403 -- # is_hw=yes 00:09:02.975 17:55:51 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:09:02.975 17:55:51 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:09:02.975 17:55:51 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:09:02.975 17:55:51 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:02.975 17:55:51 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:02.975 17:55:51 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:02.975 17:55:51 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:02.975 17:55:51 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:02.975 17:55:51 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:02.975 17:55:51 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:02.975 17:55:51 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:02.975 17:55:51 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:02.975 17:55:51 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:02.975 17:55:51 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:02.975 17:55:51 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:02.975 17:55:51 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:02.975 17:55:51 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:02.975 17:55:51 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:02.975 17:55:51 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:02.975 17:55:51 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:02.975 17:55:51 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:02.975 17:55:51 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:02.975 17:55:51 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:02.975 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:02.975 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.261 ms 00:09:02.975 00:09:02.975 --- 10.0.0.2 ping statistics --- 00:09:02.975 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:02.975 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:09:02.975 17:55:51 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:02.975 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:02.975 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.178 ms 00:09:02.975 00:09:02.975 --- 10.0.0.1 ping statistics --- 00:09:02.975 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:02.975 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:09:02.975 17:55:51 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:02.975 17:55:51 -- nvmf/common.sh@411 -- # return 0 00:09:02.975 17:55:51 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:09:02.975 17:55:51 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:02.975 17:55:51 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:09:02.975 17:55:51 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:09:02.975 17:55:51 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:02.975 17:55:51 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:09:02.975 17:55:51 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:09:02.975 17:55:51 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:09:02.975 17:55:51 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:09:02.975 17:55:51 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:02.975 17:55:51 -- common/autotest_common.sh@10 -- # set +x 00:09:02.975 17:55:51 -- nvmf/common.sh@470 -- # nvmfpid=3229367 00:09:02.975 17:55:51 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:02.975 17:55:51 -- nvmf/common.sh@471 -- # waitforlisten 3229367 00:09:02.975 17:55:51 -- common/autotest_common.sh@817 -- # '[' -z 3229367 ']' 00:09:02.975 17:55:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:02.975 17:55:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:02.975 17:55:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:02.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:02.975 17:55:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:02.975 17:55:51 -- common/autotest_common.sh@10 -- # set +x 00:09:02.975 [2024-04-15 17:55:51.597205] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:09:02.975 [2024-04-15 17:55:51.597294] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:02.975 EAL: No free 2048 kB hugepages reported on node 1 00:09:02.975 [2024-04-15 17:55:51.705115] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:02.975 [2024-04-15 17:55:51.803346] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:02.975 [2024-04-15 17:55:51.803417] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:02.975 [2024-04-15 17:55:51.803442] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:02.975 [2024-04-15 17:55:51.803458] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:02.975 [2024-04-15 17:55:51.803471] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:02.975 [2024-04-15 17:55:51.803558] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:02.975 [2024-04-15 17:55:51.803612] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:02.975 [2024-04-15 17:55:51.803662] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:02.975 [2024-04-15 17:55:51.803665] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.233 17:55:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:03.233 17:55:51 -- common/autotest_common.sh@850 -- # return 0 00:09:03.233 17:55:51 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:09:03.233 17:55:51 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:03.233 17:55:51 -- common/autotest_common.sh@10 -- # set +x 00:09:03.233 17:55:51 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:03.233 17:55:51 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:09:03.233 17:55:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:03.233 17:55:51 -- common/autotest_common.sh@10 -- # set +x 00:09:03.233 [2024-04-15 17:55:51.971967] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:03.233 17:55:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:03.233 17:55:51 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:09:03.233 17:55:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:03.233 17:55:51 -- common/autotest_common.sh@10 -- # set +x 00:09:03.233 17:55:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:03.233 17:55:52 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:09:03.233 17:55:52 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:03.233 17:55:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:03.233 17:55:52 -- common/autotest_common.sh@10 -- # set +x 00:09:03.233 17:55:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:03.233 17:55:52 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:03.233 17:55:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:03.233 17:55:52 -- common/autotest_common.sh@10 -- # set +x 00:09:03.233 17:55:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:03.233 17:55:52 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:03.233 17:55:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:03.233 17:55:52 -- common/autotest_common.sh@10 -- # set +x 00:09:03.233 [2024-04-15 17:55:52.034099] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:03.233 17:55:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:03.233 17:55:52 -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:09:03.233 17:55:52 -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:09:03.233 17:55:52 -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:09:03.233 17:55:52 -- target/connect_disconnect.sh@34 -- # set +x 00:09:05.764 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:07.667 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:10.206 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:12.744 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 
00:09:14.652 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) [94 further identical 'NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)' messages, one per remaining connect/disconnect iteration between 00:09:17.216 and 00:12:48.476, elided] 00:12:50.415 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:50.415 17:59:39 -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT
00:12:50.415 17:59:39 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:50.415 17:59:39 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:50.415 17:59:39 -- nvmf/common.sh@117 -- # sync 00:12:50.415 17:59:39 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:50.415 17:59:39 -- nvmf/common.sh@120 -- # set +e 00:12:50.415 17:59:39 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:50.415 17:59:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:50.415 rmmod nvme_tcp 00:12:50.415 rmmod nvme_fabrics 00:12:50.415 rmmod nvme_keyring 00:12:50.415 17:59:39 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:50.415 17:59:39 -- nvmf/common.sh@124 -- # set -e 00:12:50.415 17:59:39 -- nvmf/common.sh@125 -- # return 0 00:12:50.415 17:59:39 -- nvmf/common.sh@478 -- # '[' -n 3229367 ']' 00:12:50.415 17:59:39 -- nvmf/common.sh@479 -- # killprocess 3229367 00:12:50.415 17:59:39 -- common/autotest_common.sh@936 -- # '[' -z 3229367 ']' 00:12:50.415 17:59:39 -- common/autotest_common.sh@940 -- # kill -0 3229367 00:12:50.415 17:59:39 -- common/autotest_common.sh@941 -- # uname 00:12:50.415 17:59:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:50.415 17:59:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3229367 00:12:50.674 17:59:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:50.674 17:59:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:50.674 17:59:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3229367' 00:12:50.674 killing process with pid 3229367 00:12:50.674 17:59:39 -- common/autotest_common.sh@955 -- # kill 3229367 00:12:50.674 17:59:39 -- common/autotest_common.sh@960 -- # wait 3229367 00:12:50.933 17:59:39 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:50.933 17:59:39 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:12:50.933 17:59:39 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:12:50.933 17:59:39 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:50.933 17:59:39 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:50.933 17:59:39 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:50.933 17:59:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:50.933 17:59:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:52.837 17:59:41 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:52.837 00:12:52.837 real 3m52.680s 00:12:52.837 user 14m41.803s 00:12:52.837 sys 0m35.168s 00:12:52.837 17:59:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:52.837 17:59:41 -- common/autotest_common.sh@10 -- # set +x 00:12:52.837 ************************************ 00:12:52.837 END TEST nvmf_connect_disconnect 00:12:52.837 ************************************ 00:12:52.837 17:59:41 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:52.837 17:59:41 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:52.837 17:59:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:52.837 17:59:41 -- common/autotest_common.sh@10 -- # set +x 00:12:53.096 ************************************ 00:12:53.096 START TEST nvmf_multitarget 00:12:53.096 ************************************ 00:12:53.096 17:59:41 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:53.096 * Looking for test storage... 
00:12:53.096 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:53.096 17:59:41 -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:53.096 17:59:41 -- nvmf/common.sh@7 -- # uname -s 00:12:53.096 17:59:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:53.096 17:59:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:53.096 17:59:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:53.096 17:59:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:53.096 17:59:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:53.096 17:59:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:53.096 17:59:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:53.096 17:59:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:53.096 17:59:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:53.096 17:59:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:53.096 17:59:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:12:53.097 17:59:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:12:53.097 17:59:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:53.097 17:59:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:53.097 17:59:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:53.097 17:59:41 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:53.097 17:59:41 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:53.097 17:59:41 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:53.097 17:59:41 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:53.097 17:59:41 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:53.097 17:59:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.097 17:59:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.097 17:59:41 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.097 17:59:41 -- paths/export.sh@5 -- # export PATH 00:12:53.097 17:59:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.097 17:59:41 -- nvmf/common.sh@47 -- # : 0 00:12:53.097 17:59:41 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:53.097 17:59:41 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:53.097 17:59:41 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:53.097 17:59:41 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:53.097 17:59:41 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:53.097 17:59:41 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:53.097 17:59:41 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:53.097 17:59:41 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:53.097 17:59:41 -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:53.097 17:59:41 -- target/multitarget.sh@15 -- # nvmftestinit 00:12:53.097 17:59:41 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:12:53.097 17:59:41 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:53.097 17:59:41 -- nvmf/common.sh@437 -- # prepare_net_devs 00:12:53.097 17:59:41 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:12:53.097 17:59:41 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:12:53.097 17:59:41 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:53.097 17:59:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:53.097 17:59:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:53.097 17:59:41 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:12:53.097 17:59:41 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:12:53.097 17:59:41 -- nvmf/common.sh@285 -- # xtrace_disable 00:12:53.097 17:59:41 -- common/autotest_common.sh@10 -- # set +x 00:12:55.625 17:59:44 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:55.625 17:59:44 -- nvmf/common.sh@291 -- # pci_devs=() 00:12:55.625 17:59:44 -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:55.625 17:59:44 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:55.625 17:59:44 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:55.625 17:59:44 -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:55.625 17:59:44 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:55.625 17:59:44 -- nvmf/common.sh@295 -- # net_devs=() 00:12:55.625 17:59:44 -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:55.625 17:59:44 -- 
nvmf/common.sh@296 -- # e810=() 00:12:55.625 17:59:44 -- nvmf/common.sh@296 -- # local -ga e810 00:12:55.625 17:59:44 -- nvmf/common.sh@297 -- # x722=() 00:12:55.625 17:59:44 -- nvmf/common.sh@297 -- # local -ga x722 00:12:55.625 17:59:44 -- nvmf/common.sh@298 -- # mlx=() 00:12:55.625 17:59:44 -- nvmf/common.sh@298 -- # local -ga mlx 00:12:55.625 17:59:44 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:55.625 17:59:44 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:55.625 17:59:44 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:55.625 17:59:44 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:55.625 17:59:44 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:55.625 17:59:44 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:55.625 17:59:44 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:55.625 17:59:44 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:55.625 17:59:44 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:55.625 17:59:44 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:55.625 17:59:44 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:55.625 17:59:44 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:55.625 17:59:44 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:55.625 17:59:44 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:55.625 17:59:44 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:55.625 17:59:44 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:55.625 17:59:44 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:55.625 17:59:44 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:55.625 17:59:44 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:12:55.625 Found 0000:84:00.0 (0x8086 - 0x159b) 00:12:55.625 17:59:44 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:55.625 17:59:44 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:55.625 17:59:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:55.625 17:59:44 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:55.625 17:59:44 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:55.625 17:59:44 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:55.625 17:59:44 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:12:55.625 Found 0000:84:00.1 (0x8086 - 0x159b) 00:12:55.625 17:59:44 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:55.625 17:59:44 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:55.625 17:59:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:55.625 17:59:44 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:55.625 17:59:44 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:55.625 17:59:44 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:55.625 17:59:44 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:55.625 17:59:44 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:55.625 17:59:44 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:55.625 17:59:44 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:55.625 17:59:44 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:55.625 17:59:44 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:55.625 17:59:44 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 
00:12:55.625 Found net devices under 0000:84:00.0: cvl_0_0 00:12:55.625 17:59:44 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:55.625 17:59:44 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:55.625 17:59:44 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:55.625 17:59:44 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:55.625 17:59:44 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:55.625 17:59:44 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:12:55.625 Found net devices under 0000:84:00.1: cvl_0_1 00:12:55.625 17:59:44 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:55.625 17:59:44 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:12:55.625 17:59:44 -- nvmf/common.sh@403 -- # is_hw=yes 00:12:55.625 17:59:44 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:12:55.625 17:59:44 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:12:55.625 17:59:44 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:12:55.625 17:59:44 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:55.625 17:59:44 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:55.625 17:59:44 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:55.625 17:59:44 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:55.625 17:59:44 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:55.625 17:59:44 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:55.625 17:59:44 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:55.625 17:59:44 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:55.625 17:59:44 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:55.625 17:59:44 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:55.625 17:59:44 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:55.625 17:59:44 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:55.625 17:59:44 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:55.625 17:59:44 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:55.625 17:59:44 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:55.625 17:59:44 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:55.625 17:59:44 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:55.625 17:59:44 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:55.625 17:59:44 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:55.625 17:59:44 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:55.625 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:55.625 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.143 ms 00:12:55.625 00:12:55.625 --- 10.0.0.2 ping statistics --- 00:12:55.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:55.625 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:12:55.625 17:59:44 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:55.625 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:55.625 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.096 ms 00:12:55.625 00:12:55.625 --- 10.0.0.1 ping statistics --- 00:12:55.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:55.625 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:12:55.625 17:59:44 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:55.625 17:59:44 -- nvmf/common.sh@411 -- # return 0 00:12:55.625 17:59:44 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:12:55.625 17:59:44 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:55.625 17:59:44 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:12:55.625 17:59:44 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:12:55.625 17:59:44 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:55.625 17:59:44 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:12:55.625 17:59:44 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:12:55.625 17:59:44 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:55.625 17:59:44 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:12:55.625 17:59:44 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:55.625 17:59:44 -- common/autotest_common.sh@10 -- # set +x 00:12:55.625 17:59:44 -- nvmf/common.sh@470 -- # nvmfpid=3259732 00:12:55.625 17:59:44 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:55.625 17:59:44 -- nvmf/common.sh@471 -- # waitforlisten 3259732 00:12:55.625 17:59:44 -- common/autotest_common.sh@817 -- # '[' -z 3259732 ']' 00:12:55.625 17:59:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:55.625 17:59:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:55.625 17:59:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:55.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:55.625 17:59:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:55.625 17:59:44 -- common/autotest_common.sh@10 -- # set +x 00:12:55.625 [2024-04-15 17:59:44.542722] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:12:55.625 [2024-04-15 17:59:44.542897] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:55.883 EAL: No free 2048 kB hugepages reported on node 1 00:12:55.883 [2024-04-15 17:59:44.664775] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:55.883 [2024-04-15 17:59:44.761464] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:55.883 [2024-04-15 17:59:44.761542] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:55.883 [2024-04-15 17:59:44.761560] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:55.883 [2024-04-15 17:59:44.761574] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:55.883 [2024-04-15 17:59:44.761587] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:55.883 [2024-04-15 17:59:44.761691] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:55.883 [2024-04-15 17:59:44.761748] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:55.883 [2024-04-15 17:59:44.761800] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:55.883 [2024-04-15 17:59:44.761802] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:56.141 17:59:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:56.141 17:59:44 -- common/autotest_common.sh@850 -- # return 0 00:12:56.141 17:59:44 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:12:56.141 17:59:44 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:56.141 17:59:44 -- common/autotest_common.sh@10 -- # set +x 00:12:56.141 17:59:44 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:56.141 17:59:44 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:56.141 17:59:44 -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:56.141 17:59:44 -- target/multitarget.sh@21 -- # jq length 00:12:56.141 17:59:45 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:56.141 17:59:45 -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:56.398 "nvmf_tgt_1" 00:12:56.398 17:59:45 -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:56.398 "nvmf_tgt_2" 00:12:56.656 17:59:45 -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:56.656 17:59:45 -- target/multitarget.sh@28 -- # jq length 00:12:56.656 17:59:45 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:56.656 17:59:45 -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:56.914 true 00:12:56.914 17:59:45 -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:57.171 true 00:12:57.171 17:59:46 -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:57.171 17:59:46 -- target/multitarget.sh@35 -- # jq length 00:12:57.429 17:59:46 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:57.429 17:59:46 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:57.429 17:59:46 -- target/multitarget.sh@41 -- # nvmftestfini 00:12:57.429 17:59:46 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:57.429 17:59:46 -- nvmf/common.sh@117 -- # sync 00:12:57.429 17:59:46 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:57.429 17:59:46 -- nvmf/common.sh@120 -- # set +e 00:12:57.430 17:59:46 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:57.430 17:59:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:57.430 rmmod nvme_tcp 00:12:57.430 rmmod nvme_fabrics 00:12:57.430 rmmod nvme_keyring 00:12:57.430 17:59:46 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:57.430 17:59:46 -- nvmf/common.sh@124 -- # set -e 00:12:57.430 17:59:46 -- nvmf/common.sh@125 -- # return 0 
00:12:57.430 17:59:46 -- nvmf/common.sh@478 -- # '[' -n 3259732 ']' 00:12:57.430 17:59:46 -- nvmf/common.sh@479 -- # killprocess 3259732 00:12:57.430 17:59:46 -- common/autotest_common.sh@936 -- # '[' -z 3259732 ']' 00:12:57.430 17:59:46 -- common/autotest_common.sh@940 -- # kill -0 3259732 00:12:57.430 17:59:46 -- common/autotest_common.sh@941 -- # uname 00:12:57.430 17:59:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:57.430 17:59:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3259732 00:12:57.430 17:59:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:57.430 17:59:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:57.430 17:59:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3259732' 00:12:57.430 killing process with pid 3259732 00:12:57.430 17:59:46 -- common/autotest_common.sh@955 -- # kill 3259732 00:12:57.430 17:59:46 -- common/autotest_common.sh@960 -- # wait 3259732 00:12:57.687 17:59:46 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:57.687 17:59:46 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:12:57.687 17:59:46 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:12:57.687 17:59:46 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:57.687 17:59:46 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:57.687 17:59:46 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:57.687 17:59:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:57.687 17:59:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:00.218 17:59:48 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:00.218 00:13:00.218 real 0m6.778s 00:13:00.218 user 0m9.397s 00:13:00.218 sys 0m2.409s 00:13:00.218 17:59:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:00.218 17:59:48 -- common/autotest_common.sh@10 -- # set +x 00:13:00.218 ************************************ 00:13:00.218 END TEST nvmf_multitarget 00:13:00.218 ************************************ 00:13:00.218 17:59:48 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:00.218 17:59:48 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:00.218 17:59:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:00.218 17:59:48 -- common/autotest_common.sh@10 -- # set +x 00:13:00.218 ************************************ 00:13:00.218 START TEST nvmf_rpc 00:13:00.218 ************************************ 00:13:00.218 17:59:48 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:00.218 * Looking for test storage... 
00:13:00.218 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:00.218 17:59:48 -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:00.218 17:59:48 -- nvmf/common.sh@7 -- # uname -s 00:13:00.218 17:59:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:00.218 17:59:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:00.218 17:59:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:00.218 17:59:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:00.218 17:59:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:00.218 17:59:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:00.218 17:59:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:00.218 17:59:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:00.218 17:59:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:00.218 17:59:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:00.218 17:59:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:13:00.218 17:59:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:13:00.218 17:59:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:00.218 17:59:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:00.218 17:59:48 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:00.218 17:59:48 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:00.218 17:59:48 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:00.218 17:59:48 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:00.218 17:59:48 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:00.218 17:59:48 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:00.218 17:59:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.218 17:59:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.218 17:59:48 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.218 17:59:48 -- paths/export.sh@5 -- # export PATH 00:13:00.218 17:59:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.218 17:59:48 -- nvmf/common.sh@47 -- # : 0 00:13:00.218 17:59:48 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:00.218 17:59:48 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:00.218 17:59:48 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:00.218 17:59:48 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:00.218 17:59:48 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:00.218 17:59:48 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:00.218 17:59:48 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:00.218 17:59:48 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:00.218 17:59:48 -- target/rpc.sh@11 -- # loops=5 00:13:00.218 17:59:48 -- target/rpc.sh@23 -- # nvmftestinit 00:13:00.218 17:59:48 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:13:00.218 17:59:48 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:00.218 17:59:48 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:00.218 17:59:48 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:00.218 17:59:48 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:00.218 17:59:48 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:00.218 17:59:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:00.218 17:59:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:00.218 17:59:48 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:13:00.218 17:59:48 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:13:00.218 17:59:48 -- nvmf/common.sh@285 -- # xtrace_disable 00:13:00.218 17:59:48 -- common/autotest_common.sh@10 -- # set +x 00:13:02.118 17:59:50 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:02.118 17:59:50 -- nvmf/common.sh@291 -- # pci_devs=() 00:13:02.118 17:59:50 -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:02.118 17:59:50 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:02.118 17:59:50 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:02.118 17:59:50 -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:02.118 17:59:50 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:02.118 17:59:50 -- nvmf/common.sh@295 -- # net_devs=() 00:13:02.118 17:59:50 -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:02.118 17:59:50 -- nvmf/common.sh@296 -- # e810=() 00:13:02.118 17:59:50 -- nvmf/common.sh@296 -- # local -ga e810 00:13:02.118 
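Worth noting from the common.sh sourcing just above: the host identity that every later nvme connect reuses is generated once, up front. A sketch of the idea; the exact parameter expansion is an assumption, while the values are the ones printed in this run:

NVME_HOSTNQN=$(nvme gen-hostnqn)   # nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
NVME_HOSTID=${NVME_HOSTNQN##*:}    # cd6acfbe-4794-e311-a299-001e67a97b02 (assumed derivation)
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")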
17:59:50 -- nvmf/common.sh@297 -- # x722=()
00:13:02.118 17:59:50 -- nvmf/common.sh@297 -- # local -ga x722
00:13:02.118 17:59:50 -- nvmf/common.sh@298 -- # mlx=()
00:13:02.118 17:59:50 -- nvmf/common.sh@298 -- # local -ga mlx
00:13:02.118 17:59:50 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:13:02.118 17:59:50 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:13:02.118 17:59:50 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:13:02.118 17:59:51 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:13:02.118 17:59:51 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:13:02.118 17:59:51 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:13:02.118 17:59:51 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:13:02.118 17:59:51 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:13:02.118 17:59:51 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:13:02.118 17:59:51 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:13:02.118 17:59:51 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:13:02.118 17:59:51 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:13:02.118 17:59:51 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:13:02.118 17:59:51 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:13:02.118 17:59:51 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:13:02.118 17:59:51 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:13:02.118 17:59:51 -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:13:02.118 17:59:51 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:13:02.118 17:59:51 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)'
00:13:02.118 Found 0000:84:00.0 (0x8086 - 0x159b)
00:13:02.118 17:59:51 -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:13:02.118 17:59:51 -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:13:02.118 17:59:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:13:02.118 17:59:51 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:13:02.118 17:59:51 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:13:02.119 17:59:51 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:13:02.119 17:59:51 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)'
00:13:02.119 Found 0000:84:00.1 (0x8086 - 0x159b)
00:13:02.119 17:59:51 -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:13:02.119 17:59:51 -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:13:02.119 17:59:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:13:02.119 17:59:51 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:13:02.119 17:59:51 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:13:02.119 17:59:51 -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:13:02.119 17:59:51 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:13:02.119 17:59:51 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:13:02.119 17:59:51 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:13:02.119 17:59:51 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:13:02.119 17:59:51 -- nvmf/common.sh@384 -- # (( 1 == 0 ))
00:13:02.119 17:59:51 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:13:02.119 17:59:51 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0'
00:13:02.119 Found net devices under 0000:84:00.0: cvl_0_0
00:13:02.119 17:59:51 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}")
00:13:02.119 17:59:51 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:13:02.119 17:59:51 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:13:02.119 17:59:51 -- nvmf/common.sh@384 -- # (( 1 == 0 ))
00:13:02.119 17:59:51 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:13:02.119 17:59:51 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1'
00:13:02.119 Found net devices under 0000:84:00.1: cvl_0_1
00:13:02.119 17:59:51 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}")
00:13:02.119 17:59:51 -- nvmf/common.sh@393 -- # (( 2 == 0 ))
00:13:02.119 17:59:51 -- nvmf/common.sh@403 -- # is_hw=yes
00:13:02.119 17:59:51 -- nvmf/common.sh@405 -- # [[ yes == yes ]]
00:13:02.119 17:59:51 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]]
00:13:02.119 17:59:51 -- nvmf/common.sh@407 -- # nvmf_tcp_init
00:13:02.119 17:59:51 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:13:02.119 17:59:51 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:13:02.119 17:59:51 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:13:02.119 17:59:51 -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:13:02.119 17:59:51 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:13:02.119 17:59:51 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:13:02.119 17:59:51 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:13:02.119 17:59:51 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:13:02.119 17:59:51 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:13:02.119 17:59:51 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:13:02.119 17:59:51 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:13:02.119 17:59:51 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:13:02.119 17:59:51 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:13:02.378 17:59:51 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:13:02.378 17:59:51 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:13:02.378 17:59:51 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:13:02.378 17:59:51 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:13:02.378 17:59:51 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:13:02.378 17:59:51 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:13:02.378 17:59:51 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:13:02.378 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:13:02.378 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.146 ms
00:13:02.378
00:13:02.378 --- 10.0.0.2 ping statistics ---
00:13:02.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:02.378 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms
00:13:02.378 17:59:51 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:13:02.378 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:13:02.378 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms
00:13:02.378
00:13:02.378 --- 10.0.0.1 ping statistics ---
00:13:02.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:02.378 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms
00:13:02.378 17:59:51 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:13:02.378 17:59:51 -- nvmf/common.sh@411 -- # return 0
00:13:02.378 17:59:51 -- nvmf/common.sh@439 -- # '[' '' == iso ']'
00:13:02.378 17:59:51 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:13:02.378 17:59:51 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]]
00:13:02.378 17:59:51 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]]
00:13:02.378 17:59:51 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:13:02.378 17:59:51 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']'
00:13:02.378 17:59:51 -- nvmf/common.sh@463 -- # modprobe nvme-tcp
00:13:02.378 17:59:51 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF
00:13:02.378 17:59:51 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:13:02.378 17:59:51 -- common/autotest_common.sh@710 -- # xtrace_disable
00:13:02.378 17:59:51 -- common/autotest_common.sh@10 -- # set +x
00:13:02.378 17:59:51 -- nvmf/common.sh@470 -- # nvmfpid=3261985
00:13:02.378 17:59:51 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:13:02.378 17:59:51 -- nvmf/common.sh@471 -- # waitforlisten 3261985
00:13:02.378 17:59:51 -- common/autotest_common.sh@817 -- # '[' -z 3261985 ']'
00:13:02.378 17:59:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:02.378 17:59:51 -- common/autotest_common.sh@822 -- # local max_retries=100
00:13:02.378 17:59:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:02.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:02.378 17:59:51 -- common/autotest_common.sh@826 -- # xtrace_disable
00:13:02.378 17:59:51 -- common/autotest_common.sh@10 -- # set +x
00:13:02.378 [2024-04-15 17:59:51.224608] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization...
00:13:02.378 [2024-04-15 17:59:51.224699] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:13:02.378 EAL: No free 2048 kB hugepages reported on node 1
00:13:02.378 [2024-04-15 17:59:51.306371] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4
00:13:02.636 [2024-04-15 17:59:51.404455] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:13:02.636 [2024-04-15 17:59:51.404517] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:13:02.636 [2024-04-15 17:59:51.404535] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:13:02.636 [2024-04-15 17:59:51.404551] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running.
00:13:02.636 [2024-04-15 17:59:51.404565] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
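Condensed from the nvmf_tcp_init trace above: the target-side port moves into a private network namespace, both ends get addresses on 10.0.0.0/24, the firewall opens the NVMe/TCP port, and a ping in each direction proves the path before the target starts. In outline, with the interface names and addresses of this run:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target NIC into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator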
00:13:02.636 [2024-04-15 17:59:51.407081] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:13:02.636 [2024-04-15 17:59:51.407110] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:13:02.636 [2024-04-15 17:59:51.407162] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:13:02.636 [2024-04-15 17:59:51.407166] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:13:02.636 17:59:51 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:13:02.636 17:59:51 -- common/autotest_common.sh@850 -- # return 0
00:13:02.636 17:59:51 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt
00:13:02.636 17:59:51 -- common/autotest_common.sh@716 -- # xtrace_disable
00:13:02.636 17:59:51 -- common/autotest_common.sh@10 -- # set +x
00:13:02.636 17:59:51 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:13:02.636 17:59:51 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats
00:13:02.636 17:59:51 -- common/autotest_common.sh@549 -- # xtrace_disable
00:13:02.636 17:59:51 -- common/autotest_common.sh@10 -- # set +x
00:13:02.894 17:59:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:13:02.894 17:59:51 -- target/rpc.sh@26 -- # stats='{
00:13:02.894 "tick_rate": 2700000000,
00:13:02.894 "poll_groups": [
00:13:02.894 {
00:13:02.894 "name": "nvmf_tgt_poll_group_0",
00:13:02.894 "admin_qpairs": 0,
00:13:02.894 "io_qpairs": 0,
00:13:02.894 "current_admin_qpairs": 0,
00:13:02.894 "current_io_qpairs": 0,
00:13:02.894 "pending_bdev_io": 0,
00:13:02.894 "completed_nvme_io": 0,
00:13:02.894 "transports": []
00:13:02.894 },
00:13:02.894 {
00:13:02.894 "name": "nvmf_tgt_poll_group_1",
00:13:02.894 "admin_qpairs": 0,
00:13:02.894 "io_qpairs": 0,
00:13:02.894 "current_admin_qpairs": 0,
00:13:02.894 "current_io_qpairs": 0,
00:13:02.894 "pending_bdev_io": 0,
00:13:02.894 "completed_nvme_io": 0,
00:13:02.894 "transports": []
00:13:02.894 },
00:13:02.894 {
00:13:02.894 "name": "nvmf_tgt_poll_group_2",
00:13:02.894 "admin_qpairs": 0,
00:13:02.894 "io_qpairs": 0,
00:13:02.894 "current_admin_qpairs": 0,
00:13:02.894 "current_io_qpairs": 0,
00:13:02.894 "pending_bdev_io": 0,
00:13:02.894 "completed_nvme_io": 0,
00:13:02.894 "transports": []
00:13:02.894 },
00:13:02.894 {
00:13:02.894 "name": "nvmf_tgt_poll_group_3",
00:13:02.894 "admin_qpairs": 0,
00:13:02.894 "io_qpairs": 0,
00:13:02.894 "current_admin_qpairs": 0,
00:13:02.894 "current_io_qpairs": 0,
00:13:02.894 "pending_bdev_io": 0,
00:13:02.894 "completed_nvme_io": 0,
00:13:02.894 "transports": []
00:13:02.894 }
00:13:02.894 ]
00:13:02.894 }'
00:13:02.894 17:59:51 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name'
00:13:02.894 17:59:51 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name'
00:13:02.894 17:59:51 -- target/rpc.sh@15 -- # jq '.poll_groups[].name'
00:13:02.894 17:59:51 -- target/rpc.sh@15 -- # wc -l
00:13:02.894 17:59:51 -- target/rpc.sh@28 -- # (( 4 == 4 ))
00:13:02.894 17:59:51 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]'
00:13:02.894 17:59:51 -- target/rpc.sh@29 -- # [[ null == null ]]
00:13:02.894 17:59:51 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:13:02.894 17:59:51 -- common/autotest_common.sh@549 -- # xtrace_disable
00:13:02.894 17:59:51 -- common/autotest_common.sh@10 -- # set +x
00:13:02.895 [2024-04-15 17:59:51.761659] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:13:02.895 17:59:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:13:02.895 17:59:51 -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats
00:13:02.895 17:59:51 -- common/autotest_common.sh@549 -- # xtrace_disable
00:13:02.895 17:59:51 -- common/autotest_common.sh@10 -- # set +x
00:13:02.895 17:59:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:13:02.895 17:59:51 -- target/rpc.sh@33 -- # stats='{
00:13:02.895 "tick_rate": 2700000000,
00:13:02.895 "poll_groups": [
00:13:02.895 {
00:13:02.895 "name": "nvmf_tgt_poll_group_0",
00:13:02.895 "admin_qpairs": 0,
00:13:02.895 "io_qpairs": 0,
00:13:02.895 "current_admin_qpairs": 0,
00:13:02.895 "current_io_qpairs": 0,
00:13:02.895 "pending_bdev_io": 0,
00:13:02.895 "completed_nvme_io": 0,
00:13:02.895 "transports": [
00:13:02.895 {
00:13:02.895 "trtype": "TCP"
00:13:02.895 }
00:13:02.895 ]
00:13:02.895 },
00:13:02.895 {
00:13:02.895 "name": "nvmf_tgt_poll_group_1",
00:13:02.895 "admin_qpairs": 0,
00:13:02.895 "io_qpairs": 0,
00:13:02.895 "current_admin_qpairs": 0,
00:13:02.895 "current_io_qpairs": 0,
00:13:02.895 "pending_bdev_io": 0,
00:13:02.895 "completed_nvme_io": 0,
00:13:02.895 "transports": [
00:13:02.895 {
00:13:02.895 "trtype": "TCP"
00:13:02.895 }
00:13:02.895 ]
00:13:02.895 },
00:13:02.895 {
00:13:02.895 "name": "nvmf_tgt_poll_group_2",
00:13:02.895 "admin_qpairs": 0,
00:13:02.895 "io_qpairs": 0,
00:13:02.895 "current_admin_qpairs": 0,
00:13:02.895 "current_io_qpairs": 0,
00:13:02.895 "pending_bdev_io": 0,
00:13:02.895 "completed_nvme_io": 0,
00:13:02.895 "transports": [
00:13:02.895 {
00:13:02.895 "trtype": "TCP"
00:13:02.895 }
00:13:02.895 ]
00:13:02.895 },
00:13:02.895 {
00:13:02.895 "name": "nvmf_tgt_poll_group_3",
00:13:02.895 "admin_qpairs": 0,
00:13:02.895 "io_qpairs": 0,
00:13:02.895 "current_admin_qpairs": 0,
00:13:02.895 "current_io_qpairs": 0,
00:13:02.895 "pending_bdev_io": 0,
00:13:02.895 "completed_nvme_io": 0,
00:13:02.895 "transports": [
00:13:02.895 {
00:13:02.895 "trtype": "TCP"
00:13:02.895 }
00:13:02.895 ]
00:13:02.895 }
00:13:02.895 ]
00:13:02.895 }'
00:13:02.895 17:59:51 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs'
00:13:02.895 17:59:51 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs'
00:13:02.895 17:59:51 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs'
00:13:02.895 17:59:51 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:13:02.895 17:59:51 -- target/rpc.sh@35 -- # (( 0 == 0 ))
00:13:02.895 17:59:51 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs'
00:13:02.895 17:59:51 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs'
00:13:02.895 17:59:51 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs'
00:13:03.153 17:59:51 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:13:03.153 17:59:51 -- target/rpc.sh@36 -- # (( 0 == 0 ))
00:13:03.153 17:59:51 -- target/rpc.sh@38 -- # '[' rdma == tcp ']'
00:13:03.153 17:59:51 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64
00:13:03.153 17:59:51 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512
00:13:03.153 17:59:51 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1
00:13:03.153 17:59:51 -- common/autotest_common.sh@549 -- # xtrace_disable
00:13:03.153 17:59:51 -- common/autotest_common.sh@10 -- # set +x
00:13:03.153 Malloc1
00:13:03.153 17:59:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:13:03.153 17:59:51 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:13:03.153 17:59:51 -- common/autotest_common.sh@549 -- # xtrace_disable
00:13:03.153 17:59:51 -- common/autotest_common.sh@10 -- # set +x
00:13:03.153
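The jcount and jsum helpers applied to the stats above are thin jq wrappers, as the rpc.sh xtrace shows; a sketch of what they compute against the captured $stats (the helper bodies are inferred from the trace, not quoted from the script):

stats=$(rpc_cmd nvmf_get_stats)
echo "$stats" | jq '.poll_groups[].name' | wc -l            # jcount '.poll_groups[].name'  -> 4 poll groups
echo "$stats" | jq '.poll_groups[].admin_qpairs' \
  | awk '{s+=$1}END{print s}'                               # jsum '.poll_groups[].admin_qpairs' -> 0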
17:59:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:03.153 17:59:51 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:03.153 17:59:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:03.153 17:59:51 -- common/autotest_common.sh@10 -- # set +x 00:13:03.153 17:59:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:03.153 17:59:51 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:13:03.153 17:59:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:03.153 17:59:51 -- common/autotest_common.sh@10 -- # set +x 00:13:03.153 17:59:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:03.153 17:59:51 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:03.153 17:59:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:03.153 17:59:51 -- common/autotest_common.sh@10 -- # set +x 00:13:03.153 [2024-04-15 17:59:51.956332] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:03.153 17:59:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:03.153 17:59:51 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.2 -s 4420 00:13:03.153 17:59:51 -- common/autotest_common.sh@638 -- # local es=0 00:13:03.153 17:59:51 -- common/autotest_common.sh@640 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.2 -s 4420 00:13:03.153 17:59:51 -- common/autotest_common.sh@626 -- # local arg=nvme 00:13:03.153 17:59:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:03.153 17:59:51 -- common/autotest_common.sh@630 -- # type -t nvme 00:13:03.153 17:59:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:03.154 17:59:51 -- common/autotest_common.sh@632 -- # type -P nvme 00:13:03.154 17:59:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:03.154 17:59:51 -- common/autotest_common.sh@632 -- # arg=/usr/sbin/nvme 00:13:03.154 17:59:51 -- common/autotest_common.sh@632 -- # [[ -x /usr/sbin/nvme ]] 00:13:03.154 17:59:51 -- common/autotest_common.sh@641 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.2 -s 4420 00:13:03.154 [2024-04-15 17:59:51.978877] ctrlr.c: 766:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02' 00:13:03.154 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:03.154 could not add new controller: failed to write to nvme-fabrics device 00:13:03.154 17:59:51 -- common/autotest_common.sh@641 -- # es=1 00:13:03.154 17:59:51 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:13:03.154 17:59:51 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:13:03.154 17:59:51 -- common/autotest_common.sh@665 -- # 
(( !es == 0 )) 00:13:03.154 17:59:51 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:13:03.154 17:59:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:03.154 17:59:51 -- common/autotest_common.sh@10 -- # set +x 00:13:03.154 17:59:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:03.154 17:59:51 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:03.731 17:59:52 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:13:03.731 17:59:52 -- common/autotest_common.sh@1184 -- # local i=0 00:13:03.731 17:59:52 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:13:03.731 17:59:52 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:13:03.731 17:59:52 -- common/autotest_common.sh@1191 -- # sleep 2 00:13:05.631 17:59:54 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:13:05.631 17:59:54 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:13:05.631 17:59:54 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:13:05.631 17:59:54 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:13:05.631 17:59:54 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:13:05.631 17:59:54 -- common/autotest_common.sh@1194 -- # return 0 00:13:05.631 17:59:54 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:05.907 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:05.907 17:59:54 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:05.907 17:59:54 -- common/autotest_common.sh@1205 -- # local i=0 00:13:05.907 17:59:54 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:13:05.907 17:59:54 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:05.907 17:59:54 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:13:05.907 17:59:54 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:05.907 17:59:54 -- common/autotest_common.sh@1217 -- # return 0 00:13:05.907 17:59:54 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:13:05.907 17:59:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:05.907 17:59:54 -- common/autotest_common.sh@10 -- # set +x 00:13:05.907 17:59:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:05.907 17:59:54 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:05.907 17:59:54 -- common/autotest_common.sh@638 -- # local es=0 00:13:05.908 17:59:54 -- common/autotest_common.sh@640 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:05.908 17:59:54 -- common/autotest_common.sh@626 -- # local arg=nvme 00:13:05.908 17:59:54 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:05.908 17:59:54 -- common/autotest_common.sh@630 -- # type -t nvme 00:13:05.908 17:59:54 -- 
common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:05.908 17:59:54 -- common/autotest_common.sh@632 -- # type -P nvme 00:13:05.908 17:59:54 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:05.908 17:59:54 -- common/autotest_common.sh@632 -- # arg=/usr/sbin/nvme 00:13:05.908 17:59:54 -- common/autotest_common.sh@632 -- # [[ -x /usr/sbin/nvme ]] 00:13:05.908 17:59:54 -- common/autotest_common.sh@641 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:05.908 [2024-04-15 17:59:54.666965] ctrlr.c: 766:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02' 00:13:05.908 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:05.908 could not add new controller: failed to write to nvme-fabrics device 00:13:05.908 17:59:54 -- common/autotest_common.sh@641 -- # es=1 00:13:05.908 17:59:54 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:13:05.908 17:59:54 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:13:05.908 17:59:54 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:13:05.908 17:59:54 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:13:05.908 17:59:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:05.908 17:59:54 -- common/autotest_common.sh@10 -- # set +x 00:13:05.908 17:59:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:05.908 17:59:54 -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:06.476 17:59:55 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:13:06.476 17:59:55 -- common/autotest_common.sh@1184 -- # local i=0 00:13:06.476 17:59:55 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:13:06.476 17:59:55 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:13:06.476 17:59:55 -- common/autotest_common.sh@1191 -- # sleep 2 00:13:08.378 17:59:57 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:13:08.378 17:59:57 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:13:08.378 17:59:57 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:13:08.378 17:59:57 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:13:08.378 17:59:57 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:13:08.378 17:59:57 -- common/autotest_common.sh@1194 -- # return 0 00:13:08.378 17:59:57 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:08.636 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:08.636 17:59:57 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:08.636 17:59:57 -- common/autotest_common.sh@1205 -- # local i=0 00:13:08.636 17:59:57 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:13:08.636 17:59:57 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:08.636 17:59:57 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:13:08.636 17:59:57 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:08.636 17:59:57 -- common/autotest_common.sh@1217 -- # return 0 00:13:08.636 17:59:57 -- 
target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:08.636 17:59:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:08.636 17:59:57 -- common/autotest_common.sh@10 -- # set +x 00:13:08.636 17:59:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:08.636 17:59:57 -- target/rpc.sh@81 -- # seq 1 5 00:13:08.636 17:59:57 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:08.636 17:59:57 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:08.636 17:59:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:08.636 17:59:57 -- common/autotest_common.sh@10 -- # set +x 00:13:08.636 17:59:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:08.636 17:59:57 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:08.636 17:59:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:08.636 17:59:57 -- common/autotest_common.sh@10 -- # set +x 00:13:08.636 [2024-04-15 17:59:57.433253] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:08.636 17:59:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:08.636 17:59:57 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:08.636 17:59:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:08.636 17:59:57 -- common/autotest_common.sh@10 -- # set +x 00:13:08.636 17:59:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:08.636 17:59:57 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:08.636 17:59:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:08.636 17:59:57 -- common/autotest_common.sh@10 -- # set +x 00:13:08.636 17:59:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:08.636 17:59:57 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:09.203 17:59:58 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:09.203 17:59:58 -- common/autotest_common.sh@1184 -- # local i=0 00:13:09.203 17:59:58 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:13:09.203 17:59:58 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:13:09.203 17:59:58 -- common/autotest_common.sh@1191 -- # sleep 2 00:13:11.735 18:00:00 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:13:11.735 18:00:00 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:13:11.735 18:00:00 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:13:11.735 18:00:00 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:13:11.735 18:00:00 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:13:11.735 18:00:00 -- common/autotest_common.sh@1194 -- # return 0 00:13:11.735 18:00:00 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:11.735 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:11.735 18:00:00 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:11.735 18:00:00 -- common/autotest_common.sh@1205 -- # local i=0 00:13:11.735 18:00:00 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:13:11.735 18:00:00 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 
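The two failed connects and two successful connects traced above (rpc.sh lines 54 through 73) pin down the host-ACL semantics; in outline, with the nvme connect arguments shortened to "..." in place of the full hostnqn/hostid/transport options used in this run, and NOT being the expect-failure helper from autotest_common.sh:

rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1   # -d: deny hosts not on the list
NOT nvme connect ...        # must fail: "does not allow host"
rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
nvme connect ...            # succeeds once the host is on the list
rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
NOT nvme connect ...        # fails again after removal
rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1   # -e: back to allow-any
nvme connect ...            # succeeds for any host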
00:13:11.735 18:00:00 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:13:11.735 18:00:00 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:11.735 18:00:00 -- common/autotest_common.sh@1217 -- # return 0 00:13:11.735 18:00:00 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:11.735 18:00:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:11.735 18:00:00 -- common/autotest_common.sh@10 -- # set +x 00:13:11.735 18:00:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:11.736 18:00:00 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:11.736 18:00:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:11.736 18:00:00 -- common/autotest_common.sh@10 -- # set +x 00:13:11.736 18:00:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:11.736 18:00:00 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:11.736 18:00:00 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:11.736 18:00:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:11.736 18:00:00 -- common/autotest_common.sh@10 -- # set +x 00:13:11.736 18:00:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:11.736 18:00:00 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:11.736 18:00:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:11.736 18:00:00 -- common/autotest_common.sh@10 -- # set +x 00:13:11.736 [2024-04-15 18:00:00.225365] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:11.736 18:00:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:11.736 18:00:00 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:11.736 18:00:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:11.736 18:00:00 -- common/autotest_common.sh@10 -- # set +x 00:13:11.736 18:00:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:11.736 18:00:00 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:11.736 18:00:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:11.736 18:00:00 -- common/autotest_common.sh@10 -- # set +x 00:13:11.736 18:00:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:11.736 18:00:00 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:11.994 18:00:00 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:11.994 18:00:00 -- common/autotest_common.sh@1184 -- # local i=0 00:13:11.994 18:00:00 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:13:11.994 18:00:00 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:13:11.994 18:00:00 -- common/autotest_common.sh@1191 -- # sleep 2 00:13:14.519 18:00:02 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:13:14.519 18:00:02 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:13:14.519 18:00:02 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:13:14.519 18:00:02 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:13:14.519 18:00:02 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:13:14.519 18:00:02 -- 
common/autotest_common.sh@1194 -- # return 0 00:13:14.519 18:00:02 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:14.519 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:14.519 18:00:02 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:14.519 18:00:02 -- common/autotest_common.sh@1205 -- # local i=0 00:13:14.519 18:00:02 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:13:14.519 18:00:02 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:14.519 18:00:02 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:13:14.519 18:00:02 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:14.519 18:00:02 -- common/autotest_common.sh@1217 -- # return 0 00:13:14.519 18:00:02 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:14.519 18:00:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:14.519 18:00:02 -- common/autotest_common.sh@10 -- # set +x 00:13:14.519 18:00:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:14.519 18:00:02 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:14.519 18:00:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:14.519 18:00:02 -- common/autotest_common.sh@10 -- # set +x 00:13:14.519 18:00:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:14.519 18:00:02 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:14.519 18:00:02 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:14.519 18:00:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:14.519 18:00:02 -- common/autotest_common.sh@10 -- # set +x 00:13:14.519 18:00:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:14.519 18:00:02 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:14.519 18:00:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:14.519 18:00:02 -- common/autotest_common.sh@10 -- # set +x 00:13:14.519 [2024-04-15 18:00:02.986695] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:14.519 18:00:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:14.519 18:00:02 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:14.519 18:00:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:14.519 18:00:02 -- common/autotest_common.sh@10 -- # set +x 00:13:14.519 18:00:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:14.519 18:00:02 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:14.519 18:00:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:14.519 18:00:02 -- common/autotest_common.sh@10 -- # set +x 00:13:14.519 18:00:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:14.519 18:00:03 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:14.776 18:00:03 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:14.776 18:00:03 -- common/autotest_common.sh@1184 -- # local i=0 00:13:14.776 18:00:03 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:13:14.776 18:00:03 -- common/autotest_common.sh@1186 -- 
# [[ -n '' ]] 00:13:14.776 18:00:03 -- common/autotest_common.sh@1191 -- # sleep 2 00:13:16.675 18:00:05 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:13:16.675 18:00:05 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:13:16.675 18:00:05 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:13:16.675 18:00:05 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:13:16.675 18:00:05 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:13:16.675 18:00:05 -- common/autotest_common.sh@1194 -- # return 0 00:13:16.675 18:00:05 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:16.675 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:16.935 18:00:05 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:16.935 18:00:05 -- common/autotest_common.sh@1205 -- # local i=0 00:13:16.935 18:00:05 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:13:16.935 18:00:05 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:16.935 18:00:05 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:13:16.935 18:00:05 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:16.935 18:00:05 -- common/autotest_common.sh@1217 -- # return 0 00:13:16.935 18:00:05 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:16.935 18:00:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:16.935 18:00:05 -- common/autotest_common.sh@10 -- # set +x 00:13:16.935 18:00:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:16.935 18:00:05 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:16.935 18:00:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:16.935 18:00:05 -- common/autotest_common.sh@10 -- # set +x 00:13:16.935 18:00:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:16.935 18:00:05 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:16.935 18:00:05 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:16.935 18:00:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:16.935 18:00:05 -- common/autotest_common.sh@10 -- # set +x 00:13:16.935 18:00:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:16.935 18:00:05 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:16.935 18:00:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:16.935 18:00:05 -- common/autotest_common.sh@10 -- # set +x 00:13:16.935 [2024-04-15 18:00:05.680785] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:16.935 18:00:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:16.935 18:00:05 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:16.935 18:00:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:16.935 18:00:05 -- common/autotest_common.sh@10 -- # set +x 00:13:16.935 18:00:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:16.935 18:00:05 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:16.935 18:00:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:16.935 18:00:05 -- common/autotest_common.sh@10 -- # set +x 00:13:16.935 18:00:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:16.935 
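Each connect in these loops is followed by the same polling helper; a sketch of its shape, reconstructed from the waitforserial xtrace (the real helper in autotest_common.sh may differ in detail):

waitforserial() {
  local serial=$1 i=0 nvme_device_counter=1 nvme_devices=0
  while (( i++ <= 15 )); do
    sleep 2
    # count block devices whose SERIAL column matches the subsystem serial
    nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
    (( nvme_devices == nvme_device_counter )) && return 0
  done
  return 1
}
waitforserial SPDKISFASTANDAWESOME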
18:00:05 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:17.502 18:00:06 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:17.502 18:00:06 -- common/autotest_common.sh@1184 -- # local i=0 00:13:17.502 18:00:06 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:13:17.502 18:00:06 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:13:17.502 18:00:06 -- common/autotest_common.sh@1191 -- # sleep 2 00:13:19.404 18:00:08 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:13:19.404 18:00:08 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:13:19.404 18:00:08 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:13:19.404 18:00:08 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:13:19.404 18:00:08 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:13:19.404 18:00:08 -- common/autotest_common.sh@1194 -- # return 0 00:13:19.404 18:00:08 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:19.662 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:19.662 18:00:08 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:19.662 18:00:08 -- common/autotest_common.sh@1205 -- # local i=0 00:13:19.662 18:00:08 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:13:19.662 18:00:08 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:19.662 18:00:08 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:13:19.662 18:00:08 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:19.662 18:00:08 -- common/autotest_common.sh@1217 -- # return 0 00:13:19.662 18:00:08 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:19.662 18:00:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:19.662 18:00:08 -- common/autotest_common.sh@10 -- # set +x 00:13:19.662 18:00:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:19.662 18:00:08 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:19.662 18:00:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:19.662 18:00:08 -- common/autotest_common.sh@10 -- # set +x 00:13:19.662 18:00:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:19.662 18:00:08 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:19.662 18:00:08 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:19.662 18:00:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:19.662 18:00:08 -- common/autotest_common.sh@10 -- # set +x 00:13:19.662 18:00:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:19.662 18:00:08 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:19.662 18:00:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:19.662 18:00:08 -- common/autotest_common.sh@10 -- # set +x 00:13:19.662 [2024-04-15 18:00:08.443451] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:19.662 18:00:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:19.662 18:00:08 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:19.662 
18:00:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:19.662 18:00:08 -- common/autotest_common.sh@10 -- # set +x 00:13:19.662 18:00:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:19.662 18:00:08 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:19.662 18:00:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:19.662 18:00:08 -- common/autotest_common.sh@10 -- # set +x 00:13:19.662 18:00:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:19.662 18:00:08 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:20.229 18:00:09 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:20.229 18:00:09 -- common/autotest_common.sh@1184 -- # local i=0 00:13:20.229 18:00:09 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:13:20.229 18:00:09 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:13:20.229 18:00:09 -- common/autotest_common.sh@1191 -- # sleep 2 00:13:22.762 18:00:11 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:13:22.762 18:00:11 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:13:22.762 18:00:11 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:13:22.762 18:00:11 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:13:22.762 18:00:11 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:13:22.762 18:00:11 -- common/autotest_common.sh@1194 -- # return 0 00:13:22.762 18:00:11 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:22.762 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:22.762 18:00:11 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:22.762 18:00:11 -- common/autotest_common.sh@1205 -- # local i=0 00:13:22.762 18:00:11 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:13:22.762 18:00:11 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:22.762 18:00:11 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:13:22.762 18:00:11 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:22.762 18:00:11 -- common/autotest_common.sh@1217 -- # return 0 00:13:22.762 18:00:11 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:22.762 18:00:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:22.762 18:00:11 -- common/autotest_common.sh@10 -- # set +x 00:13:22.762 18:00:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:22.762 18:00:11 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:22.762 18:00:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:22.762 18:00:11 -- common/autotest_common.sh@10 -- # set +x 00:13:22.762 18:00:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:22.762 18:00:11 -- target/rpc.sh@99 -- # seq 1 5 00:13:22.762 18:00:11 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:22.762 18:00:11 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:22.762 18:00:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:22.762 18:00:11 -- common/autotest_common.sh@10 -- # set +x 00:13:22.762 18:00:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:22.762 18:00:11 
-- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:22.762 18:00:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:22.762 18:00:11 -- common/autotest_common.sh@10 -- # set +x 00:13:22.762 [2024-04-15 18:00:11.203958] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:22.762 18:00:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:22.762 18:00:11 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:22.762 18:00:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:22.762 18:00:11 -- common/autotest_common.sh@10 -- # set +x 00:13:22.762 18:00:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:22.762 18:00:11 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:22.762 18:00:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:22.762 18:00:11 -- common/autotest_common.sh@10 -- # set +x 00:13:22.762 18:00:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:22.762 18:00:11 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:22.762 18:00:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:22.762 18:00:11 -- common/autotest_common.sh@10 -- # set +x 00:13:22.762 18:00:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:22.762 18:00:11 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:22.762 18:00:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:22.762 18:00:11 -- common/autotest_common.sh@10 -- # set +x 00:13:22.762 18:00:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:22.762 18:00:11 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:22.762 18:00:11 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:22.762 18:00:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:22.762 18:00:11 -- common/autotest_common.sh@10 -- # set +x 00:13:22.762 18:00:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:22.762 18:00:11 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:22.762 18:00:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:22.762 18:00:11 -- common/autotest_common.sh@10 -- # set +x 00:13:22.762 [2024-04-15 18:00:11.252050] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:22.762 18:00:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:22.762 18:00:11 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:22.762 18:00:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:22.762 18:00:11 -- common/autotest_common.sh@10 -- # set +x 00:13:22.762 18:00:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:22.762 18:00:11 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:22.762 18:00:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:22.762 18:00:11 -- common/autotest_common.sh@10 -- # set +x 00:13:22.762 18:00:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:22.762 18:00:11 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:22.762 18:00:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:22.762 18:00:11 -- 
common/autotest_common.sh@10 -- # set +x 00:13:22.762 18:00:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:22.762 18:00:11 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:22.762 18:00:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:22.762 18:00:11 -- common/autotest_common.sh@10 -- # set +x 00:13:22.762 18:00:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:22.762 18:00:11 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:22.762 18:00:11 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:22.762 18:00:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:22.762 18:00:11 -- common/autotest_common.sh@10 -- # set +x 00:13:22.762 18:00:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:22.762 18:00:11 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:22.762 18:00:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:22.763 18:00:11 -- common/autotest_common.sh@10 -- # set +x 00:13:22.763 [2024-04-15 18:00:11.300225] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:22.763 18:00:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:22.763 18:00:11 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:22.763 18:00:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:22.763 18:00:11 -- common/autotest_common.sh@10 -- # set +x 00:13:22.763 18:00:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:22.763 18:00:11 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:22.763 18:00:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:22.763 18:00:11 -- common/autotest_common.sh@10 -- # set +x 00:13:22.763 18:00:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:22.763 18:00:11 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:22.763 18:00:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:22.763 18:00:11 -- common/autotest_common.sh@10 -- # set +x 00:13:22.763 18:00:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:22.763 18:00:11 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:22.763 18:00:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:22.763 18:00:11 -- common/autotest_common.sh@10 -- # set +x 00:13:22.763 18:00:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:22.763 18:00:11 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:22.763 18:00:11 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:22.763 18:00:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:22.763 18:00:11 -- common/autotest_common.sh@10 -- # set +x 00:13:22.763 18:00:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:22.763 18:00:11 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:22.763 18:00:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:22.763 18:00:11 -- common/autotest_common.sh@10 -- # set +x 00:13:22.763 [2024-04-15 18:00:11.348397] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:22.763 18:00:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:22.763 
18:00:11 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:22.763 18:00:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:22.763 18:00:11 -- common/autotest_common.sh@10 -- # set +x 00:13:22.763 18:00:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:22.763 18:00:11 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:22.763 18:00:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:22.763 18:00:11 -- common/autotest_common.sh@10 -- # set +x 00:13:22.763 18:00:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:22.763 18:00:11 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:22.763 18:00:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:22.763 18:00:11 -- common/autotest_common.sh@10 -- # set +x 00:13:22.763 18:00:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:22.763 18:00:11 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:22.763 18:00:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:22.763 18:00:11 -- common/autotest_common.sh@10 -- # set +x 00:13:22.763 18:00:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:22.763 18:00:11 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:22.763 18:00:11 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:22.763 18:00:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:22.763 18:00:11 -- common/autotest_common.sh@10 -- # set +x 00:13:22.763 18:00:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:22.763 18:00:11 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:22.763 18:00:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:22.763 18:00:11 -- common/autotest_common.sh@10 -- # set +x 00:13:22.763 [2024-04-15 18:00:11.396567] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:22.763 18:00:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:22.763 18:00:11 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:22.763 18:00:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:22.763 18:00:11 -- common/autotest_common.sh@10 -- # set +x 00:13:22.763 18:00:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:22.763 18:00:11 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:22.763 18:00:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:22.763 18:00:11 -- common/autotest_common.sh@10 -- # set +x 00:13:22.763 18:00:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:22.763 18:00:11 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:22.763 18:00:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:22.763 18:00:11 -- common/autotest_common.sh@10 -- # set +x 00:13:22.763 18:00:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:22.763 18:00:11 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:22.763 18:00:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:22.763 18:00:11 -- common/autotest_common.sh@10 -- # set +x 00:13:22.763 18:00:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:22.763 18:00:11 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 
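
[Note] Each of the five iterations traced above runs the same subsystem lifecycle against cnode1. Written out once as plain shell (a sketch assembled from the rpc_cmd lines in the trace; the rpc.py path is the one used elsewhere in this workspace, and the loop count of 5 comes from the `seq 1 5` above):

    # One pass of the create/teardown loop exercised above (sketch).
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    for i in $(seq 1 5); do
        $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
        $rpc nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done
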
00:13:22.763 18:00:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:22.763 18:00:11 -- common/autotest_common.sh@10 -- # set +x 00:13:22.763 18:00:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:22.763 18:00:11 -- target/rpc.sh@110 -- # stats='{ 00:13:22.763 "tick_rate": 2700000000, 00:13:22.763 "poll_groups": [ 00:13:22.763 { 00:13:22.763 "name": "nvmf_tgt_poll_group_0", 00:13:22.763 "admin_qpairs": 2, 00:13:22.763 "io_qpairs": 84, 00:13:22.763 "current_admin_qpairs": 0, 00:13:22.763 "current_io_qpairs": 0, 00:13:22.763 "pending_bdev_io": 0, 00:13:22.763 "completed_nvme_io": 185, 00:13:22.763 "transports": [ 00:13:22.763 { 00:13:22.763 "trtype": "TCP" 00:13:22.763 } 00:13:22.763 ] 00:13:22.763 }, 00:13:22.763 { 00:13:22.763 "name": "nvmf_tgt_poll_group_1", 00:13:22.763 "admin_qpairs": 2, 00:13:22.763 "io_qpairs": 84, 00:13:22.763 "current_admin_qpairs": 0, 00:13:22.763 "current_io_qpairs": 0, 00:13:22.763 "pending_bdev_io": 0, 00:13:22.763 "completed_nvme_io": 186, 00:13:22.763 "transports": [ 00:13:22.763 { 00:13:22.763 "trtype": "TCP" 00:13:22.763 } 00:13:22.763 ] 00:13:22.763 }, 00:13:22.763 { 00:13:22.763 "name": "nvmf_tgt_poll_group_2", 00:13:22.763 "admin_qpairs": 1, 00:13:22.763 "io_qpairs": 84, 00:13:22.763 "current_admin_qpairs": 0, 00:13:22.763 "current_io_qpairs": 0, 00:13:22.763 "pending_bdev_io": 0, 00:13:22.763 "completed_nvme_io": 162, 00:13:22.763 "transports": [ 00:13:22.763 { 00:13:22.763 "trtype": "TCP" 00:13:22.763 } 00:13:22.763 ] 00:13:22.763 }, 00:13:22.763 { 00:13:22.763 "name": "nvmf_tgt_poll_group_3", 00:13:22.763 "admin_qpairs": 2, 00:13:22.763 "io_qpairs": 84, 00:13:22.763 "current_admin_qpairs": 0, 00:13:22.763 "current_io_qpairs": 0, 00:13:22.763 "pending_bdev_io": 0, 00:13:22.763 "completed_nvme_io": 153, 00:13:22.763 "transports": [ 00:13:22.763 { 00:13:22.763 "trtype": "TCP" 00:13:22.763 } 00:13:22.763 ] 00:13:22.763 } 00:13:22.763 ] 00:13:22.763 }' 00:13:22.763 18:00:11 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:13:22.763 18:00:11 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:22.763 18:00:11 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:22.763 18:00:11 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:22.763 18:00:11 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:13:22.763 18:00:11 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:22.763 18:00:11 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:22.763 18:00:11 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:22.763 18:00:11 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:22.763 18:00:11 -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:13:22.763 18:00:11 -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:13:22.763 18:00:11 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:22.763 18:00:11 -- target/rpc.sh@123 -- # nvmftestfini 00:13:22.763 18:00:11 -- nvmf/common.sh@477 -- # nvmfcleanup 00:13:22.763 18:00:11 -- nvmf/common.sh@117 -- # sync 00:13:22.763 18:00:11 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:22.763 18:00:11 -- nvmf/common.sh@120 -- # set +e 00:13:22.763 18:00:11 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:22.763 18:00:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:22.763 rmmod nvme_tcp 00:13:22.763 rmmod nvme_fabrics 00:13:22.763 rmmod nvme_keyring 00:13:22.763 18:00:11 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:22.763 18:00:11 -- nvmf/common.sh@124 -- # set -e 00:13:22.763 18:00:11 -- 
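
[Note] The jsum calls above fold one numeric field across all four poll groups of the captured nvmf_get_stats JSON. From its xtrace the helper is essentially a jq-plus-awk sum (a sketch; the trace does not show where jq reads from, so feeding it the captured $stats string is an assumption):

    # Sum a numeric jq filter over the captured stats JSON (sketch).
    jsum() {
        local filter=$1
        echo "$stats" | jq "$filter" | awk '{s+=$1} END {print s}'
    }
    # For the dump above: admin_qpairs 2+2+1+2 = 7 and io_qpairs 4*84 = 336,
    # matching the (( 7 > 0 )) and (( 336 > 0 )) checks in the trace.
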
nvmf/common.sh@125 -- # return 0 00:13:22.763 18:00:11 -- nvmf/common.sh@478 -- # '[' -n 3261985 ']' 00:13:22.763 18:00:11 -- nvmf/common.sh@479 -- # killprocess 3261985 00:13:22.763 18:00:11 -- common/autotest_common.sh@936 -- # '[' -z 3261985 ']' 00:13:22.763 18:00:11 -- common/autotest_common.sh@940 -- # kill -0 3261985 00:13:22.763 18:00:11 -- common/autotest_common.sh@941 -- # uname 00:13:22.763 18:00:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:22.763 18:00:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3261985 00:13:22.763 18:00:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:22.763 18:00:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:22.763 18:00:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3261985' 00:13:22.763 killing process with pid 3261985 00:13:22.763 18:00:11 -- common/autotest_common.sh@955 -- # kill 3261985 00:13:22.763 18:00:11 -- common/autotest_common.sh@960 -- # wait 3261985 00:13:23.031 18:00:11 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:23.031 18:00:11 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:13:23.031 18:00:11 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:13:23.031 18:00:11 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:23.031 18:00:11 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:23.031 18:00:11 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:23.031 18:00:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:23.031 18:00:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:25.005 18:00:13 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:25.005 00:13:25.005 real 0m25.180s 00:13:25.005 user 1m21.001s 00:13:25.005 sys 0m4.304s 00:13:25.005 18:00:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:25.005 18:00:13 -- common/autotest_common.sh@10 -- # set +x 00:13:25.005 ************************************ 00:13:25.005 END TEST nvmf_rpc 00:13:25.005 ************************************ 00:13:25.005 18:00:13 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:25.005 18:00:13 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:25.005 18:00:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:25.005 18:00:13 -- common/autotest_common.sh@10 -- # set +x 00:13:25.266 ************************************ 00:13:25.266 START TEST nvmf_invalid 00:13:25.266 ************************************ 00:13:25.266 18:00:14 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:25.266 * Looking for test storage... 
00:13:25.266 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:25.266 18:00:14 -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:25.266 18:00:14 -- nvmf/common.sh@7 -- # uname -s 00:13:25.266 18:00:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:25.266 18:00:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:25.266 18:00:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:25.266 18:00:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:25.266 18:00:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:25.266 18:00:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:25.266 18:00:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:25.266 18:00:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:25.266 18:00:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:25.266 18:00:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:25.266 18:00:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:13:25.266 18:00:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:13:25.266 18:00:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:25.266 18:00:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:25.266 18:00:14 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:25.266 18:00:14 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:25.266 18:00:14 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:25.266 18:00:14 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:25.266 18:00:14 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:25.267 18:00:14 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:25.267 18:00:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.267 18:00:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.267 18:00:14 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.267 18:00:14 -- paths/export.sh@5 -- # export PATH 00:13:25.267 18:00:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.267 18:00:14 -- nvmf/common.sh@47 -- # : 0 00:13:25.267 18:00:14 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:25.267 18:00:14 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:25.267 18:00:14 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:25.267 18:00:14 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:25.267 18:00:14 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:25.267 18:00:14 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:25.267 18:00:14 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:25.267 18:00:14 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:25.267 18:00:14 -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:25.267 18:00:14 -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:25.267 18:00:14 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:25.267 18:00:14 -- target/invalid.sh@14 -- # target=foobar 00:13:25.267 18:00:14 -- target/invalid.sh@16 -- # RANDOM=0 00:13:25.267 18:00:14 -- target/invalid.sh@34 -- # nvmftestinit 00:13:25.267 18:00:14 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:13:25.267 18:00:14 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:25.267 18:00:14 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:25.267 18:00:14 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:25.267 18:00:14 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:25.267 18:00:14 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:25.267 18:00:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:25.267 18:00:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:25.267 18:00:14 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:13:25.267 18:00:14 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:13:25.267 18:00:14 -- nvmf/common.sh@285 -- # xtrace_disable 00:13:25.267 18:00:14 -- common/autotest_common.sh@10 -- # set +x 00:13:27.801 18:00:16 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:27.801 18:00:16 -- nvmf/common.sh@291 -- # pci_devs=() 00:13:27.801 18:00:16 -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:27.801 18:00:16 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:27.801 18:00:16 -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:27.801 18:00:16 -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:27.801 18:00:16 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:27.801 18:00:16 -- nvmf/common.sh@295 -- # net_devs=() 00:13:27.801 18:00:16 -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:27.801 18:00:16 -- nvmf/common.sh@296 -- # e810=() 00:13:27.801 18:00:16 -- nvmf/common.sh@296 -- # local -ga e810 00:13:27.801 18:00:16 -- nvmf/common.sh@297 -- # x722=() 00:13:27.801 18:00:16 -- nvmf/common.sh@297 -- # local -ga x722 00:13:27.801 18:00:16 -- nvmf/common.sh@298 -- # mlx=() 00:13:27.801 18:00:16 -- nvmf/common.sh@298 -- # local -ga mlx 00:13:27.801 18:00:16 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:27.801 18:00:16 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:27.801 18:00:16 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:27.801 18:00:16 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:27.801 18:00:16 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:27.801 18:00:16 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:27.801 18:00:16 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:27.801 18:00:16 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:27.801 18:00:16 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:27.801 18:00:16 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:27.801 18:00:16 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:27.801 18:00:16 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:27.801 18:00:16 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:27.801 18:00:16 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:27.801 18:00:16 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:27.801 18:00:16 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:27.801 18:00:16 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:27.801 18:00:16 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:27.801 18:00:16 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:13:27.801 Found 0000:84:00.0 (0x8086 - 0x159b) 00:13:27.801 18:00:16 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:27.801 18:00:16 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:27.801 18:00:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:27.801 18:00:16 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:27.801 18:00:16 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:27.801 18:00:16 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:27.801 18:00:16 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:13:27.801 Found 0000:84:00.1 (0x8086 - 0x159b) 00:13:27.801 18:00:16 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:27.801 18:00:16 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:27.801 18:00:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:27.801 18:00:16 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:27.801 18:00:16 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:27.801 18:00:16 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:27.801 18:00:16 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:27.801 18:00:16 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:27.801 18:00:16 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:27.801 
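
[Note] The scan that starts here maps each matched E810 PCI function to the kernel net device registered under it in sysfs. The loop body, gathered from the surrounding xtrace:

    # Collect net device names for each matched PCI function (as traced).
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
        pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the name
        net_devs+=("${pci_net_devs[@]}")
    done
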
18:00:16 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:27.801 18:00:16 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:27.801 18:00:16 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:27.801 18:00:16 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:13:27.801 Found net devices under 0000:84:00.0: cvl_0_0 00:13:27.801 18:00:16 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:27.801 18:00:16 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:27.801 18:00:16 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:27.801 18:00:16 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:27.801 18:00:16 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:27.801 18:00:16 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:13:27.801 Found net devices under 0000:84:00.1: cvl_0_1 00:13:27.801 18:00:16 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:27.801 18:00:16 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:13:27.801 18:00:16 -- nvmf/common.sh@403 -- # is_hw=yes 00:13:27.801 18:00:16 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:13:27.801 18:00:16 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:13:27.801 18:00:16 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:13:27.801 18:00:16 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:27.801 18:00:16 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:27.801 18:00:16 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:27.801 18:00:16 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:27.801 18:00:16 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:27.801 18:00:16 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:27.801 18:00:16 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:27.801 18:00:16 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:27.801 18:00:16 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:27.801 18:00:16 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:27.801 18:00:16 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:27.801 18:00:16 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:27.801 18:00:16 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:27.801 18:00:16 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:27.801 18:00:16 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:27.801 18:00:16 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:27.801 18:00:16 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:27.801 18:00:16 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:27.801 18:00:16 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:27.801 18:00:16 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:27.801 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:27.801 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.253 ms 00:13:27.801 00:13:27.801 --- 10.0.0.2 ping statistics --- 00:13:27.801 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:27.801 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:13:27.801 18:00:16 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:27.801 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
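
[Note] The nvmf_tcp_init sequence above splits the two ports of one physical NIC across network namespaces so a single host can act as both target and initiator: cvl_0_0 (10.0.0.2) moves into cvl_0_0_ns_spdk for the target, while cvl_0_1 (10.0.0.1) stays in the root namespace for the initiator. Collected from the trace (the ping whose output continues below verifies the path):

    # Target/initiator namespace split used by this run (commands as traced).
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2    # root ns -> target ns
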
00:13:27.801 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.178 ms 00:13:27.801 00:13:27.801 --- 10.0.0.1 ping statistics --- 00:13:27.801 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:27.801 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:13:27.801 18:00:16 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:27.801 18:00:16 -- nvmf/common.sh@411 -- # return 0 00:13:27.801 18:00:16 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:27.801 18:00:16 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:27.801 18:00:16 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:13:27.801 18:00:16 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:13:27.801 18:00:16 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:27.801 18:00:16 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:13:27.801 18:00:16 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:13:27.801 18:00:16 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:27.801 18:00:16 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:27.801 18:00:16 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:27.801 18:00:16 -- common/autotest_common.sh@10 -- # set +x 00:13:27.801 18:00:16 -- nvmf/common.sh@470 -- # nvmfpid=3267113 00:13:27.801 18:00:16 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:27.802 18:00:16 -- nvmf/common.sh@471 -- # waitforlisten 3267113 00:13:27.802 18:00:16 -- common/autotest_common.sh@817 -- # '[' -z 3267113 ']' 00:13:27.802 18:00:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:27.802 18:00:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:27.802 18:00:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:27.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:27.802 18:00:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:27.802 18:00:16 -- common/autotest_common.sh@10 -- # set +x 00:13:27.802 [2024-04-15 18:00:16.677010] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:13:27.802 [2024-04-15 18:00:16.677108] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:27.802 EAL: No free 2048 kB hugepages reported on node 1 00:13:27.802 [2024-04-15 18:00:16.754046] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:28.060 [2024-04-15 18:00:16.852733] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:28.060 [2024-04-15 18:00:16.852802] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:28.060 [2024-04-15 18:00:16.852828] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:28.060 [2024-04-15 18:00:16.852850] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:28.060 [2024-04-15 18:00:16.852869] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
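
[Note] Because the target-side port now lives inside cvl_0_0_ns_spdk, nvmfappstart wraps the nvmf_tgt binary in `ip netns exec` and then waits for its RPC socket. In outline (a sketch around the exact command traced above; waitforlisten's internals are not shown in this excerpt):

    # Launch the SPDK target inside the namespace, then wait for the RPC socket.
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!                 # 3267113 in this run
    waitforlisten "$nvmfpid"   # blocks until /var/tmp/spdk.sock answers
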
00:13:28.060 [2024-04-15 18:00:16.852975] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:28.060 [2024-04-15 18:00:16.853035] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:28.060 [2024-04-15 18:00:16.853093] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:28.060 [2024-04-15 18:00:16.853100] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:28.060 18:00:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:28.060 18:00:16 -- common/autotest_common.sh@850 -- # return 0 00:13:28.060 18:00:16 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:28.060 18:00:16 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:28.060 18:00:16 -- common/autotest_common.sh@10 -- # set +x 00:13:28.318 18:00:17 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:28.318 18:00:17 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:28.318 18:00:17 -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode9674 00:13:28.577 [2024-04-15 18:00:17.327938] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:28.577 18:00:17 -- target/invalid.sh@40 -- # out='request: 00:13:28.577 { 00:13:28.577 "nqn": "nqn.2016-06.io.spdk:cnode9674", 00:13:28.577 "tgt_name": "foobar", 00:13:28.577 "method": "nvmf_create_subsystem", 00:13:28.577 "req_id": 1 00:13:28.577 } 00:13:28.577 Got JSON-RPC error response 00:13:28.577 response: 00:13:28.577 { 00:13:28.577 "code": -32603, 00:13:28.577 "message": "Unable to find target foobar" 00:13:28.577 }' 00:13:28.577 18:00:17 -- target/invalid.sh@41 -- # [[ request: 00:13:28.577 { 00:13:28.577 "nqn": "nqn.2016-06.io.spdk:cnode9674", 00:13:28.577 "tgt_name": "foobar", 00:13:28.577 "method": "nvmf_create_subsystem", 00:13:28.577 "req_id": 1 00:13:28.577 } 00:13:28.577 Got JSON-RPC error response 00:13:28.577 response: 00:13:28.577 { 00:13:28.577 "code": -32603, 00:13:28.577 "message": "Unable to find target foobar" 00:13:28.577 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:28.577 18:00:17 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:28.577 18:00:17 -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode27928 00:13:28.835 [2024-04-15 18:00:17.673141] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27928: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:28.835 18:00:17 -- target/invalid.sh@45 -- # out='request: 00:13:28.835 { 00:13:28.835 "nqn": "nqn.2016-06.io.spdk:cnode27928", 00:13:28.835 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:28.835 "method": "nvmf_create_subsystem", 00:13:28.835 "req_id": 1 00:13:28.835 } 00:13:28.835 Got JSON-RPC error response 00:13:28.835 response: 00:13:28.835 { 00:13:28.835 "code": -32602, 00:13:28.835 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:28.835 }' 00:13:28.835 18:00:17 -- target/invalid.sh@46 -- # [[ request: 00:13:28.835 { 00:13:28.835 "nqn": "nqn.2016-06.io.spdk:cnode27928", 00:13:28.835 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:28.835 "method": "nvmf_create_subsystem", 00:13:28.835 "req_id": 1 00:13:28.835 } 00:13:28.835 Got JSON-RPC error response 00:13:28.835 response: 00:13:28.835 { 
00:13:28.835 "code": -32602, 00:13:28.835 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:28.835 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:28.835 18:00:17 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:28.835 18:00:17 -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode27415 00:13:29.093 [2024-04-15 18:00:17.970125] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27415: invalid model number 'SPDK_Controller' 00:13:29.093 18:00:17 -- target/invalid.sh@50 -- # out='request: 00:13:29.093 { 00:13:29.093 "nqn": "nqn.2016-06.io.spdk:cnode27415", 00:13:29.093 "model_number": "SPDK_Controller\u001f", 00:13:29.093 "method": "nvmf_create_subsystem", 00:13:29.093 "req_id": 1 00:13:29.093 } 00:13:29.093 Got JSON-RPC error response 00:13:29.093 response: 00:13:29.093 { 00:13:29.093 "code": -32602, 00:13:29.093 "message": "Invalid MN SPDK_Controller\u001f" 00:13:29.093 }' 00:13:29.093 18:00:17 -- target/invalid.sh@51 -- # [[ request: 00:13:29.093 { 00:13:29.093 "nqn": "nqn.2016-06.io.spdk:cnode27415", 00:13:29.093 "model_number": "SPDK_Controller\u001f", 00:13:29.093 "method": "nvmf_create_subsystem", 00:13:29.093 "req_id": 1 00:13:29.093 } 00:13:29.093 Got JSON-RPC error response 00:13:29.093 response: 00:13:29.093 { 00:13:29.093 "code": -32602, 00:13:29.093 "message": "Invalid MN SPDK_Controller\u001f" 00:13:29.093 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:29.093 18:00:17 -- target/invalid.sh@54 -- # gen_random_s 21 00:13:29.093 18:00:17 -- target/invalid.sh@19 -- # local length=21 ll 00:13:29.093 18:00:17 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:29.093 18:00:17 -- target/invalid.sh@21 -- # local chars 00:13:29.093 18:00:17 -- target/invalid.sh@22 -- # local string 00:13:29.093 18:00:17 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:29.093 18:00:17 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.093 18:00:17 -- target/invalid.sh@25 -- # printf %x 41 00:13:29.093 18:00:17 -- target/invalid.sh@25 -- # echo -e '\x29' 00:13:29.093 18:00:18 -- target/invalid.sh@25 -- # string+=')' 00:13:29.093 18:00:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.093 18:00:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.093 18:00:18 -- target/invalid.sh@25 -- # printf %x 53 00:13:29.093 18:00:18 -- target/invalid.sh@25 -- # echo -e '\x35' 00:13:29.093 18:00:18 -- target/invalid.sh@25 -- # string+=5 00:13:29.093 18:00:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.093 18:00:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.093 18:00:18 -- target/invalid.sh@25 -- # printf %x 113 00:13:29.093 18:00:18 -- target/invalid.sh@25 -- # echo -e '\x71' 00:13:29.093 18:00:18 -- target/invalid.sh@25 -- # string+=q 00:13:29.093 18:00:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.093 18:00:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.093 18:00:18 -- target/invalid.sh@25 -- # printf %x 115 00:13:29.093 18:00:18 -- 
target/invalid.sh@25 -- # echo -e '\x73' 00:13:29.093 18:00:18 -- target/invalid.sh@25 -- # string+=s 00:13:29.093 18:00:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.093 18:00:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.093 18:00:18 -- target/invalid.sh@25 -- # printf %x 68 00:13:29.093 18:00:18 -- target/invalid.sh@25 -- # echo -e '\x44' 00:13:29.093 18:00:18 -- target/invalid.sh@25 -- # string+=D 00:13:29.093 18:00:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.093 18:00:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.093 18:00:18 -- target/invalid.sh@25 -- # printf %x 120 00:13:29.093 18:00:18 -- target/invalid.sh@25 -- # echo -e '\x78' 00:13:29.093 18:00:18 -- target/invalid.sh@25 -- # string+=x 00:13:29.093 18:00:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.093 18:00:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.093 18:00:18 -- target/invalid.sh@25 -- # printf %x 111 00:13:29.093 18:00:18 -- target/invalid.sh@25 -- # echo -e '\x6f' 00:13:29.093 18:00:18 -- target/invalid.sh@25 -- # string+=o 00:13:29.093 18:00:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.093 18:00:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.093 18:00:18 -- target/invalid.sh@25 -- # printf %x 53 00:13:29.093 18:00:18 -- target/invalid.sh@25 -- # echo -e '\x35' 00:13:29.093 18:00:18 -- target/invalid.sh@25 -- # string+=5 00:13:29.093 18:00:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.093 18:00:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.093 18:00:18 -- target/invalid.sh@25 -- # printf %x 81 00:13:29.093 18:00:18 -- target/invalid.sh@25 -- # echo -e '\x51' 00:13:29.093 18:00:18 -- target/invalid.sh@25 -- # string+=Q 00:13:29.094 18:00:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.094 18:00:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.094 18:00:18 -- target/invalid.sh@25 -- # printf %x 121 00:13:29.094 18:00:18 -- target/invalid.sh@25 -- # echo -e '\x79' 00:13:29.094 18:00:18 -- target/invalid.sh@25 -- # string+=y 00:13:29.094 18:00:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.094 18:00:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.094 18:00:18 -- target/invalid.sh@25 -- # printf %x 114 00:13:29.094 18:00:18 -- target/invalid.sh@25 -- # echo -e '\x72' 00:13:29.094 18:00:18 -- target/invalid.sh@25 -- # string+=r 00:13:29.094 18:00:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.094 18:00:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.094 18:00:18 -- target/invalid.sh@25 -- # printf %x 61 00:13:29.352 18:00:18 -- target/invalid.sh@25 -- # echo -e '\x3d' 00:13:29.352 18:00:18 -- target/invalid.sh@25 -- # string+== 00:13:29.352 18:00:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.352 18:00:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.352 18:00:18 -- target/invalid.sh@25 -- # printf %x 89 00:13:29.352 18:00:18 -- target/invalid.sh@25 -- # echo -e '\x59' 00:13:29.352 18:00:18 -- target/invalid.sh@25 -- # string+=Y 00:13:29.352 18:00:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.352 18:00:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.352 18:00:18 -- target/invalid.sh@25 -- # printf %x 117 00:13:29.352 18:00:18 -- target/invalid.sh@25 -- # echo -e '\x75' 00:13:29.352 18:00:18 -- target/invalid.sh@25 -- # string+=u 00:13:29.352 18:00:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.352 18:00:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.352 18:00:18 -- target/invalid.sh@25 -- # printf %x 122 00:13:29.352 18:00:18 -- 
target/invalid.sh@25 -- # echo -e '\x7a' 00:13:29.352 18:00:18 -- target/invalid.sh@25 -- # string+=z 00:13:29.352 18:00:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.352 18:00:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.352 18:00:18 -- target/invalid.sh@25 -- # printf %x 37 00:13:29.352 18:00:18 -- target/invalid.sh@25 -- # echo -e '\x25' 00:13:29.352 18:00:18 -- target/invalid.sh@25 -- # string+=% 00:13:29.352 18:00:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.352 18:00:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.352 18:00:18 -- target/invalid.sh@25 -- # printf %x 89 00:13:29.352 18:00:18 -- target/invalid.sh@25 -- # echo -e '\x59' 00:13:29.352 18:00:18 -- target/invalid.sh@25 -- # string+=Y 00:13:29.352 18:00:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.352 18:00:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.352 18:00:18 -- target/invalid.sh@25 -- # printf %x 71 00:13:29.352 18:00:18 -- target/invalid.sh@25 -- # echo -e '\x47' 00:13:29.352 18:00:18 -- target/invalid.sh@25 -- # string+=G 00:13:29.352 18:00:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.352 18:00:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.352 18:00:18 -- target/invalid.sh@25 -- # printf %x 50 00:13:29.352 18:00:18 -- target/invalid.sh@25 -- # echo -e '\x32' 00:13:29.352 18:00:18 -- target/invalid.sh@25 -- # string+=2 00:13:29.352 18:00:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.352 18:00:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.352 18:00:18 -- target/invalid.sh@25 -- # printf %x 62 00:13:29.352 18:00:18 -- target/invalid.sh@25 -- # echo -e '\x3e' 00:13:29.352 18:00:18 -- target/invalid.sh@25 -- # string+='>' 00:13:29.352 18:00:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.352 18:00:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.352 18:00:18 -- target/invalid.sh@25 -- # printf %x 124 00:13:29.352 18:00:18 -- target/invalid.sh@25 -- # echo -e '\x7c' 00:13:29.352 18:00:18 -- target/invalid.sh@25 -- # string+='|' 00:13:29.352 18:00:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.352 18:00:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.352 18:00:18 -- target/invalid.sh@28 -- # [[ ) == \- ]] 00:13:29.352 18:00:18 -- target/invalid.sh@31 -- # echo ')5qsDxo5Qyr=Yuz%YG2>|' 00:13:29.352 18:00:18 -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s ')5qsDxo5Qyr=Yuz%YG2>|' nqn.2016-06.io.spdk:cnode3234 00:13:29.611 [2024-04-15 18:00:18.359401] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3234: invalid serial number ')5qsDxo5Qyr=Yuz%YG2>|' 00:13:29.611 18:00:18 -- target/invalid.sh@54 -- # out='request: 00:13:29.611 { 00:13:29.611 "nqn": "nqn.2016-06.io.spdk:cnode3234", 00:13:29.611 "serial_number": ")5qsDxo5Qyr=Yuz%YG2>|", 00:13:29.611 "method": "nvmf_create_subsystem", 00:13:29.611 "req_id": 1 00:13:29.611 } 00:13:29.611 Got JSON-RPC error response 00:13:29.611 response: 00:13:29.611 { 00:13:29.611 "code": -32602, 00:13:29.611 "message": "Invalid SN )5qsDxo5Qyr=Yuz%YG2>|" 00:13:29.611 }' 00:13:29.611 18:00:18 -- target/invalid.sh@55 -- # [[ request: 00:13:29.611 { 00:13:29.611 "nqn": "nqn.2016-06.io.spdk:cnode3234", 00:13:29.611 "serial_number": ")5qsDxo5Qyr=Yuz%YG2>|", 00:13:29.611 "method": "nvmf_create_subsystem", 00:13:29.611 "req_id": 1 00:13:29.611 } 00:13:29.611 Got JSON-RPC error response 00:13:29.611 response: 00:13:29.611 { 00:13:29.611 "code": -32602, 00:13:29.611 
"message": "Invalid SN )5qsDxo5Qyr=Yuz%YG2>|" 00:13:29.611 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:29.611 18:00:18 -- target/invalid.sh@58 -- # gen_random_s 41 00:13:29.611 18:00:18 -- target/invalid.sh@19 -- # local length=41 ll 00:13:29.611 18:00:18 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:29.611 18:00:18 -- target/invalid.sh@21 -- # local chars 00:13:29.611 18:00:18 -- target/invalid.sh@22 -- # local string 00:13:29.611 18:00:18 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:29.611 18:00:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.611 18:00:18 -- target/invalid.sh@25 -- # printf %x 124 00:13:29.611 18:00:18 -- target/invalid.sh@25 -- # echo -e '\x7c' 00:13:29.611 18:00:18 -- target/invalid.sh@25 -- # string+='|' 00:13:29.611 18:00:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.611 18:00:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.611 18:00:18 -- target/invalid.sh@25 -- # printf %x 75 00:13:29.611 18:00:18 -- target/invalid.sh@25 -- # echo -e '\x4b' 00:13:29.611 18:00:18 -- target/invalid.sh@25 -- # string+=K 00:13:29.611 18:00:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.611 18:00:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.611 18:00:18 -- target/invalid.sh@25 -- # printf %x 81 00:13:29.611 18:00:18 -- target/invalid.sh@25 -- # echo -e '\x51' 00:13:29.611 18:00:18 -- target/invalid.sh@25 -- # string+=Q 00:13:29.611 18:00:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.611 18:00:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.611 18:00:18 -- target/invalid.sh@25 -- # printf %x 53 00:13:29.611 18:00:18 -- target/invalid.sh@25 -- # echo -e '\x35' 00:13:29.611 18:00:18 -- target/invalid.sh@25 -- # string+=5 00:13:29.611 18:00:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.611 18:00:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.611 18:00:18 -- target/invalid.sh@25 -- # printf %x 44 00:13:29.611 18:00:18 -- target/invalid.sh@25 -- # echo -e '\x2c' 00:13:29.611 18:00:18 -- target/invalid.sh@25 -- # string+=, 00:13:29.611 18:00:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.611 18:00:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.611 18:00:18 -- target/invalid.sh@25 -- # printf %x 48 00:13:29.611 18:00:18 -- target/invalid.sh@25 -- # echo -e '\x30' 00:13:29.611 18:00:18 -- target/invalid.sh@25 -- # string+=0 00:13:29.611 18:00:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.611 18:00:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.611 18:00:18 -- target/invalid.sh@25 -- # printf %x 57 00:13:29.611 18:00:18 -- target/invalid.sh@25 -- # echo -e '\x39' 00:13:29.611 18:00:18 -- target/invalid.sh@25 -- # string+=9 00:13:29.611 18:00:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.611 18:00:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.611 18:00:18 -- target/invalid.sh@25 -- # printf %x 91 00:13:29.611 18:00:18 -- target/invalid.sh@25 -- # echo -e '\x5b' 00:13:29.611 18:00:18 -- target/invalid.sh@25 -- # string+='[' 00:13:29.611 18:00:18 -- target/invalid.sh@24 -- # 
(( ll++ )) 00:13:29.611 18:00:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.611 18:00:18 -- target/invalid.sh@25 -- # printf %x 114 00:13:29.611 18:00:18 -- target/invalid.sh@25 -- # echo -e '\x72' 00:13:29.611 18:00:18 -- target/invalid.sh@25 -- # string+=r 00:13:29.611 18:00:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.611 18:00:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.611 18:00:18 -- target/invalid.sh@25 -- # printf %x 115 00:13:29.611 18:00:18 -- target/invalid.sh@25 -- # echo -e '\x73' 00:13:29.611 18:00:18 -- target/invalid.sh@25 -- # string+=s 00:13:29.611 18:00:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.611 18:00:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.611 18:00:18 -- target/invalid.sh@25 -- # printf %x 65 00:13:29.611 18:00:18 -- target/invalid.sh@25 -- # echo -e '\x41' 00:13:29.611 18:00:18 -- target/invalid.sh@25 -- # string+=A 00:13:29.611 18:00:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.611 18:00:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.611 18:00:18 -- target/invalid.sh@25 -- # printf %x 86 00:13:29.611 18:00:18 -- target/invalid.sh@25 -- # echo -e '\x56' 00:13:29.611 18:00:18 -- target/invalid.sh@25 -- # string+=V 00:13:29.611 18:00:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.611 18:00:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.611 18:00:18 -- target/invalid.sh@25 -- # printf %x 33 00:13:29.611 18:00:18 -- target/invalid.sh@25 -- # echo -e '\x21' 00:13:29.611 18:00:18 -- target/invalid.sh@25 -- # string+='!' 00:13:29.611 18:00:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.611 18:00:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.611 18:00:18 -- target/invalid.sh@25 -- # printf %x 95 00:13:29.611 18:00:18 -- target/invalid.sh@25 -- # echo -e '\x5f' 00:13:29.611 18:00:18 -- target/invalid.sh@25 -- # string+=_ 00:13:29.611 18:00:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.611 18:00:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.611 18:00:18 -- target/invalid.sh@25 -- # printf %x 60 00:13:29.611 18:00:18 -- target/invalid.sh@25 -- # echo -e '\x3c' 00:13:29.611 18:00:18 -- target/invalid.sh@25 -- # string+='<' 00:13:29.611 18:00:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.611 18:00:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.611 18:00:18 -- target/invalid.sh@25 -- # printf %x 110 00:13:29.611 18:00:18 -- target/invalid.sh@25 -- # echo -e '\x6e' 00:13:29.611 18:00:18 -- target/invalid.sh@25 -- # string+=n 00:13:29.611 18:00:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.611 18:00:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.611 18:00:18 -- target/invalid.sh@25 -- # printf %x 44 00:13:29.611 18:00:18 -- target/invalid.sh@25 -- # echo -e '\x2c' 00:13:29.611 18:00:18 -- target/invalid.sh@25 -- # string+=, 00:13:29.611 18:00:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.611 18:00:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.611 18:00:18 -- target/invalid.sh@25 -- # printf %x 40 00:13:29.611 18:00:18 -- target/invalid.sh@25 -- # echo -e '\x28' 00:13:29.611 18:00:18 -- target/invalid.sh@25 -- # string+='(' 00:13:29.611 18:00:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.611 18:00:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.611 18:00:18 -- target/invalid.sh@25 -- # printf %x 112 00:13:29.611 18:00:18 -- target/invalid.sh@25 -- # echo -e '\x70' 00:13:29.611 18:00:18 -- target/invalid.sh@25 -- # string+=p 00:13:29.611 18:00:18 -- target/invalid.sh@24 -- # (( 
ll++ )) 00:13:29.611 18:00:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.611 18:00:18 -- target/invalid.sh@25 -- # printf %x 73 00:13:29.611 18:00:18 -- target/invalid.sh@25 -- # echo -e '\x49' 00:13:29.611 18:00:18 -- target/invalid.sh@25 -- # string+=I 00:13:29.611 18:00:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.611 18:00:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.611 18:00:18 -- target/invalid.sh@25 -- # printf %x 89 00:13:29.611 18:00:18 -- target/invalid.sh@25 -- # echo -e '\x59' 00:13:29.611 18:00:18 -- target/invalid.sh@25 -- # string+=Y 00:13:29.611 18:00:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.611 18:00:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.611 18:00:18 -- target/invalid.sh@25 -- # printf %x 49 00:13:29.611 18:00:18 -- target/invalid.sh@25 -- # echo -e '\x31' 00:13:29.611 18:00:18 -- target/invalid.sh@25 -- # string+=1 00:13:29.611 18:00:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.611 18:00:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.611 18:00:18 -- target/invalid.sh@25 -- # printf %x 118 00:13:29.611 18:00:18 -- target/invalid.sh@25 -- # echo -e '\x76' 00:13:29.611 18:00:18 -- target/invalid.sh@25 -- # string+=v 00:13:29.611 18:00:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.611 18:00:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.611 18:00:18 -- target/invalid.sh@25 -- # printf %x 53 00:13:29.611 18:00:18 -- target/invalid.sh@25 -- # echo -e '\x35' 00:13:29.611 18:00:18 -- target/invalid.sh@25 -- # string+=5 00:13:29.611 18:00:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.611 18:00:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.611 18:00:18 -- target/invalid.sh@25 -- # printf %x 33 00:13:29.611 18:00:18 -- target/invalid.sh@25 -- # echo -e '\x21' 00:13:29.611 18:00:18 -- target/invalid.sh@25 -- # string+='!' 
00:13:29.611 18:00:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.611 18:00:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.611 18:00:18 -- target/invalid.sh@25 -- # printf %x 79 00:13:29.611 18:00:18 -- target/invalid.sh@25 -- # echo -e '\x4f' 00:13:29.611 18:00:18 -- target/invalid.sh@25 -- # string+=O 00:13:29.611 18:00:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.611 18:00:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.611 18:00:18 -- target/invalid.sh@25 -- # printf %x 84 00:13:29.611 18:00:18 -- target/invalid.sh@25 -- # echo -e '\x54' 00:13:29.611 18:00:18 -- target/invalid.sh@25 -- # string+=T 00:13:29.611 18:00:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.612 18:00:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.612 18:00:18 -- target/invalid.sh@25 -- # printf %x 70 00:13:29.612 18:00:18 -- target/invalid.sh@25 -- # echo -e '\x46' 00:13:29.612 18:00:18 -- target/invalid.sh@25 -- # string+=F 00:13:29.612 18:00:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.612 18:00:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.612 18:00:18 -- target/invalid.sh@25 -- # printf %x 92 00:13:29.612 18:00:18 -- target/invalid.sh@25 -- # echo -e '\x5c' 00:13:29.612 18:00:18 -- target/invalid.sh@25 -- # string+='\' 00:13:29.612 18:00:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.612 18:00:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.612 18:00:18 -- target/invalid.sh@25 -- # printf %x 106 00:13:29.612 18:00:18 -- target/invalid.sh@25 -- # echo -e '\x6a' 00:13:29.612 18:00:18 -- target/invalid.sh@25 -- # string+=j 00:13:29.612 18:00:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.612 18:00:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.612 18:00:18 -- target/invalid.sh@25 -- # printf %x 121 00:13:29.612 18:00:18 -- target/invalid.sh@25 -- # echo -e '\x79' 00:13:29.612 18:00:18 -- target/invalid.sh@25 -- # string+=y 00:13:29.612 18:00:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.612 18:00:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.612 18:00:18 -- target/invalid.sh@25 -- # printf %x 80 00:13:29.612 18:00:18 -- target/invalid.sh@25 -- # echo -e '\x50' 00:13:29.612 18:00:18 -- target/invalid.sh@25 -- # string+=P 00:13:29.612 18:00:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.612 18:00:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.612 18:00:18 -- target/invalid.sh@25 -- # printf %x 32 00:13:29.612 18:00:18 -- target/invalid.sh@25 -- # echo -e '\x20' 00:13:29.612 18:00:18 -- target/invalid.sh@25 -- # string+=' ' 00:13:29.612 18:00:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.612 18:00:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.612 18:00:18 -- target/invalid.sh@25 -- # printf %x 80 00:13:29.612 18:00:18 -- target/invalid.sh@25 -- # echo -e '\x50' 00:13:29.612 18:00:18 -- target/invalid.sh@25 -- # string+=P 00:13:29.612 18:00:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.612 18:00:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.612 18:00:18 -- target/invalid.sh@25 -- # printf %x 37 00:13:29.612 18:00:18 -- target/invalid.sh@25 -- # echo -e '\x25' 00:13:29.612 18:00:18 -- target/invalid.sh@25 -- # string+=% 00:13:29.612 18:00:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.612 18:00:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.612 18:00:18 -- target/invalid.sh@25 -- # printf %x 96 00:13:29.612 18:00:18 -- target/invalid.sh@25 -- # echo -e '\x60' 00:13:29.612 18:00:18 -- target/invalid.sh@25 -- # string+='`' 
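
[Note] The printf/echo run above and below is gen_random_s building a 41-character string one code point at a time from the pool '32'..'127'. A compact reconstruction (not the verbatim helper; invalid.sh seeds RANDOM=0 earlier, so these "random" strings are reproducible, and printf -v is used here instead of the traced string+=$(echo -e ...) so a generated space is not stripped by command substitution):

    # Sketch of gen_random_s as reconstructed from its xtrace.
    gen_random_s() {
        local length=$1 ll hex ch string=
        local chars=($(seq 32 127))    # same code-point pool as the trace
        for (( ll = 0; ll < length; ll++ )); do
            hex=$(printf '%x' "${chars[RANDOM % ${#chars[@]}]}")
            printf -v ch "\\x$hex"     # code point -> character
            string+=$ch
        done
        # The real helper also guards against a leading '-'
        # (the [[ ... == \- ]] check visible in the trace).
        printf '%s\n' "$string"
    }
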
[xtrace condensed — five more iterations (00:13:29.612–00:13:29.870) append I 0 " @ n, then (( ll < length )) goes false and the loop exits] 00:13:29.870 18:00:18 -- target/invalid.sh@28 -- # [[ | == \- ]] 00:13:29.870 18:00:18 -- target/invalid.sh@31 -- # echo '|KQ5,09[rsAV!_ /dev/null' 00:13:33.665 18:00:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:36.227 18:00:24 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:36.227 00:13:36.227 real 0m10.537s 00:13:36.227 user 0m27.758s 00:13:36.227 sys 0m2.971s 00:13:36.227 18:00:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:36.227 18:00:24 -- common/autotest_common.sh@10 -- # set +x 00:13:36.227 ************************************ 00:13:36.227 END TEST nvmf_invalid 00:13:36.227 ************************************ 00:13:36.227 18:00:24 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:36.227 18:00:24 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:36.227 18:00:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:36.227 18:00:24 -- common/autotest_common.sh@10 -- # set +x 00:13:36.227 ************************************ 00:13:36.227 START TEST nvmf_abort 00:13:36.227 ************************************ 00:13:36.227 18:00:24 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:36.227 * Looking for test storage...
00:13:36.227 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:36.227 18:00:24 -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:36.227 18:00:24 -- nvmf/common.sh@7 -- # uname -s 00:13:36.227 18:00:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:36.227 18:00:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:36.227 18:00:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:36.227 18:00:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:36.227 18:00:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:36.227 18:00:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:36.227 18:00:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:36.227 18:00:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:36.227 18:00:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:36.227 18:00:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:36.227 18:00:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:13:36.227 18:00:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:13:36.227 18:00:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:36.227 18:00:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:36.227 18:00:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:36.227 18:00:24 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:36.227 18:00:24 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:36.227 18:00:24 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:36.227 18:00:24 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:36.227 18:00:24 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:36.227 18:00:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.228 18:00:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.228 18:00:24 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.228 18:00:24 -- paths/export.sh@5 -- # export PATH 00:13:36.228 18:00:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.228 18:00:24 -- nvmf/common.sh@47 -- # : 0 00:13:36.228 18:00:24 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:36.228 18:00:24 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:36.228 18:00:24 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:36.228 18:00:24 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:36.228 18:00:24 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:36.228 18:00:24 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:36.228 18:00:24 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:36.228 18:00:24 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:36.228 18:00:24 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:36.228 18:00:24 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:13:36.228 18:00:24 -- target/abort.sh@14 -- # nvmftestinit 00:13:36.228 18:00:24 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:13:36.228 18:00:24 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:36.228 18:00:24 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:36.228 18:00:24 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:36.228 18:00:24 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:36.228 18:00:24 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:36.228 18:00:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:36.228 18:00:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:36.228 18:00:24 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:13:36.228 18:00:24 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:13:36.228 18:00:24 -- nvmf/common.sh@285 -- # xtrace_disable 00:13:36.228 18:00:24 -- common/autotest_common.sh@10 -- # set +x 00:13:38.784 18:00:27 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:38.784 18:00:27 -- nvmf/common.sh@291 -- # pci_devs=() 00:13:38.784 18:00:27 -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:38.784 18:00:27 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:38.784 18:00:27 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:38.784 18:00:27 -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:38.784 18:00:27 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:38.784 18:00:27 -- nvmf/common.sh@295 -- # net_devs=() 00:13:38.784 18:00:27 -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:38.784 18:00:27 -- nvmf/common.sh@296 -- 
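One side effect visible in the paths/export.sh trace above: each nested source prepends the same /opt/go, /opt/protoc and /opt/golangci directories again, so PATH accumulates duplicate copies of each. A guard that makes the prepend idempotent would avoid that — a sketch, not something the harness currently ships:

    path_prepend() {
        # Prepend only when the directory is not already on PATH.
        case ":$PATH:" in
            *":$1:"*) ;;
            *) PATH="$1:$PATH" ;;
        esac
    }
    path_prepend /opt/golangci/1.54.2/bin
    path_prepend /opt/protoc/21.7/bin
    path_prepend /opt/go/1.21.1/bin
    export PATH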
# e810=() 00:13:38.784 18:00:27 -- nvmf/common.sh@296 -- # local -ga e810 00:13:38.784 18:00:27 -- nvmf/common.sh@297 -- # x722=() 00:13:38.784 18:00:27 -- nvmf/common.sh@297 -- # local -ga x722 00:13:38.784 18:00:27 -- nvmf/common.sh@298 -- # mlx=() 00:13:38.784 18:00:27 -- nvmf/common.sh@298 -- # local -ga mlx 00:13:38.784 18:00:27 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:38.784 18:00:27 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:38.784 18:00:27 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:38.784 18:00:27 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:38.784 18:00:27 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:38.784 18:00:27 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:38.784 18:00:27 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:38.784 18:00:27 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:38.784 18:00:27 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:38.784 18:00:27 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:38.784 18:00:27 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:38.784 18:00:27 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:38.784 18:00:27 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:38.784 18:00:27 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:38.784 18:00:27 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:38.784 18:00:27 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:38.784 18:00:27 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:38.784 18:00:27 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:38.784 18:00:27 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:13:38.784 Found 0000:84:00.0 (0x8086 - 0x159b) 00:13:38.784 18:00:27 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:38.784 18:00:27 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:38.784 18:00:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:38.784 18:00:27 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:38.784 18:00:27 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:38.784 18:00:27 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:38.784 18:00:27 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:13:38.784 Found 0000:84:00.1 (0x8086 - 0x159b) 00:13:38.784 18:00:27 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:38.784 18:00:27 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:38.784 18:00:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:38.784 18:00:27 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:38.784 18:00:27 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:38.784 18:00:27 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:38.784 18:00:27 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:38.784 18:00:27 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:38.784 18:00:27 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:38.784 18:00:27 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:38.784 18:00:27 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:38.784 18:00:27 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:38.784 18:00:27 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:13:38.784 Found 
net devices under 0000:84:00.0: cvl_0_0 00:13:38.784 18:00:27 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:38.784 18:00:27 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:38.784 18:00:27 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:38.784 18:00:27 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:38.784 18:00:27 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:38.784 18:00:27 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:13:38.784 Found net devices under 0000:84:00.1: cvl_0_1 00:13:38.784 18:00:27 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:38.784 18:00:27 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:13:38.784 18:00:27 -- nvmf/common.sh@403 -- # is_hw=yes 00:13:38.784 18:00:27 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:13:38.784 18:00:27 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:13:38.784 18:00:27 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:13:38.784 18:00:27 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:38.784 18:00:27 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:38.784 18:00:27 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:38.784 18:00:27 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:38.784 18:00:27 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:38.784 18:00:27 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:38.784 18:00:27 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:38.784 18:00:27 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:38.784 18:00:27 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:38.784 18:00:27 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:38.784 18:00:27 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:38.784 18:00:27 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:38.784 18:00:27 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:38.784 18:00:27 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:38.784 18:00:27 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:38.784 18:00:27 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:38.784 18:00:27 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:38.784 18:00:27 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:38.784 18:00:27 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:38.784 18:00:27 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:38.784 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:38.784 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.236 ms 00:13:38.784 00:13:38.784 --- 10.0.0.2 ping statistics --- 00:13:38.784 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:38.784 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:13:38.784 18:00:27 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:38.784 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:38.784 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms 00:13:38.784 00:13:38.784 --- 10.0.0.1 ping statistics --- 00:13:38.784 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:38.784 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:13:38.784 18:00:27 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:38.784 18:00:27 -- nvmf/common.sh@411 -- # return 0 00:13:38.784 18:00:27 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:38.784 18:00:27 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:38.784 18:00:27 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:13:38.784 18:00:27 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:13:38.784 18:00:27 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:38.784 18:00:27 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:13:38.784 18:00:27 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:13:38.784 18:00:27 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:13:38.784 18:00:27 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:38.784 18:00:27 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:38.784 18:00:27 -- common/autotest_common.sh@10 -- # set +x 00:13:38.784 18:00:27 -- nvmf/common.sh@470 -- # nvmfpid=3269904 00:13:38.784 18:00:27 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:38.784 18:00:27 -- nvmf/common.sh@471 -- # waitforlisten 3269904 00:13:38.784 18:00:27 -- common/autotest_common.sh@817 -- # '[' -z 3269904 ']' 00:13:38.784 18:00:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:38.784 18:00:27 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:38.784 18:00:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:38.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:38.784 18:00:27 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:38.784 18:00:27 -- common/autotest_common.sh@10 -- # set +x 00:13:38.784 [2024-04-15 18:00:27.382915] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:13:38.784 [2024-04-15 18:00:27.383014] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:38.784 EAL: No free 2048 kB hugepages reported on node 1 00:13:38.784 [2024-04-15 18:00:27.461978] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:38.784 [2024-04-15 18:00:27.559124] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:38.784 [2024-04-15 18:00:27.559205] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:38.784 [2024-04-15 18:00:27.559222] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:38.784 [2024-04-15 18:00:27.559236] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:38.784 [2024-04-15 18:00:27.559248] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
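The nvmf_tcp_init sequence traced just above reduces to a short, runnable setup; every command below is taken from the trace. The two E810 ports (0000:84:00.0 and 0000:84:00.1, presumably cabled to each other on this rig) become target and initiator, with the target port isolated in its own network namespace:

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> initiator

Hiding one port in a namespace yields a real TCP round trip between two ports of the same host without the 10.0.0.x addresses colliding in the root routing table, which is exactly what the two successful pings confirm.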
00:13:38.784 [2024-04-15 18:00:27.559347] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:38.784 [2024-04-15 18:00:27.559400] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:38.784 [2024-04-15 18:00:27.559403] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:38.784 18:00:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:38.784 18:00:27 -- common/autotest_common.sh@850 -- # return 0 00:13:38.784 18:00:27 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:38.784 18:00:27 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:38.784 18:00:27 -- common/autotest_common.sh@10 -- # set +x 00:13:39.043 18:00:27 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:39.043 18:00:27 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:13:39.043 18:00:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:39.043 18:00:27 -- common/autotest_common.sh@10 -- # set +x 00:13:39.043 [2024-04-15 18:00:27.755405] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:39.043 18:00:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:39.043 18:00:27 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:13:39.043 18:00:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:39.043 18:00:27 -- common/autotest_common.sh@10 -- # set +x 00:13:39.043 Malloc0 00:13:39.043 18:00:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:39.043 18:00:27 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:39.043 18:00:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:39.043 18:00:27 -- common/autotest_common.sh@10 -- # set +x 00:13:39.043 Delay0 00:13:39.043 18:00:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:39.043 18:00:27 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:39.043 18:00:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:39.043 18:00:27 -- common/autotest_common.sh@10 -- # set +x 00:13:39.043 18:00:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:39.043 18:00:27 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:13:39.043 18:00:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:39.043 18:00:27 -- common/autotest_common.sh@10 -- # set +x 00:13:39.043 18:00:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:39.043 18:00:27 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:39.043 18:00:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:39.043 18:00:27 -- common/autotest_common.sh@10 -- # set +x 00:13:39.043 [2024-04-15 18:00:27.823250] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:39.043 18:00:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:39.043 18:00:27 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:39.043 18:00:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:39.043 18:00:27 -- common/autotest_common.sh@10 -- # set +x 00:13:39.043 18:00:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:39.043 18:00:27 -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:13:39.043 EAL: No free 2048 kB hugepages reported on node 1 00:13:39.043 [2024-04-15 18:00:27.929276] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:13:41.634 Initializing NVMe Controllers 00:13:41.634 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:13:41.634 controller IO queue size 128 less than required 00:13:41.634 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:13:41.634 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:13:41.634 Initialization complete. Launching workers. 00:13:41.634 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 32870 00:13:41.634 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 32931, failed to submit 62 00:13:41.634 success 32874, unsuccess 57, failed 0 00:13:41.634 18:00:29 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:41.634 18:00:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:41.634 18:00:29 -- common/autotest_common.sh@10 -- # set +x 00:13:41.634 18:00:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:41.634 18:00:29 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:13:41.634 18:00:29 -- target/abort.sh@38 -- # nvmftestfini 00:13:41.634 18:00:29 -- nvmf/common.sh@477 -- # nvmfcleanup 00:13:41.634 18:00:29 -- nvmf/common.sh@117 -- # sync 00:13:41.634 18:00:29 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:41.634 18:00:29 -- nvmf/common.sh@120 -- # set +e 00:13:41.634 18:00:29 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:41.634 18:00:29 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:41.634 rmmod nvme_tcp 00:13:41.634 rmmod nvme_fabrics 00:13:41.634 rmmod nvme_keyring 00:13:41.634 18:00:30 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:41.634 18:00:30 -- nvmf/common.sh@124 -- # set -e 00:13:41.634 18:00:30 -- nvmf/common.sh@125 -- # return 0 00:13:41.634 18:00:30 -- nvmf/common.sh@478 -- # '[' -n 3269904 ']' 00:13:41.634 18:00:30 -- nvmf/common.sh@479 -- # killprocess 3269904 00:13:41.634 18:00:30 -- common/autotest_common.sh@936 -- # '[' -z 3269904 ']' 00:13:41.634 18:00:30 -- common/autotest_common.sh@940 -- # kill -0 3269904 00:13:41.634 18:00:30 -- common/autotest_common.sh@941 -- # uname 00:13:41.634 18:00:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:41.634 18:00:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3269904 00:13:41.634 18:00:30 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:41.634 18:00:30 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:41.634 18:00:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3269904' 00:13:41.634 killing process with pid 3269904 00:13:41.634 18:00:30 -- common/autotest_common.sh@955 -- # kill 3269904 00:13:41.634 18:00:30 -- common/autotest_common.sh@960 -- # wait 3269904 00:13:41.634 18:00:30 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:41.634 18:00:30 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:13:41.634 18:00:30 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:13:41.635 18:00:30 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:41.635 18:00:30 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:41.635 
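Stripped of its rpc_cmd/xtrace wrapping, the nvmf_abort run that just completed is this sequence (shown as direct rpc.py calls, the form the later ns_hotplug_stress test uses):

    rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
    rpc.py bdev_malloc_create 64 4096 -b Malloc0
    # Delay0 wraps Malloc0 with large injected latencies so queued I/O
    # is still in flight when the abort arrives.
    rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128

The reported counters also reconcile: 32931 aborts submitted + 62 failed to submit = 32993, matching the 123 completed + 32870 failed I/Os they targeted, and 32874 successful + 57 unsuccessful aborts add back up to the 32931 submitted.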
18:00:30 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:41.635 18:00:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:41.635 18:00:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:43.540 18:00:32 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:43.540 00:13:43.540 real 0m7.597s 00:13:43.540 user 0m10.617s 00:13:43.540 sys 0m2.846s 00:13:43.540 18:00:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:43.540 18:00:32 -- common/autotest_common.sh@10 -- # set +x 00:13:43.540 ************************************ 00:13:43.540 END TEST nvmf_abort 00:13:43.540 ************************************ 00:13:43.540 18:00:32 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:43.540 18:00:32 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:43.540 18:00:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:43.540 18:00:32 -- common/autotest_common.sh@10 -- # set +x 00:13:43.799 ************************************ 00:13:43.799 START TEST nvmf_ns_hotplug_stress 00:13:43.799 ************************************ 00:13:43.799 18:00:32 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:43.799 * Looking for test storage... 00:13:43.799 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:43.799 18:00:32 -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:43.799 18:00:32 -- nvmf/common.sh@7 -- # uname -s 00:13:43.799 18:00:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:43.799 18:00:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:43.799 18:00:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:43.799 18:00:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:43.799 18:00:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:43.799 18:00:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:43.799 18:00:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:43.799 18:00:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:43.799 18:00:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:43.799 18:00:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:43.799 18:00:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:13:43.799 18:00:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:13:43.799 18:00:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:43.799 18:00:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:43.799 18:00:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:43.799 18:00:32 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:43.799 18:00:32 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:43.799 18:00:32 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:43.799 18:00:32 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:43.799 18:00:32 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:43.799 18:00:32 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.799 18:00:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.799 18:00:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.799 18:00:32 -- paths/export.sh@5 -- # export PATH 00:13:43.799 18:00:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.799 18:00:32 -- nvmf/common.sh@47 -- # : 0 00:13:43.799 18:00:32 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:43.799 18:00:32 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:43.799 18:00:32 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:43.799 18:00:32 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:43.799 18:00:32 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:43.799 18:00:32 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:43.799 18:00:32 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:43.799 18:00:32 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:43.799 18:00:32 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:43.799 18:00:32 -- target/ns_hotplug_stress.sh@13 -- # nvmftestinit 00:13:43.799 18:00:32 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:13:43.799 18:00:32 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:43.799 18:00:32 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:43.799 18:00:32 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:43.799 18:00:32 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:43.799 18:00:32 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:13:43.799 18:00:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:43.799 18:00:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:43.799 18:00:32 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:13:43.799 18:00:32 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:13:43.799 18:00:32 -- nvmf/common.sh@285 -- # xtrace_disable 00:13:43.799 18:00:32 -- common/autotest_common.sh@10 -- # set +x 00:13:46.336 18:00:34 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:46.336 18:00:34 -- nvmf/common.sh@291 -- # pci_devs=() 00:13:46.336 18:00:34 -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:46.336 18:00:34 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:46.336 18:00:34 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:46.336 18:00:34 -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:46.336 18:00:34 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:46.336 18:00:34 -- nvmf/common.sh@295 -- # net_devs=() 00:13:46.336 18:00:34 -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:46.336 18:00:34 -- nvmf/common.sh@296 -- # e810=() 00:13:46.336 18:00:34 -- nvmf/common.sh@296 -- # local -ga e810 00:13:46.336 18:00:34 -- nvmf/common.sh@297 -- # x722=() 00:13:46.336 18:00:34 -- nvmf/common.sh@297 -- # local -ga x722 00:13:46.336 18:00:34 -- nvmf/common.sh@298 -- # mlx=() 00:13:46.336 18:00:34 -- nvmf/common.sh@298 -- # local -ga mlx 00:13:46.336 18:00:34 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:46.336 18:00:34 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:46.336 18:00:34 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:46.336 18:00:34 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:46.336 18:00:34 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:46.336 18:00:34 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:46.336 18:00:34 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:46.336 18:00:34 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:46.336 18:00:34 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:46.336 18:00:34 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:46.336 18:00:34 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:46.336 18:00:34 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:46.336 18:00:34 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:46.336 18:00:34 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:46.336 18:00:34 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:46.336 18:00:34 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:46.336 18:00:34 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:46.336 18:00:34 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:46.336 18:00:34 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:13:46.336 Found 0000:84:00.0 (0x8086 - 0x159b) 00:13:46.336 18:00:34 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:46.336 18:00:34 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:46.336 18:00:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:46.336 18:00:34 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:46.336 18:00:34 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:46.336 18:00:34 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:46.336 18:00:34 -- 
nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:13:46.336 Found 0000:84:00.1 (0x8086 - 0x159b) 00:13:46.336 18:00:34 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:46.336 18:00:34 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:46.336 18:00:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:46.336 18:00:34 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:46.336 18:00:34 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:46.336 18:00:34 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:46.336 18:00:34 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:46.336 18:00:34 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:46.336 18:00:34 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:46.336 18:00:34 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:46.336 18:00:34 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:46.336 18:00:34 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:46.336 18:00:34 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:13:46.336 Found net devices under 0000:84:00.0: cvl_0_0 00:13:46.336 18:00:34 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:46.336 18:00:34 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:46.336 18:00:34 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:46.336 18:00:34 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:46.336 18:00:34 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:46.336 18:00:34 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:13:46.336 Found net devices under 0000:84:00.1: cvl_0_1 00:13:46.336 18:00:34 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:46.336 18:00:34 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:13:46.336 18:00:34 -- nvmf/common.sh@403 -- # is_hw=yes 00:13:46.336 18:00:34 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:13:46.336 18:00:34 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:13:46.336 18:00:34 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:13:46.336 18:00:34 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:46.336 18:00:34 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:46.336 18:00:34 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:46.336 18:00:34 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:46.336 18:00:34 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:46.336 18:00:34 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:46.336 18:00:34 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:46.336 18:00:34 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:46.336 18:00:34 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:46.336 18:00:34 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:46.336 18:00:34 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:46.336 18:00:34 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:46.336 18:00:34 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:46.336 18:00:34 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:46.336 18:00:34 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:46.336 18:00:34 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:46.336 18:00:34 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
00:13:46.336 18:00:34 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:46.336 18:00:34 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:46.336 18:00:34 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:46.336 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:46.336 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.141 ms 00:13:46.336 00:13:46.336 --- 10.0.0.2 ping statistics --- 00:13:46.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:46.336 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:13:46.336 18:00:34 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:46.336 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:46.336 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:13:46.336 00:13:46.336 --- 10.0.0.1 ping statistics --- 00:13:46.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:46.336 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:13:46.336 18:00:34 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:46.336 18:00:34 -- nvmf/common.sh@411 -- # return 0 00:13:46.336 18:00:34 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:46.336 18:00:34 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:46.336 18:00:34 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:13:46.336 18:00:34 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:13:46.336 18:00:34 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:46.336 18:00:34 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:13:46.336 18:00:34 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:13:46.336 18:00:34 -- target/ns_hotplug_stress.sh@14 -- # nvmfappstart -m 0xE 00:13:46.336 18:00:34 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:46.336 18:00:34 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:46.336 18:00:34 -- common/autotest_common.sh@10 -- # set +x 00:13:46.336 18:00:35 -- nvmf/common.sh@470 -- # nvmfpid=3272279 00:13:46.336 18:00:35 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:46.336 18:00:35 -- nvmf/common.sh@471 -- # waitforlisten 3272279 00:13:46.336 18:00:35 -- common/autotest_common.sh@817 -- # '[' -z 3272279 ']' 00:13:46.336 18:00:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:46.336 18:00:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:46.336 18:00:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:46.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:46.337 18:00:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:46.337 18:00:35 -- common/autotest_common.sh@10 -- # set +x 00:13:46.337 [2024-04-15 18:00:35.052842] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
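nvmfappstart, traced here for the second time, backgrounds nvmf_tgt inside the target namespace and then waits for its RPC socket before returning. The loop below is a hedged sketch of that waitforlisten step (retry count and sleep interval are assumptions; the real helper lives in autotest_common.sh):

    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    # Poll the UNIX-domain RPC socket until the target answers.
    for (( i = 0; i < 100; i++ )); do
        if scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
            break
        fi
        sleep 0.1
    done
    kill -0 "$nvmfpid"   # fail fast if the target died during startup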
00:13:46.337 [2024-04-15 18:00:35.052930] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:46.337 EAL: No free 2048 kB hugepages reported on node 1 00:13:46.337 [2024-04-15 18:00:35.131686] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:46.337 [2024-04-15 18:00:35.224087] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:46.337 [2024-04-15 18:00:35.224150] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:46.337 [2024-04-15 18:00:35.224167] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:46.337 [2024-04-15 18:00:35.224181] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:46.337 [2024-04-15 18:00:35.224194] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:46.337 [2024-04-15 18:00:35.224257] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:46.337 [2024-04-15 18:00:35.224312] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:46.337 [2024-04-15 18:00:35.224315] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:46.595 18:00:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:46.595 18:00:35 -- common/autotest_common.sh@850 -- # return 0 00:13:46.595 18:00:35 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:46.595 18:00:35 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:46.595 18:00:35 -- common/autotest_common.sh@10 -- # set +x 00:13:46.595 18:00:35 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:46.595 18:00:35 -- target/ns_hotplug_stress.sh@16 -- # null_size=1000 00:13:46.595 18:00:35 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:46.855 [2024-04-15 18:00:35.645767] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:46.855 18:00:35 -- target/ns_hotplug_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:47.421 18:00:36 -- target/ns_hotplug_stress.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:47.989 [2024-04-15 18:00:36.767162] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:47.989 18:00:36 -- target/ns_hotplug_stress.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:48.248 18:00:37 -- target/ns_hotplug_stress.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:13:48.817 Malloc0 00:13:49.074 18:00:37 -- target/ns_hotplug_stress.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:49.331 Delay0 00:13:49.331 18:00:38 -- target/ns_hotplug_stress.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:13:49.589 18:00:38 -- target/ns_hotplug_stress.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:13:50.160 NULL1 00:13:50.160 18:00:39 -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:50.419 18:00:39 -- target/ns_hotplug_stress.sh@33 -- # PERF_PID=3272708 00:13:50.419 18:00:39 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3272708 00:13:50.419 18:00:39 -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:13:50.419 18:00:39 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:50.679 EAL: No free 2048 kB hugepages reported on node 1 00:13:52.055 Read completed with error (sct=0, sc=11) 00:13:52.055 18:00:40 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:52.055 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:52.055 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:52.055 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:52.055 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:52.055 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:52.055 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:52.055 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:52.055 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:52.055 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:52.055 18:00:41 -- target/ns_hotplug_stress.sh@40 -- # null_size=1001 00:13:52.055 18:00:41 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:13:52.623 true 00:13:52.623 18:00:41 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3272708 00:13:52.623 18:00:41 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:53.194 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:53.194 18:00:42 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:53.194 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:53.194 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:53.194 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:53.451 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:53.451 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:53.451 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:53.451 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:53.451 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:53.451 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
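The cadence that repeats from here to the end of the section is the hotplug stress loop itself: while spdk_nvme_perf (PID 3272708, 30 s of queue-depth-128 randread) hammers the subsystem, the script removes namespace 1 from cnode1, re-adds Delay0, and resizes NULL1 one unit larger per pass (1001, 1002, ...). The "Read completed with error (sct=0, sc=11)" bursts are the expected host-side fallout of each removal. Condensed to its control flow (the loop structure is an assumption; the individual commands are exactly the traced ones):

    null_size=1000
    while kill -0 "$PERF_PID" 2> /dev/null; do        # perf workload still running?
        rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        (( null_size++ ))
        rpc.py bdev_null_resize NULL1 "$null_size"
    done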
00:13:53.708 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:13:53.708 18:00:42 -- target/ns_hotplug_stress.sh@40 -- # null_size=1002
00:13:53.708 18:00:42 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002
00:13:53.965 true
00:13:53.965 18:00:42 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3272708
00:13:53.966 18:00:42 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:13:54.898 18:00:43 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:13:54.898 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:13:54.898 18:00:43 -- target/ns_hotplug_stress.sh@40 -- # null_size=1003
00:13:54.898 18:00:43 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003
00:13:55.469 true
00:13:55.469 18:00:44 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3272708
00:13:55.469 18:00:44 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:13:56.041 18:00:44 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:13:56.041 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:13:56.319 18:00:45 -- target/ns_hotplug_stress.sh@40 -- # null_size=1004
00:13:56.319 18:00:45 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004
00:13:56.586 true
00:13:56.586 18:00:45 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3272708
00:13:56.586 18:00:45 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:13:58.487 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:13:58.487 18:00:47 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:13:58.487 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:13:58.745 18:00:47 -- target/ns_hotplug_stress.sh@40 -- # null_size=1005
00:13:58.745 18:00:47 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005
00:13:59.002 true
00:13:59.002 18:00:47 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3272708
00:13:59.002 18:00:47 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:13:59.569 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:13:59.569 18:00:48 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:13:59.569 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:13:59.827 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:13:59.828 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:14:00.084 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:14:00.084 18:00:48 -- target/ns_hotplug_stress.sh@40 -- # null_size=1006
00:14:00.084 18:00:48 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006
00:14:00.341 true
00:14:00.341 18:00:49 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3272708
00:14:00.341 18:00:49 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:01.276 18:00:49 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:14:01.276 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:14:01.276 18:00:50 -- target/ns_hotplug_stress.sh@40 -- # null_size=1007
00:14:01.276 18:00:50 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007
00:14:01.842 true
00:14:01.842 18:00:50 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3272708
00:14:01.842 18:00:50 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:02.408 18:00:51 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:14:02.975 18:00:51 -- target/ns_hotplug_stress.sh@40 -- # null_size=1008
00:14:02.975 18:00:51 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008
00:14:03.540 true
00:14:03.540 18:00:52 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3272708
00:14:03.540 18:00:52 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:04.475 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:14:04.475 18:00:53 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:14:04.475 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:14:04.733 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:14:04.991 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:14:04.991 18:00:53 -- target/ns_hotplug_stress.sh@40 -- # null_size=1009
00:14:04.991 18:00:53 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009
00:14:05.250 true
00:14:05.509 18:00:54 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3272708
00:14:05.509 18:00:54 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:06.076 18:00:54 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:14:06.076 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:14:06.333 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:14:06.333 18:00:55 -- target/ns_hotplug_stress.sh@40 -- # null_size=1010
00:14:06.333 18:00:55 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010
00:14:06.590 true
00:14:06.590 18:00:55 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3272708
00:14:06.590 18:00:55 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:07.523 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:14:07.523 18:00:56 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:14:07.523 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:14:07.523 18:00:56 -- target/ns_hotplug_stress.sh@40 -- # null_size=1011
00:14:07.523 18:00:56 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011
00:14:08.090 true
00:14:08.090 18:00:56 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3272708
00:14:08.090 18:00:56 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:08.347 18:00:57 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:14:08.607 18:00:57 -- target/ns_hotplug_stress.sh@40 -- # null_size=1012
00:14:08.607 18:00:57 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012
00:14:08.866 true
00:14:08.866 18:00:57 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3272708
00:14:08.866 18:00:57 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:09.436 18:00:58 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:14:10.005 18:00:58 -- target/ns_hotplug_stress.sh@40 -- # null_size=1013
00:14:10.005 18:00:58 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013
00:14:10.618 true
00:14:10.618 18:00:59 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3272708
00:14:10.619 18:00:59 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:11.552 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:14:11.552 18:01:00 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:14:11.552 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:14:11.808 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:14:11.808 18:01:00 -- target/ns_hotplug_stress.sh@40 -- # null_size=1014
00:14:11.808 18:01:00 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014
00:14:12.067 true
00:14:12.326 18:01:01 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3272708
00:14:12.326 18:01:01 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:12.583 18:01:01 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:14:12.841 18:01:01 -- target/ns_hotplug_stress.sh@40 -- # null_size=1015
00:14:12.841 18:01:01 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015
00:14:13.098 true
00:14:13.098 18:01:01 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3272708
00:14:13.098 18:01:01 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:13.356 18:01:02 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:14:13.923 18:01:02 -- target/ns_hotplug_stress.sh@40 -- # null_size=1016
00:14:13.923 18:01:02 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016
00:14:13.923 true
00:14:13.923 18:01:02 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3272708
00:14:13.923 18:01:02 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:14.856 18:01:03 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:14:14.856 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:14:15.119 18:01:03 -- target/ns_hotplug_stress.sh@40 -- # null_size=1017
00:14:15.119 18:01:03 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017
00:14:15.376 true
00:14:15.376 18:01:04 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3272708
00:14:15.376 18:01:04 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:15.638 18:01:04 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:14:15.896 18:01:04 -- target/ns_hotplug_stress.sh@40 -- # null_size=1018
00:14:15.896 18:01:04 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018
00:14:16.464 true
00:14:16.464 18:01:05 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3272708
00:14:16.464 18:01:05 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:17.032 18:01:05 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:14:17.032 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
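The cycle traced above is the heart of the ns_hotplug_stress test: each pass checks with kill -0 that the target process (PID 3272708) is still alive, hot-removes namespace 1 from subsystem nqn.2016-06.io.spdk:cnode1, re-adds the Delay0 bdev as that namespace, then bumps null_size and resizes the NULL1 null bdev to the new value. The bare "true" lines are the RPC responses to bdev_null_resize, and the "Message suppressed 999 times" notices are rate-limited repeats of the read-error completions generated while the namespace is detached. A minimal bash sketch of the loop, assuming a running nvmf target; the rpc.py path, RPC names, NQN, and bdev names are taken from the trace, while the target_pid variable and the loop bound are illustrative:

    #!/usr/bin/env bash
    # Illustrative sketch of the loop traced at ns_hotplug_stress.sh@35-@41;
    # not the actual test script.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    target_pid=3272708    # illustrative: PID of the nvmf target under test
    null_size=1002

    while [ "$null_size" -le 1019 ]; do
        kill -0 "$target_pid" || exit 1               # stop if the target died
        "$rpc" nvmf_subsystem_remove_ns "$nqn" 1      # hot-remove namespace 1
        "$rpc" nvmf_subsystem_add_ns "$nqn" Delay0    # hot-add the Delay0 bdev back
        "$rpc" bdev_null_resize NULL1 "$null_size"    # grow NULL1; prints "true" on success
        null_size=$((null_size + 1))
    done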
00:14:17.309 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:14:17.309 [2024-04-15 18:01:06.043915] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... the same ctrlr_bdev.c read error, differing only in its microsecond timestamp, repeats continuously through 18:01:06.061424; the duplicate entries are elided ...]
00:14:17.312 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
[... identical read errors continue, 18:01:06.062388 through 18:01:06.064929 ...]
00:14:17.312 18:01:06 -- target/ns_hotplug_stress.sh@40 -- # null_size=1019
00:14:17.312 18:01:06 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019
[... identical read errors, interleaved with the two trace entries above, continue through 18:01:06.072749 at the end of this excerpt ...]
[2024-04-15 18:01:06.072804] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.313 [2024-04-15 18:01:06.072863] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.313 [2024-04-15 18:01:06.072927] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.313 [2024-04-15 18:01:06.072986] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.313 [2024-04-15 18:01:06.073074] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.313 [2024-04-15 18:01:06.073142] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.313 [2024-04-15 18:01:06.073202] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.313 [2024-04-15 18:01:06.073266] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.313 [2024-04-15 18:01:06.073331] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.313 [2024-04-15 18:01:06.073428] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.313 [2024-04-15 18:01:06.073492] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.313 [2024-04-15 18:01:06.073553] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.313 [2024-04-15 18:01:06.073608] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.313 [2024-04-15 18:01:06.073664] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.313 [2024-04-15 18:01:06.073723] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.313 [2024-04-15 18:01:06.073782] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.313 [2024-04-15 18:01:06.073843] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.313 [2024-04-15 18:01:06.073909] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.313 [2024-04-15 18:01:06.073969] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.313 [2024-04-15 18:01:06.074029] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.313 [2024-04-15 18:01:06.074119] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.313 [2024-04-15 18:01:06.074182] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.313 [2024-04-15 18:01:06.074244] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.313 [2024-04-15 18:01:06.074305] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.313 [2024-04-15 18:01:06.074379] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.314 [2024-04-15 18:01:06.074459] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:17.314 [2024-04-15 18:01:06.074508] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.314 [2024-04-15 18:01:06.074564] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.314 [2024-04-15 18:01:06.074620] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.314 [2024-04-15 18:01:06.074677] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.314 [2024-04-15 18:01:06.074736] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.314 [2024-04-15 18:01:06.074792] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.314 [2024-04-15 18:01:06.074855] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.314 [2024-04-15 18:01:06.074917] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.314 [2024-04-15 18:01:06.074978] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.314 [2024-04-15 18:01:06.075033] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.314 [2024-04-15 18:01:06.075114] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.314 [2024-04-15 18:01:06.075175] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.314 [2024-04-15 18:01:06.075237] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.314 [2024-04-15 18:01:06.075298] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.314 [2024-04-15 18:01:06.075370] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.314 [2024-04-15 18:01:06.075447] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.314 [2024-04-15 18:01:06.075506] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.314 [2024-04-15 18:01:06.075559] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.314 [2024-04-15 18:01:06.075616] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.314 [2024-04-15 18:01:06.075668] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.314 [2024-04-15 18:01:06.075726] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.314 [2024-04-15 18:01:06.075783] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.314 [2024-04-15 18:01:06.075843] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.314 [2024-04-15 18:01:06.075900] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.314 [2024-04-15 18:01:06.075965] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.314 [2024-04-15 18:01:06.076024] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.314 [2024-04-15 18:01:06.076118] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.314 [2024-04-15 18:01:06.076186] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.314 [2024-04-15 18:01:06.076246] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.314 [2024-04-15 18:01:06.076309] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.314 [2024-04-15 18:01:06.076388] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.314 [2024-04-15 18:01:06.076464] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.314 [2024-04-15 18:01:06.076521] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.314 [2024-04-15 18:01:06.077031] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.314 [2024-04-15 18:01:06.077117] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.314 [2024-04-15 18:01:06.077181] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.314 [2024-04-15 18:01:06.077241] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.314 [2024-04-15 18:01:06.077305] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.314 [2024-04-15 18:01:06.077395] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.314 [2024-04-15 18:01:06.077466] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.314 [2024-04-15 18:01:06.077518] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.314 [2024-04-15 18:01:06.077576] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.314 [2024-04-15 18:01:06.077633] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.314 [2024-04-15 18:01:06.077688] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.314 [2024-04-15 18:01:06.077743] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.314 [2024-04-15 18:01:06.077800] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.314 [2024-04-15 18:01:06.077855] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.314 [2024-04-15 18:01:06.077910] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.314 [2024-04-15 18:01:06.077963] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.314 [2024-04-15 18:01:06.078018] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.314 [2024-04-15 18:01:06.078095] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.314 
[2024-04-15 18:01:06.078156] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.314 [2024-04-15 18:01:06.078221] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.314 [2024-04-15 18:01:06.078284] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.314 [2024-04-15 18:01:06.078360] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.314 [2024-04-15 18:01:06.078419] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.314 [2024-04-15 18:01:06.078488] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.314 [2024-04-15 18:01:06.078550] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.314 [2024-04-15 18:01:06.078609] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.314 [2024-04-15 18:01:06.078672] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.314 [2024-04-15 18:01:06.078728] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.314 [2024-04-15 18:01:06.078783] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.314 [2024-04-15 18:01:06.078843] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.314 [2024-04-15 18:01:06.078901] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.314 [2024-04-15 18:01:06.078955] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.314 [2024-04-15 18:01:06.079002] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.314 [2024-04-15 18:01:06.079072] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.314 [2024-04-15 18:01:06.079122] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.314 [2024-04-15 18:01:06.079170] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.314 [2024-04-15 18:01:06.079229] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.314 [2024-04-15 18:01:06.079288] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.314 [2024-04-15 18:01:06.079360] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.314 [2024-04-15 18:01:06.079417] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.314 [2024-04-15 18:01:06.079476] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.314 [2024-04-15 18:01:06.079532] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.314 [2024-04-15 18:01:06.079591] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.314 [2024-04-15 18:01:06.079652] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:17.314 [2024-04-15 18:01:06.079708] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.314 [2024-04-15 18:01:06.079768] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.315 [2024-04-15 18:01:06.079824] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.315 [2024-04-15 18:01:06.079881] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.315 [2024-04-15 18:01:06.079941] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.315 [2024-04-15 18:01:06.079997] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.315 [2024-04-15 18:01:06.080080] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.315 [2024-04-15 18:01:06.080146] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.315 [2024-04-15 18:01:06.080208] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.315 [2024-04-15 18:01:06.080273] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.315 [2024-04-15 18:01:06.080357] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.315 [2024-04-15 18:01:06.080433] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.315 [2024-04-15 18:01:06.080495] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.315 [2024-04-15 18:01:06.080554] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.315 [2024-04-15 18:01:06.080613] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.315 [2024-04-15 18:01:06.080680] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.315 [2024-04-15 18:01:06.080745] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.315 [2024-04-15 18:01:06.080802] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.315 [2024-04-15 18:01:06.080866] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.315 [2024-04-15 18:01:06.081447] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.315 [2024-04-15 18:01:06.081517] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.315 [2024-04-15 18:01:06.081575] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.315 [2024-04-15 18:01:06.081633] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.315 [2024-04-15 18:01:06.081708] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.315 [2024-04-15 18:01:06.081765] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.315 [2024-04-15 18:01:06.081818] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.315 [2024-04-15 18:01:06.081867] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.315 [2024-04-15 18:01:06.081926] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.315 [2024-04-15 18:01:06.081989] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.315 [2024-04-15 18:01:06.082076] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.315 [2024-04-15 18:01:06.082140] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.315 [2024-04-15 18:01:06.082195] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.315 [2024-04-15 18:01:06.082258] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.315 [2024-04-15 18:01:06.082325] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.315 [2024-04-15 18:01:06.082392] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.315 [2024-04-15 18:01:06.082461] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.315 [2024-04-15 18:01:06.082520] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.315 [2024-04-15 18:01:06.082575] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.315 [2024-04-15 18:01:06.082636] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.315 [2024-04-15 18:01:06.082690] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.315 [2024-04-15 18:01:06.082755] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.315 [2024-04-15 18:01:06.082817] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.315 [2024-04-15 18:01:06.082883] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.315 [2024-04-15 18:01:06.082945] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.315 [2024-04-15 18:01:06.083002] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.315 [2024-04-15 18:01:06.083089] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.315 [2024-04-15 18:01:06.083152] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.315 [2024-04-15 18:01:06.083214] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.315 [2024-04-15 18:01:06.083278] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.315 [2024-04-15 18:01:06.083355] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.315 [2024-04-15 18:01:06.083419] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.315 
[2024-04-15 18:01:06.083476] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.315 [2024-04-15 18:01:06.083534] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.315 [2024-04-15 18:01:06.083594] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.315 [2024-04-15 18:01:06.083650] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.315 [2024-04-15 18:01:06.083710] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.315 [2024-04-15 18:01:06.083767] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.315 [2024-04-15 18:01:06.083824] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.315 [2024-04-15 18:01:06.083881] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.315 [2024-04-15 18:01:06.083941] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.315 [2024-04-15 18:01:06.083997] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.315 [2024-04-15 18:01:06.084080] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.315 [2024-04-15 18:01:06.084141] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.315 [2024-04-15 18:01:06.084195] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.315 [2024-04-15 18:01:06.084255] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.315 [2024-04-15 18:01:06.084318] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.315 [2024-04-15 18:01:06.084389] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.315 [2024-04-15 18:01:06.084448] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.315 [2024-04-15 18:01:06.084509] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.315 [2024-04-15 18:01:06.084573] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.315 [2024-04-15 18:01:06.084632] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.315 [2024-04-15 18:01:06.084695] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.315 [2024-04-15 18:01:06.084744] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.315 [2024-04-15 18:01:06.084797] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.315 [2024-04-15 18:01:06.084852] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.315 [2024-04-15 18:01:06.084904] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.315 [2024-04-15 18:01:06.084964] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:17.315 [2024-04-15 18:01:06.085018] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.315 [2024-04-15 18:01:06.085109] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.315 [2024-04-15 18:01:06.085171] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.315 [2024-04-15 18:01:06.085228] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.315 [2024-04-15 18:01:06.085293] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.315 [2024-04-15 18:01:06.085366] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.315 [2024-04-15 18:01:06.085882] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.315 [2024-04-15 18:01:06.085944] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.315 [2024-04-15 18:01:06.086000] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.315 [2024-04-15 18:01:06.086081] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.315 [2024-04-15 18:01:06.086149] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.315 [2024-04-15 18:01:06.086210] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.315 [2024-04-15 18:01:06.086271] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.315 [2024-04-15 18:01:06.086332] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.315 [2024-04-15 18:01:06.086413] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.315 [2024-04-15 18:01:06.086477] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.315 [2024-04-15 18:01:06.086546] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.315 [2024-04-15 18:01:06.086610] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.315 [2024-04-15 18:01:06.086669] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.315 [2024-04-15 18:01:06.086724] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.315 [2024-04-15 18:01:06.086782] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.316 [2024-04-15 18:01:06.086838] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.316 [2024-04-15 18:01:06.086910] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.316 [2024-04-15 18:01:06.086970] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.316 [2024-04-15 18:01:06.087022] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.316 [2024-04-15 18:01:06.087108] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.316 [2024-04-15 18:01:06.087174] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.316 [2024-04-15 18:01:06.087240] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.316 [2024-04-15 18:01:06.087300] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.316 [2024-04-15 18:01:06.087374] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.316 [2024-04-15 18:01:06.087452] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.316 [2024-04-15 18:01:06.087512] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.316 [2024-04-15 18:01:06.087560] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.316 [2024-04-15 18:01:06.087617] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.316 [2024-04-15 18:01:06.087679] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.316 [2024-04-15 18:01:06.087738] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.316 [2024-04-15 18:01:06.087802] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.316 [2024-04-15 18:01:06.087857] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.316 [2024-04-15 18:01:06.087912] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.316 [2024-04-15 18:01:06.087960] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.316 [2024-04-15 18:01:06.088018] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.316 [2024-04-15 18:01:06.088107] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.316 [2024-04-15 18:01:06.088170] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.316 [2024-04-15 18:01:06.088228] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.316 [2024-04-15 18:01:06.088284] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.316 [2024-04-15 18:01:06.088342] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.316 [2024-04-15 18:01:06.088414] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.316 [2024-04-15 18:01:06.088484] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.316 [2024-04-15 18:01:06.088541] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.316 [2024-04-15 18:01:06.088592] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.316 [2024-04-15 18:01:06.088649] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.316 
[2024-04-15 18:01:06.088704] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.316 [2024-04-15 18:01:06.088758] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.316 [2024-04-15 18:01:06.088812] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.316 [2024-04-15 18:01:06.088870] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.316 [2024-04-15 18:01:06.088930] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.316 [2024-04-15 18:01:06.088987] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.316 [2024-04-15 18:01:06.089066] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.316 [2024-04-15 18:01:06.089134] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.316 [2024-04-15 18:01:06.089197] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.316 [2024-04-15 18:01:06.089261] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.316 [2024-04-15 18:01:06.089323] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.316 [2024-04-15 18:01:06.089397] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.316 [2024-04-15 18:01:06.089461] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.316 [2024-04-15 18:01:06.089523] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.316 [2024-04-15 18:01:06.089583] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.316 [2024-04-15 18:01:06.089648] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.316 [2024-04-15 18:01:06.089706] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.316 [2024-04-15 18:01:06.089764] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.316 [2024-04-15 18:01:06.090316] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.316 [2024-04-15 18:01:06.090406] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.316 [2024-04-15 18:01:06.090461] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.316 [2024-04-15 18:01:06.090516] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.316 [2024-04-15 18:01:06.090574] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.316 [2024-04-15 18:01:06.090633] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.316 [2024-04-15 18:01:06.090692] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.316 [2024-04-15 18:01:06.090745] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:17.316 [2024-04-15 18:01:06.090799] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.316 [2024-04-15 18:01:06.090852] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.316 [2024-04-15 18:01:06.090910] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.316 [2024-04-15 18:01:06.090966] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.316 [2024-04-15 18:01:06.091019] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.316 [2024-04-15 18:01:06.091106] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.316 [2024-04-15 18:01:06.091171] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.316 [2024-04-15 18:01:06.091237] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.316 [2024-04-15 18:01:06.091300] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.316 [2024-04-15 18:01:06.091376] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.316 [2024-04-15 18:01:06.091438] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.316 [2024-04-15 18:01:06.091489] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.316 [2024-04-15 18:01:06.091543] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.316 [2024-04-15 18:01:06.091596] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.316 [2024-04-15 18:01:06.091658] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.316 [2024-04-15 18:01:06.091713] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.316 [2024-04-15 18:01:06.091768] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.316 [2024-04-15 18:01:06.091831] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.316 [2024-04-15 18:01:06.091889] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.316 [2024-04-15 18:01:06.091942] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.316 [2024-04-15 18:01:06.092000] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.316 [2024-04-15 18:01:06.092087] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.316 [2024-04-15 18:01:06.092158] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.316 [2024-04-15 18:01:06.092221] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.316 [2024-04-15 18:01:06.092286] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.316 [2024-04-15 18:01:06.092344] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.316 [2024-04-15 18:01:06.092433] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.316 [2024-04-15 18:01:06.092491] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.316 [2024-04-15 18:01:06.092543] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.316 [2024-04-15 18:01:06.092603] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.316 [2024-04-15 18:01:06.092661] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.316 [2024-04-15 18:01:06.092722] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.316 [2024-04-15 18:01:06.092780] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.316 [2024-04-15 18:01:06.092839] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.316 [2024-04-15 18:01:06.092896] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.316 [2024-04-15 18:01:06.092957] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.316 [2024-04-15 18:01:06.093015] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.316 [2024-04-15 18:01:06.093116] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.316 [2024-04-15 18:01:06.093186] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.316 [2024-04-15 18:01:06.093248] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.316 [2024-04-15 18:01:06.093310] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.316 [2024-04-15 18:01:06.093388] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.317 [2024-04-15 18:01:06.093464] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.317 [2024-04-15 18:01:06.093525] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.317 [2024-04-15 18:01:06.093588] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.317 [2024-04-15 18:01:06.093641] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.317 [2024-04-15 18:01:06.093700] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.317 [2024-04-15 18:01:06.093762] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.317 [2024-04-15 18:01:06.093819] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.317 [2024-04-15 18:01:06.093879] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.317 [2024-04-15 18:01:06.093939] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.317 
[2024-04-15 18:01:06.093996] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.317 [2024-04-15 18:01:06.094081] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.317 [2024-04-15 18:01:06.094146] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.317 [2024-04-15 18:01:06.094213] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.317 [2024-04-15 18:01:06.094283] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.317 [2024-04-15 18:01:06.094845] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.317 [2024-04-15 18:01:06.094898] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.317 [2024-04-15 18:01:06.094954] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.317 [2024-04-15 18:01:06.095015] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.317 [2024-04-15 18:01:06.095099] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.317 [2024-04-15 18:01:06.095161] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.317 [2024-04-15 18:01:06.095221] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.317 [2024-04-15 18:01:06.095283] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.317 [2024-04-15 18:01:06.095362] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.317 [2024-04-15 18:01:06.095434] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.317 [2024-04-15 18:01:06.095491] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.317 [2024-04-15 18:01:06.095546] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.317 [2024-04-15 18:01:06.095604] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.317 [2024-04-15 18:01:06.095659] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.317 [2024-04-15 18:01:06.095716] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.317 [2024-04-15 18:01:06.095769] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.317 [2024-04-15 18:01:06.095828] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.317 [2024-04-15 18:01:06.095881] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.317 [2024-04-15 18:01:06.095928] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.317 [2024-04-15 18:01:06.095979] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.317 [2024-04-15 18:01:06.096034] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
00:14:17.317 [2024-04-15 18:01:06.096114] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:14:17.322 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
[2024-04-15 18:01:06.133510] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.322 [2024-04-15 18:01:06.133569] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.322 [2024-04-15 18:01:06.133635] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.322 [2024-04-15 18:01:06.133694] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.322 [2024-04-15 18:01:06.133754] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.322 [2024-04-15 18:01:06.133810] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.322 [2024-04-15 18:01:06.133865] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.323 [2024-04-15 18:01:06.133922] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.323 [2024-04-15 18:01:06.133980] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.323 [2024-04-15 18:01:06.134053] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.323 [2024-04-15 18:01:06.134129] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.323 [2024-04-15 18:01:06.134189] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.323 [2024-04-15 18:01:06.134254] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.323 [2024-04-15 18:01:06.134326] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.323 [2024-04-15 18:01:06.134841] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.323 [2024-04-15 18:01:06.134899] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.323 [2024-04-15 18:01:06.134961] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.323 [2024-04-15 18:01:06.135018] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.323 [2024-04-15 18:01:06.135116] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.323 [2024-04-15 18:01:06.135175] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.323 [2024-04-15 18:01:06.135236] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.323 [2024-04-15 18:01:06.135295] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.323 [2024-04-15 18:01:06.135371] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.323 [2024-04-15 18:01:06.135450] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.323 [2024-04-15 18:01:06.135509] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.323 [2024-04-15 18:01:06.135571] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:17.323 [2024-04-15 18:01:06.135626] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.323 [2024-04-15 18:01:06.135680] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.323 [2024-04-15 18:01:06.135741] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.323 [2024-04-15 18:01:06.135803] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.323 [2024-04-15 18:01:06.135864] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.323 [2024-04-15 18:01:06.135921] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.323 [2024-04-15 18:01:06.135976] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.323 [2024-04-15 18:01:06.136031] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.323 [2024-04-15 18:01:06.136114] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.323 [2024-04-15 18:01:06.136174] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.323 [2024-04-15 18:01:06.136237] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.323 [2024-04-15 18:01:06.136299] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.323 [2024-04-15 18:01:06.136378] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.323 [2024-04-15 18:01:06.136444] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.323 [2024-04-15 18:01:06.136503] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.323 [2024-04-15 18:01:06.136571] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.323 [2024-04-15 18:01:06.136633] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.323 [2024-04-15 18:01:06.136697] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.323 [2024-04-15 18:01:06.136762] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.323 [2024-04-15 18:01:06.136820] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.323 [2024-04-15 18:01:06.136874] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.323 [2024-04-15 18:01:06.136931] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.323 [2024-04-15 18:01:06.136982] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.323 [2024-04-15 18:01:06.137055] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.323 [2024-04-15 18:01:06.137129] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.323 [2024-04-15 18:01:06.137184] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.323 [2024-04-15 18:01:06.137235] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.323 [2024-04-15 18:01:06.137294] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.323 [2024-04-15 18:01:06.137363] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.323 [2024-04-15 18:01:06.137421] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.323 [2024-04-15 18:01:06.137475] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.323 [2024-04-15 18:01:06.137533] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.323 [2024-04-15 18:01:06.137592] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.323 [2024-04-15 18:01:06.137647] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.323 [2024-04-15 18:01:06.137705] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.323 [2024-04-15 18:01:06.137765] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.323 [2024-04-15 18:01:06.137827] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.323 [2024-04-15 18:01:06.137891] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.323 [2024-04-15 18:01:06.137957] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.323 [2024-04-15 18:01:06.138016] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.323 [2024-04-15 18:01:06.138100] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.323 [2024-04-15 18:01:06.138160] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.323 [2024-04-15 18:01:06.138217] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.323 [2024-04-15 18:01:06.138274] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.323 [2024-04-15 18:01:06.138332] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.323 [2024-04-15 18:01:06.138406] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.323 [2024-04-15 18:01:06.138466] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.323 [2024-04-15 18:01:06.138526] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.323 [2024-04-15 18:01:06.138580] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.323 [2024-04-15 18:01:06.138637] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.323 [2024-04-15 18:01:06.138693] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.323 
[2024-04-15 18:01:06.138759] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.323 [2024-04-15 18:01:06.139713] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.323 [2024-04-15 18:01:06.139776] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.323 [2024-04-15 18:01:06.139832] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.323 [2024-04-15 18:01:06.139888] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.323 [2024-04-15 18:01:06.139945] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.323 [2024-04-15 18:01:06.140001] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.323 [2024-04-15 18:01:06.140081] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.323 [2024-04-15 18:01:06.140156] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.323 [2024-04-15 18:01:06.140220] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.323 [2024-04-15 18:01:06.140283] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.323 [2024-04-15 18:01:06.140357] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.323 [2024-04-15 18:01:06.140419] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.323 [2024-04-15 18:01:06.140474] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.323 [2024-04-15 18:01:06.140531] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.324 [2024-04-15 18:01:06.140589] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.324 [2024-04-15 18:01:06.140648] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.324 [2024-04-15 18:01:06.140719] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.324 [2024-04-15 18:01:06.140790] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.324 [2024-04-15 18:01:06.140847] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.324 [2024-04-15 18:01:06.140906] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.324 [2024-04-15 18:01:06.140965] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.324 [2024-04-15 18:01:06.141023] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.324 [2024-04-15 18:01:06.141106] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.324 [2024-04-15 18:01:06.141168] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.324 [2024-04-15 18:01:06.141224] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:17.324 [2024-04-15 18:01:06.141289] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.324 [2024-04-15 18:01:06.141347] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.324 [2024-04-15 18:01:06.141423] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.324 [2024-04-15 18:01:06.141487] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.324 [2024-04-15 18:01:06.141544] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.324 [2024-04-15 18:01:06.141603] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.324 [2024-04-15 18:01:06.141655] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.324 [2024-04-15 18:01:06.141710] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.324 [2024-04-15 18:01:06.141763] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.324 [2024-04-15 18:01:06.141828] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.324 [2024-04-15 18:01:06.141886] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.324 [2024-04-15 18:01:06.141944] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.324 [2024-04-15 18:01:06.142005] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.324 [2024-04-15 18:01:06.142082] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.324 [2024-04-15 18:01:06.142141] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.324 [2024-04-15 18:01:06.142199] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.324 [2024-04-15 18:01:06.142258] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.324 [2024-04-15 18:01:06.142321] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.324 [2024-04-15 18:01:06.142397] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.324 [2024-04-15 18:01:06.142456] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.324 [2024-04-15 18:01:06.142514] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.324 [2024-04-15 18:01:06.142571] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.324 [2024-04-15 18:01:06.142635] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.324 [2024-04-15 18:01:06.142701] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.324 [2024-04-15 18:01:06.142763] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.324 [2024-04-15 18:01:06.142820] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.324 [2024-04-15 18:01:06.142875] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.324 [2024-04-15 18:01:06.142929] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.324 [2024-04-15 18:01:06.142984] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.324 [2024-04-15 18:01:06.143065] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.324 [2024-04-15 18:01:06.143127] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.324 [2024-04-15 18:01:06.143190] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.324 [2024-04-15 18:01:06.143249] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.324 [2024-04-15 18:01:06.143308] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.324 [2024-04-15 18:01:06.143382] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.324 [2024-04-15 18:01:06.143456] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.324 [2024-04-15 18:01:06.143513] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.324 [2024-04-15 18:01:06.143574] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.324 [2024-04-15 18:01:06.143634] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.324 [2024-04-15 18:01:06.143828] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.324 [2024-04-15 18:01:06.143891] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.324 [2024-04-15 18:01:06.143946] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.324 [2024-04-15 18:01:06.144001] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.324 [2024-04-15 18:01:06.144084] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.324 [2024-04-15 18:01:06.144142] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.324 [2024-04-15 18:01:06.144199] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.324 [2024-04-15 18:01:06.144252] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.324 [2024-04-15 18:01:06.144314] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.324 [2024-04-15 18:01:06.144383] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.324 [2024-04-15 18:01:06.144440] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.324 [2024-04-15 18:01:06.144497] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.324 
[2024-04-15 18:01:06.144557] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.324 [2024-04-15 18:01:06.144614] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.324 [2024-04-15 18:01:06.144669] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.324 [2024-04-15 18:01:06.144724] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.324 [2024-04-15 18:01:06.145165] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.324 [2024-04-15 18:01:06.145230] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.324 [2024-04-15 18:01:06.145292] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.324 [2024-04-15 18:01:06.145379] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.324 [2024-04-15 18:01:06.145438] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.324 [2024-04-15 18:01:06.145495] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.324 [2024-04-15 18:01:06.145553] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.324 [2024-04-15 18:01:06.145607] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.324 [2024-04-15 18:01:06.145664] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.324 [2024-04-15 18:01:06.145724] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.324 [2024-04-15 18:01:06.145782] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.324 [2024-04-15 18:01:06.145840] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.324 [2024-04-15 18:01:06.145895] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.324 [2024-04-15 18:01:06.145958] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.324 [2024-04-15 18:01:06.146012] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.324 [2024-04-15 18:01:06.146091] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.324 [2024-04-15 18:01:06.146151] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.324 [2024-04-15 18:01:06.146214] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.324 [2024-04-15 18:01:06.146280] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.324 [2024-04-15 18:01:06.146341] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.324 [2024-04-15 18:01:06.146401] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.324 [2024-04-15 18:01:06.146473] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:17.324 [2024-04-15 18:01:06.146522] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.324 [2024-04-15 18:01:06.146574] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.324 [2024-04-15 18:01:06.146628] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.324 [2024-04-15 18:01:06.146686] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.324 [2024-04-15 18:01:06.146746] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.324 [2024-04-15 18:01:06.146805] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.324 [2024-04-15 18:01:06.146862] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.324 [2024-04-15 18:01:06.146923] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.324 [2024-04-15 18:01:06.146976] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.324 [2024-04-15 18:01:06.147032] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.324 [2024-04-15 18:01:06.147126] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.325 [2024-04-15 18:01:06.147188] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.325 [2024-04-15 18:01:06.147252] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.325 [2024-04-15 18:01:06.147315] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.325 [2024-04-15 18:01:06.147391] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.325 [2024-04-15 18:01:06.147466] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.325 [2024-04-15 18:01:06.147526] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.325 [2024-04-15 18:01:06.147587] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.325 [2024-04-15 18:01:06.147651] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.325 [2024-04-15 18:01:06.147710] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.325 [2024-04-15 18:01:06.147769] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.325 [2024-04-15 18:01:06.147828] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.325 [2024-04-15 18:01:06.147886] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.325 [2024-04-15 18:01:06.147946] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.325 [2024-04-15 18:01:06.148005] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.325 [2024-04-15 18:01:06.148088] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.325 [2024-04-15 18:01:06.148149] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.325 [2024-04-15 18:01:06.148206] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.325 [2024-04-15 18:01:06.148259] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.325 [2024-04-15 18:01:06.148313] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.325 [2024-04-15 18:01:06.148386] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.325 [2024-04-15 18:01:06.148442] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.325 [2024-04-15 18:01:06.148502] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.325 [2024-04-15 18:01:06.148558] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.325 [2024-04-15 18:01:06.148611] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.325 [2024-04-15 18:01:06.148659] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.325 [2024-04-15 18:01:06.148713] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.325 [2024-04-15 18:01:06.148771] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.325 [2024-04-15 18:01:06.148833] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.325 [2024-04-15 18:01:06.148887] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.325 [2024-04-15 18:01:06.148938] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.325 [2024-04-15 18:01:06.148997] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.325 [2024-04-15 18:01:06.149400] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.325 [2024-04-15 18:01:06.149461] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.325 [2024-04-15 18:01:06.149519] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.325 [2024-04-15 18:01:06.149578] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.325 [2024-04-15 18:01:06.149647] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.325 [2024-04-15 18:01:06.149711] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.325 [2024-04-15 18:01:06.149772] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.325 [2024-04-15 18:01:06.149834] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.325 [2024-04-15 18:01:06.149897] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.325 
[2024-04-15 18:01:06.149960] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.325 [2024-04-15 18:01:06.150020] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.325 [2024-04-15 18:01:06.150109] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.325 [2024-04-15 18:01:06.150171] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.325 [2024-04-15 18:01:06.150230] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.325 [2024-04-15 18:01:06.150288] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.325 [2024-04-15 18:01:06.150349] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.325 [2024-04-15 18:01:06.150422] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.325 [2024-04-15 18:01:06.150479] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.325 [2024-04-15 18:01:06.150535] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.325 [2024-04-15 18:01:06.150597] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.325 [2024-04-15 18:01:06.150655] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.325 [2024-04-15 18:01:06.150711] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.325 [2024-04-15 18:01:06.150769] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.325 [2024-04-15 18:01:06.150824] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.325 [2024-04-15 18:01:06.150884] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.325 [2024-04-15 18:01:06.150943] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.325 [2024-04-15 18:01:06.151000] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.325 [2024-04-15 18:01:06.151087] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.325 [2024-04-15 18:01:06.151147] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.325 [2024-04-15 18:01:06.151206] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.325 [2024-04-15 18:01:06.151276] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.325 [2024-04-15 18:01:06.151357] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.325 [2024-04-15 18:01:06.151435] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.325 [2024-04-15 18:01:06.151486] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.325 [2024-04-15 18:01:06.151547] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:17.325 [2024-04-15 18:01:06.151608] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.325 [2024-04-15 18:01:06.151665] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.325 [2024-04-15 18:01:06.151727] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.325 [2024-04-15 18:01:06.151788] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.325 [2024-04-15 18:01:06.151843] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.325 [2024-04-15 18:01:06.151900] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.325 [2024-04-15 18:01:06.151955] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.325 [2024-04-15 18:01:06.152010] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.325 [2024-04-15 18:01:06.152090] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.325 [2024-04-15 18:01:06.152150] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.325 [2024-04-15 18:01:06.152205] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.325 [2024-04-15 18:01:06.152267] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.325 [2024-04-15 18:01:06.152328] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.325 [2024-04-15 18:01:06.152406] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.325 [2024-04-15 18:01:06.152468] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.325 [2024-04-15 18:01:06.152527] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.325 [2024-04-15 18:01:06.152587] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.325 [2024-04-15 18:01:06.152645] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.325 [2024-04-15 18:01:06.152705] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.325 [2024-04-15 18:01:06.152763] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.325 [2024-04-15 18:01:06.152825] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.325 [2024-04-15 18:01:06.152882] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.325 [2024-04-15 18:01:06.152940] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.325 [2024-04-15 18:01:06.152994] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.325 [2024-04-15 18:01:06.153090] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.325 [2024-04-15 18:01:06.153154] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.325 [2024-04-15 18:01:06.153217] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.325 [2024-04-15 18:01:06.153278] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.325 [2024-04-15 18:01:06.153839] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.325 [2024-04-15 18:01:06.153899] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.325 [2024-04-15 18:01:06.153948] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.325 [2024-04-15 18:01:06.154007] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.326 [2024-04-15 18:01:06.154085] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.326 [2024-04-15 18:01:06.154153] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.326 [2024-04-15 18:01:06.154216] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.326 [2024-04-15 18:01:06.154280] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.326 [2024-04-15 18:01:06.154353] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.326 [2024-04-15 18:01:06.154420] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.326 [2024-04-15 18:01:06.154480] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.326 [2024-04-15 18:01:06.154531] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.326 [2024-04-15 18:01:06.154584] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.326 [2024-04-15 18:01:06.154637] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.326 [2024-04-15 18:01:06.154696] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.326 [2024-04-15 18:01:06.154749] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.326 [2024-04-15 18:01:06.154805] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.326 [2024-04-15 18:01:06.154859] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.326 [2024-04-15 18:01:06.154916] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.326 [2024-04-15 18:01:06.154973] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.326 [2024-04-15 18:01:06.155030] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.326 [2024-04-15 18:01:06.155118] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.326 [2024-04-15 18:01:06.155182] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.326 
[2024-04-15 18:01:06.155245] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.326 [2024-04-15 18:01:06.155304] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.326 [2024-04-15 18:01:06.155370] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.326 [2024-04-15 18:01:06.155443] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.326 [2024-04-15 18:01:06.155512] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.326 [2024-04-15 18:01:06.155570] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.326 [2024-04-15 18:01:06.155628] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.326 [2024-04-15 18:01:06.155684] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.326 [2024-04-15 18:01:06.155742] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.326 [2024-04-15 18:01:06.155799] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.326 [2024-04-15 18:01:06.155855] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.326 [2024-04-15 18:01:06.155917] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.326 [2024-04-15 18:01:06.155976] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.326 [2024-04-15 18:01:06.156039] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.326 [2024-04-15 18:01:06.156132] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.326 [2024-04-15 18:01:06.156197] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.326 [2024-04-15 18:01:06.156265] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.326 [2024-04-15 18:01:06.156326] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.326 [2024-04-15 18:01:06.156399] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.326 [2024-04-15 18:01:06.156465] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.326 [2024-04-15 18:01:06.156536] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.326 [2024-04-15 18:01:06.156599] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.326 [2024-04-15 18:01:06.156665] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.326 [2024-04-15 18:01:06.156727] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.326 [2024-04-15 18:01:06.156789] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.326 [2024-04-15 18:01:06.156848] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:17.326 [2024-04-15 18:01:06.156905] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:14:17.326 (last message repeated several hundred times; only the wall-clock timestamps, 18:01:06.1569xx through 18:01:06.1951xx, differ)
[2024-04-15 18:01:06.192792] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.332 [2024-04-15 18:01:06.192856] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.332 [2024-04-15 18:01:06.192921] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.332 [2024-04-15 18:01:06.192979] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.332 [2024-04-15 18:01:06.193035] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.332 [2024-04-15 18:01:06.193904] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.332 [2024-04-15 18:01:06.193971] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.332 [2024-04-15 18:01:06.194030] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.332 [2024-04-15 18:01:06.194117] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.332 [2024-04-15 18:01:06.194176] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.332 [2024-04-15 18:01:06.194237] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.332 [2024-04-15 18:01:06.194305] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.332 [2024-04-15 18:01:06.194363] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.332 [2024-04-15 18:01:06.194440] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.332 [2024-04-15 18:01:06.194501] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.332 [2024-04-15 18:01:06.194559] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.332 [2024-04-15 18:01:06.194614] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.332 [2024-04-15 18:01:06.194669] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.332 [2024-04-15 18:01:06.194729] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.332 [2024-04-15 18:01:06.194787] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.332 [2024-04-15 18:01:06.194850] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.332 [2024-04-15 18:01:06.194905] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.332 [2024-04-15 18:01:06.194964] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.332 [2024-04-15 18:01:06.195022] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.332 [2024-04-15 18:01:06.195111] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.332 [2024-04-15 18:01:06.195181] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:17.332 [2024-04-15 18:01:06.195246] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.332 [2024-04-15 18:01:06.195306] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.332 [2024-04-15 18:01:06.195379] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.332 [2024-04-15 18:01:06.195440] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.332 [2024-04-15 18:01:06.195495] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.332 [2024-04-15 18:01:06.195550] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.332 [2024-04-15 18:01:06.195614] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.332 [2024-04-15 18:01:06.195670] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.332 [2024-04-15 18:01:06.195729] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.332 [2024-04-15 18:01:06.195787] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.332 [2024-04-15 18:01:06.195845] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.332 [2024-04-15 18:01:06.195900] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.332 [2024-04-15 18:01:06.195956] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.332 [2024-04-15 18:01:06.196010] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.332 [2024-04-15 18:01:06.196089] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.332 [2024-04-15 18:01:06.196148] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.332 [2024-04-15 18:01:06.196206] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.332 [2024-04-15 18:01:06.196267] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.332 [2024-04-15 18:01:06.196329] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.332 [2024-04-15 18:01:06.196403] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.332 [2024-04-15 18:01:06.196464] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.332 [2024-04-15 18:01:06.196518] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.332 [2024-04-15 18:01:06.196573] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.332 [2024-04-15 18:01:06.196629] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.332 [2024-04-15 18:01:06.196686] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.332 [2024-04-15 18:01:06.196741] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.332 [2024-04-15 18:01:06.196798] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.332 [2024-04-15 18:01:06.196852] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.332 [2024-04-15 18:01:06.196906] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.332 [2024-04-15 18:01:06.196961] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.332 [2024-04-15 18:01:06.197017] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.332 [2024-04-15 18:01:06.197095] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.332 [2024-04-15 18:01:06.197159] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.332 [2024-04-15 18:01:06.197221] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.332 [2024-04-15 18:01:06.197282] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.332 [2024-04-15 18:01:06.197373] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.332 [2024-04-15 18:01:06.197438] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.332 [2024-04-15 18:01:06.197497] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.332 [2024-04-15 18:01:06.197560] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.332 [2024-04-15 18:01:06.197620] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.332 [2024-04-15 18:01:06.197669] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.332 [2024-04-15 18:01:06.197734] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.332 [2024-04-15 18:01:06.197790] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.332 [2024-04-15 18:01:06.197998] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.332 [2024-04-15 18:01:06.198086] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.332 [2024-04-15 18:01:06.198144] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.332 [2024-04-15 18:01:06.198192] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.332 [2024-04-15 18:01:06.198239] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.332 [2024-04-15 18:01:06.198286] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.332 [2024-04-15 18:01:06.198363] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.332 [2024-04-15 18:01:06.198419] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.332 
[2024-04-15 18:01:06.198476] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.332 [2024-04-15 18:01:06.198535] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.332 [2024-04-15 18:01:06.198590] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.332 [2024-04-15 18:01:06.198644] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.333 [2024-04-15 18:01:06.198697] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.333 [2024-04-15 18:01:06.198756] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.333 [2024-04-15 18:01:06.198808] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.333 [2024-04-15 18:01:06.198865] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.333 [2024-04-15 18:01:06.199335] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.333 [2024-04-15 18:01:06.199419] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.333 [2024-04-15 18:01:06.199490] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.333 [2024-04-15 18:01:06.199545] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.333 [2024-04-15 18:01:06.199601] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.333 [2024-04-15 18:01:06.199670] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.333 [2024-04-15 18:01:06.199728] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.333 [2024-04-15 18:01:06.199782] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.333 [2024-04-15 18:01:06.199839] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.333 [2024-04-15 18:01:06.199897] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.333 [2024-04-15 18:01:06.199956] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.333 [2024-04-15 18:01:06.200016] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.333 [2024-04-15 18:01:06.200099] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.333 [2024-04-15 18:01:06.200170] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.333 [2024-04-15 18:01:06.200234] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.333 [2024-04-15 18:01:06.200313] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.333 [2024-04-15 18:01:06.200387] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.333 [2024-04-15 18:01:06.200455] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:17.333 [2024-04-15 18:01:06.200520] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.333 [2024-04-15 18:01:06.200577] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.333 [2024-04-15 18:01:06.200636] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.333 [2024-04-15 18:01:06.200693] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.333 [2024-04-15 18:01:06.200750] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.333 [2024-04-15 18:01:06.200806] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.333 [2024-04-15 18:01:06.200867] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.333 [2024-04-15 18:01:06.200923] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.333 [2024-04-15 18:01:06.200980] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.333 [2024-04-15 18:01:06.201035] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.333 [2024-04-15 18:01:06.201116] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.333 [2024-04-15 18:01:06.201178] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.333 [2024-04-15 18:01:06.201242] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.333 [2024-04-15 18:01:06.201306] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.333 [2024-04-15 18:01:06.201363] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.333 [2024-04-15 18:01:06.201435] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.333 [2024-04-15 18:01:06.201493] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.333 [2024-04-15 18:01:06.201551] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.333 [2024-04-15 18:01:06.201611] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.333 [2024-04-15 18:01:06.201675] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.333 [2024-04-15 18:01:06.201734] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.333 [2024-04-15 18:01:06.201793] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.333 [2024-04-15 18:01:06.201848] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.333 [2024-04-15 18:01:06.201903] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.333 [2024-04-15 18:01:06.201962] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.333 [2024-04-15 18:01:06.202019] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.333 [2024-04-15 18:01:06.202102] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.333 [2024-04-15 18:01:06.202163] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.333 [2024-04-15 18:01:06.202214] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.333 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:14:17.333 [2024-04-15 18:01:06.202492] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.333 [2024-04-15 18:01:06.202552] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.333 [2024-04-15 18:01:06.202618] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.333 [2024-04-15 18:01:06.202675] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.333 [2024-04-15 18:01:06.202723] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.333 [2024-04-15 18:01:06.202779] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.333 [2024-04-15 18:01:06.202843] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.333 [2024-04-15 18:01:06.202904] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.333 [2024-04-15 18:01:06.202961] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.333 [2024-04-15 18:01:06.203012] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.333 [2024-04-15 18:01:06.203093] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.333 [2024-04-15 18:01:06.203155] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.333 [2024-04-15 18:01:06.203215] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.333 [2024-04-15 18:01:06.203274] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.333 [2024-04-15 18:01:06.203332] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.333 [2024-04-15 18:01:06.203421] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.333 [2024-04-15 18:01:06.203480] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.333 [2024-04-15 18:01:06.203539] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.333 [2024-04-15 18:01:06.203600] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.333 [2024-04-15 18:01:06.203665] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.333 [2024-04-15 18:01:06.203725] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.333 [2024-04-15 18:01:06.203782] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.333 [2024-04-15 18:01:06.203841] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.333 [2024-04-15 18:01:06.203904] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.333 [2024-04-15 18:01:06.203964] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.333 [2024-04-15 18:01:06.204026] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.333 [2024-04-15 18:01:06.204107] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.333 [2024-04-15 18:01:06.204168] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.333 [2024-04-15 18:01:06.204228] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.333 [2024-04-15 18:01:06.204290] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.333 [2024-04-15 18:01:06.204347] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.333 [2024-04-15 18:01:06.204421] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.333 [2024-04-15 18:01:06.204476] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.334 [2024-04-15 18:01:06.204537] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.334 [2024-04-15 18:01:06.204587] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.334 [2024-04-15 18:01:06.204643] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.334 [2024-04-15 18:01:06.204699] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.334 [2024-04-15 18:01:06.204763] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.334 [2024-04-15 18:01:06.204822] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.334 [2024-04-15 18:01:06.204882] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.334 [2024-04-15 18:01:06.204941] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.334 [2024-04-15 18:01:06.204995] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.334 [2024-04-15 18:01:06.205079] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.334 [2024-04-15 18:01:06.205132] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.334 [2024-04-15 18:01:06.205189] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.334 [2024-04-15 18:01:06.205247] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.334 [2024-04-15 18:01:06.205301] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.334 
[2024-04-15 18:01:06.205373] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.334 [2024-04-15 18:01:06.205431] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.334 [2024-04-15 18:01:06.205488] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.334 [2024-04-15 18:01:06.205545] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.334 [2024-04-15 18:01:06.205604] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.334 [2024-04-15 18:01:06.205661] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.334 [2024-04-15 18:01:06.205719] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.334 [2024-04-15 18:01:06.205781] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.334 [2024-04-15 18:01:06.205836] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.334 [2024-04-15 18:01:06.205893] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.334 [2024-04-15 18:01:06.205955] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.334 [2024-04-15 18:01:06.206013] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.334 [2024-04-15 18:01:06.206092] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.334 [2024-04-15 18:01:06.206154] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.334 [2024-04-15 18:01:06.206212] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.334 [2024-04-15 18:01:06.206270] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.334 [2024-04-15 18:01:06.206328] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.334 [2024-04-15 18:01:06.206555] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.334 [2024-04-15 18:01:06.206625] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.334 [2024-04-15 18:01:06.206692] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.334 [2024-04-15 18:01:06.206750] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.334 [2024-04-15 18:01:06.206810] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.334 [2024-04-15 18:01:06.206867] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.334 [2024-04-15 18:01:06.206923] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.334 [2024-04-15 18:01:06.206979] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.334 [2024-04-15 18:01:06.207034] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:17.334 [2024-04-15 18:01:06.207125] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.334 [2024-04-15 18:01:06.207190] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.334 [2024-04-15 18:01:06.207253] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.334 [2024-04-15 18:01:06.207317] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.334 [2024-04-15 18:01:06.207389] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.334 [2024-04-15 18:01:06.207466] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.334 [2024-04-15 18:01:06.207528] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.334 [2024-04-15 18:01:06.208164] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.334 [2024-04-15 18:01:06.208234] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.334 [2024-04-15 18:01:06.208292] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.334 [2024-04-15 18:01:06.208360] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.334 [2024-04-15 18:01:06.208416] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.334 [2024-04-15 18:01:06.208473] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.334 [2024-04-15 18:01:06.208529] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.334 [2024-04-15 18:01:06.208584] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.334 [2024-04-15 18:01:06.208642] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.334 [2024-04-15 18:01:06.208696] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.334 [2024-04-15 18:01:06.208749] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.334 [2024-04-15 18:01:06.208802] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.334 [2024-04-15 18:01:06.208860] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.334 [2024-04-15 18:01:06.208908] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.334 [2024-04-15 18:01:06.208964] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.334 [2024-04-15 18:01:06.209018] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.334 [2024-04-15 18:01:06.209101] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.334 [2024-04-15 18:01:06.209166] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.334 [2024-04-15 18:01:06.209229] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.334 [2024-04-15 18:01:06.209289] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.334 [2024-04-15 18:01:06.209348] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.334 [2024-04-15 18:01:06.209428] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.334 [2024-04-15 18:01:06.209490] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.334 [2024-04-15 18:01:06.209554] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.334 [2024-04-15 18:01:06.209610] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.334 [2024-04-15 18:01:06.209673] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.334 [2024-04-15 18:01:06.209735] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.334 [2024-04-15 18:01:06.209794] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.334 [2024-04-15 18:01:06.209854] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.334 [2024-04-15 18:01:06.209910] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.334 [2024-04-15 18:01:06.209967] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.334 [2024-04-15 18:01:06.210022] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.334 [2024-04-15 18:01:06.210106] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.334 [2024-04-15 18:01:06.210166] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.334 [2024-04-15 18:01:06.210225] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.334 [2024-04-15 18:01:06.210286] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.334 [2024-04-15 18:01:06.210343] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.334 [2024-04-15 18:01:06.210418] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.334 [2024-04-15 18:01:06.210478] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.334 [2024-04-15 18:01:06.210533] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.334 [2024-04-15 18:01:06.210589] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.334 [2024-04-15 18:01:06.210645] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.334 [2024-04-15 18:01:06.210706] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.334 [2024-04-15 18:01:06.210764] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.334 
[2024-04-15 18:01:06.210819] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.335 [2024-04-15 18:01:06.210876] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.335 [2024-04-15 18:01:06.210938] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.335 [2024-04-15 18:01:06.210988] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.335 [2024-04-15 18:01:06.211066] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.335 [2024-04-15 18:01:06.211129] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.335 [2024-04-15 18:01:06.211199] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.335 [2024-04-15 18:01:06.211268] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.335 [2024-04-15 18:01:06.211327] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.335 [2024-04-15 18:01:06.211398] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.335 [2024-04-15 18:01:06.211470] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.335 [2024-04-15 18:01:06.211528] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.335 [2024-04-15 18:01:06.211585] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.335 [2024-04-15 18:01:06.211638] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.335 [2024-04-15 18:01:06.211695] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.335 [2024-04-15 18:01:06.211747] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.335 [2024-04-15 18:01:06.211803] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.335 [2024-04-15 18:01:06.211860] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.335 [2024-04-15 18:01:06.211918] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.335 [2024-04-15 18:01:06.211977] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.335 [2024-04-15 18:01:06.212225] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.335 [2024-04-15 18:01:06.212288] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.335 [2024-04-15 18:01:06.212365] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.335 [2024-04-15 18:01:06.212424] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.335 [2024-04-15 18:01:06.212479] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.335 [2024-04-15 18:01:06.212534] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:17.335 [2024-04-15 18:01:06.212594] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.335 [2024-04-15 18:01:06.212651] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.335 [2024-04-15 18:01:06.212707] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.335 [2024-04-15 18:01:06.212767] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.335 [2024-04-15 18:01:06.212827] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.335 [2024-04-15 18:01:06.212883] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.335 [2024-04-15 18:01:06.212943] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.335 [2024-04-15 18:01:06.213000] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.335 [2024-04-15 18:01:06.213082] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.335 [2024-04-15 18:01:06.213160] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.335 [2024-04-15 18:01:06.213233] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.335 [2024-04-15 18:01:06.213297] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.335 [2024-04-15 18:01:06.213360] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.335 [2024-04-15 18:01:06.213436] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.335 [2024-04-15 18:01:06.213516] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.335 [2024-04-15 18:01:06.213575] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.335 [2024-04-15 18:01:06.213634] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.335 [2024-04-15 18:01:06.213694] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.335 [2024-04-15 18:01:06.213751] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.335 [2024-04-15 18:01:06.213820] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.335 [2024-04-15 18:01:06.213883] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.335 [2024-04-15 18:01:06.213942] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.335 [2024-04-15 18:01:06.214004] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.335 [2024-04-15 18:01:06.214085] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.335 [2024-04-15 18:01:06.214146] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.335 [2024-04-15 18:01:06.214203] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.335 [2024-04-15 18:01:06.214264] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.335 [2024-04-15 18:01:06.214319] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.335 [2024-04-15 18:01:06.214999] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.335 [2024-04-15 18:01:06.215088] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.335 [2024-04-15 18:01:06.215152] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.335 [2024-04-15 18:01:06.215211] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.335 [2024-04-15 18:01:06.215281] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.335 [2024-04-15 18:01:06.215365] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.335 [2024-04-15 18:01:06.215435] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.335 [2024-04-15 18:01:06.215500] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.335 [2024-04-15 18:01:06.215561] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.335 [2024-04-15 18:01:06.215618] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.335 [2024-04-15 18:01:06.215676] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.335 [2024-04-15 18:01:06.215736] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.335 [2024-04-15 18:01:06.215798] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.335 [2024-04-15 18:01:06.215851] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.335 [2024-04-15 18:01:06.215909] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.335 [2024-04-15 18:01:06.215973] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.335 [2024-04-15 18:01:06.216036] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.335 [2024-04-15 18:01:06.216122] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.335 [2024-04-15 18:01:06.216185] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.335 [2024-04-15 18:01:06.216244] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.335 [2024-04-15 18:01:06.216305] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.335 [2024-04-15 18:01:06.216381] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.335 [2024-04-15 18:01:06.216439] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.335 
[2024-04-15 18:01:06.216487] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.335 [2024-04-15 18:01:06.216540] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.335 [2024-04-15 18:01:06.216596] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.335 [2024-04-15 18:01:06.216654] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.335 [2024-04-15 18:01:06.216715] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.335 [2024-04-15 18:01:06.216776] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.335 [2024-04-15 18:01:06.216834] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.335 [2024-04-15 18:01:06.216887] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.335 [2024-04-15 18:01:06.216942] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.335 [2024-04-15 18:01:06.216996] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.335 [2024-04-15 18:01:06.217081] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.335 [2024-04-15 18:01:06.217147] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.335 [2024-04-15 18:01:06.217204] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.336 [2024-04-15 18:01:06.217263] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.336 [2024-04-15 18:01:06.217326] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.336 [2024-04-15 18:01:06.217398] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.336 [2024-04-15 18:01:06.217455] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.336 [2024-04-15 18:01:06.217509] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.336 [2024-04-15 18:01:06.217562] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.336 [2024-04-15 18:01:06.217617] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.336 [2024-04-15 18:01:06.217674] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.336 [2024-04-15 18:01:06.217726] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.336 [2024-04-15 18:01:06.217779] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.336 [2024-04-15 18:01:06.217835] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.336 [2024-04-15 18:01:06.217896] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.336 [2024-04-15 18:01:06.217955] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:17.336 [2024-04-15 18:01:06.218011] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... same *ERROR* line from ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd repeated several hundred times with advancing timestamps ([2024-04-15 18:01:06.218091] through [2024-04-15 18:01:06.255145]); duplicate lines elided ...]
00:14:17.625 [2024-04-15 18:01:06.255203] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:14:17.625 [2024-04-15 18:01:06.255266] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.625 [2024-04-15 18:01:06.255330] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.626 [2024-04-15 18:01:06.255407] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.626 [2024-04-15 18:01:06.255467] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.626 [2024-04-15 18:01:06.255534] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.626 [2024-04-15 18:01:06.255586] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.626 [2024-04-15 18:01:06.255640] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.626 [2024-04-15 18:01:06.255693] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.626 [2024-04-15 18:01:06.255750] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.626 [2024-04-15 18:01:06.255804] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.626 [2024-04-15 18:01:06.255860] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.626 [2024-04-15 18:01:06.255918] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.626 [2024-04-15 18:01:06.255975] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.626 [2024-04-15 18:01:06.256033] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.626 [2024-04-15 18:01:06.256605] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.626 [2024-04-15 18:01:06.256668] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.626 [2024-04-15 18:01:06.256726] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.626 [2024-04-15 18:01:06.256783] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.626 [2024-04-15 18:01:06.256838] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.626 [2024-04-15 18:01:06.256897] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.626 [2024-04-15 18:01:06.256965] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.626 [2024-04-15 18:01:06.257024] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.626 [2024-04-15 18:01:06.257104] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.626 [2024-04-15 18:01:06.257165] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.626 [2024-04-15 18:01:06.257226] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.626 [2024-04-15 18:01:06.257284] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.626 [2024-04-15 18:01:06.257346] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.626 [2024-04-15 18:01:06.257417] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.626 [2024-04-15 18:01:06.257475] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.626 [2024-04-15 18:01:06.257529] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.626 [2024-04-15 18:01:06.257586] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.626 [2024-04-15 18:01:06.257646] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.626 [2024-04-15 18:01:06.257703] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.626 [2024-04-15 18:01:06.257760] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.626 [2024-04-15 18:01:06.257821] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.626 [2024-04-15 18:01:06.257876] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.626 [2024-04-15 18:01:06.257939] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.626 [2024-04-15 18:01:06.258000] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.626 [2024-04-15 18:01:06.258084] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.626 [2024-04-15 18:01:06.258146] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.626 [2024-04-15 18:01:06.258209] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.626 [2024-04-15 18:01:06.258273] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.626 [2024-04-15 18:01:06.258336] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.626 [2024-04-15 18:01:06.258424] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.626 [2024-04-15 18:01:06.258482] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.626 [2024-04-15 18:01:06.258543] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.626 [2024-04-15 18:01:06.258602] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.626 [2024-04-15 18:01:06.258668] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.626 [2024-04-15 18:01:06.258734] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.626 [2024-04-15 18:01:06.258792] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.626 [2024-04-15 18:01:06.258857] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.626 
[2024-04-15 18:01:06.258915] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.626 [2024-04-15 18:01:06.258972] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.626 [2024-04-15 18:01:06.259035] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.626 [2024-04-15 18:01:06.259118] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.626 [2024-04-15 18:01:06.259173] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.626 [2024-04-15 18:01:06.259231] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.626 [2024-04-15 18:01:06.259291] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.626 [2024-04-15 18:01:06.259360] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.626 [2024-04-15 18:01:06.259419] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.626 [2024-04-15 18:01:06.259478] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.626 [2024-04-15 18:01:06.259542] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.626 [2024-04-15 18:01:06.259599] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.626 [2024-04-15 18:01:06.259655] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.626 [2024-04-15 18:01:06.259712] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.626 [2024-04-15 18:01:06.259775] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.626 [2024-04-15 18:01:06.259829] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.626 [2024-04-15 18:01:06.259885] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.626 [2024-04-15 18:01:06.259937] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.626 [2024-04-15 18:01:06.259991] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.626 [2024-04-15 18:01:06.260072] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.626 [2024-04-15 18:01:06.260131] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.626 [2024-04-15 18:01:06.260188] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.626 [2024-04-15 18:01:06.260254] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.626 [2024-04-15 18:01:06.260315] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.626 [2024-04-15 18:01:06.260383] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.626 [2024-04-15 18:01:06.260440] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:17.626 [2024-04-15 18:01:06.260499] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.626 [2024-04-15 18:01:06.261421] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.626 [2024-04-15 18:01:06.261483] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.627 [2024-04-15 18:01:06.261539] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.627 [2024-04-15 18:01:06.261615] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.627 [2024-04-15 18:01:06.261674] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.627 [2024-04-15 18:01:06.261735] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.627 [2024-04-15 18:01:06.261794] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.627 [2024-04-15 18:01:06.261853] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.627 [2024-04-15 18:01:06.261936] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.627 [2024-04-15 18:01:06.261997] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.627 [2024-04-15 18:01:06.262084] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.627 [2024-04-15 18:01:06.262149] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.627 [2024-04-15 18:01:06.262216] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.627 [2024-04-15 18:01:06.262277] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.627 [2024-04-15 18:01:06.262336] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.627 [2024-04-15 18:01:06.262426] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.627 [2024-04-15 18:01:06.262480] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.627 [2024-04-15 18:01:06.262531] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.627 [2024-04-15 18:01:06.262590] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.627 [2024-04-15 18:01:06.262648] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.627 [2024-04-15 18:01:06.262710] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.627 [2024-04-15 18:01:06.262771] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.627 [2024-04-15 18:01:06.262830] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.627 [2024-04-15 18:01:06.262879] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.627 [2024-04-15 18:01:06.262933] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.627 [2024-04-15 18:01:06.262987] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.627 [2024-04-15 18:01:06.263055] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.627 [2024-04-15 18:01:06.263129] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.627 [2024-04-15 18:01:06.263186] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.627 [2024-04-15 18:01:06.263248] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.627 [2024-04-15 18:01:06.263310] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.627 [2024-04-15 18:01:06.263381] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.627 [2024-04-15 18:01:06.263440] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.627 [2024-04-15 18:01:06.263496] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.627 [2024-04-15 18:01:06.263553] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.627 [2024-04-15 18:01:06.263610] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.627 [2024-04-15 18:01:06.263667] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.627 [2024-04-15 18:01:06.263719] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.627 [2024-04-15 18:01:06.263773] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.627 [2024-04-15 18:01:06.263829] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.627 [2024-04-15 18:01:06.263884] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.627 [2024-04-15 18:01:06.263949] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.627 [2024-04-15 18:01:06.264005] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.627 [2024-04-15 18:01:06.264082] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.627 [2024-04-15 18:01:06.264147] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.627 [2024-04-15 18:01:06.264210] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.627 [2024-04-15 18:01:06.264277] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.627 [2024-04-15 18:01:06.264344] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.627 [2024-04-15 18:01:06.264426] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.627 [2024-04-15 18:01:06.264488] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.627 
[2024-04-15 18:01:06.264546] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.627 [2024-04-15 18:01:06.264604] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.627 [2024-04-15 18:01:06.264673] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.627 [2024-04-15 18:01:06.264733] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.627 [2024-04-15 18:01:06.264796] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.627 [2024-04-15 18:01:06.264850] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.627 [2024-04-15 18:01:06.264907] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.627 [2024-04-15 18:01:06.264964] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.627 [2024-04-15 18:01:06.265021] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.627 [2024-04-15 18:01:06.265101] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.627 [2024-04-15 18:01:06.265165] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.627 [2024-04-15 18:01:06.265228] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.627 [2024-04-15 18:01:06.265291] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.627 [2024-04-15 18:01:06.265365] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.627 [2024-04-15 18:01:06.265554] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.627 [2024-04-15 18:01:06.265623] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.627 [2024-04-15 18:01:06.265679] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.627 [2024-04-15 18:01:06.265734] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.627 [2024-04-15 18:01:06.265791] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.627 [2024-04-15 18:01:06.265846] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.627 [2024-04-15 18:01:06.265903] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.627 [2024-04-15 18:01:06.265967] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.627 [2024-04-15 18:01:06.266021] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.627 [2024-04-15 18:01:06.266099] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.627 [2024-04-15 18:01:06.266159] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.627 [2024-04-15 18:01:06.266216] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:17.627 [2024-04-15 18:01:06.266277] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.627 [2024-04-15 18:01:06.266334] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.627 [2024-04-15 18:01:06.266423] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.627 [2024-04-15 18:01:06.266482] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.627 [2024-04-15 18:01:06.266901] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.627 [2024-04-15 18:01:06.266964] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.627 [2024-04-15 18:01:06.267019] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.627 [2024-04-15 18:01:06.267102] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.627 [2024-04-15 18:01:06.267170] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.627 [2024-04-15 18:01:06.267233] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.627 [2024-04-15 18:01:06.267299] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.627 [2024-04-15 18:01:06.267375] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.627 [2024-04-15 18:01:06.267431] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.627 [2024-04-15 18:01:06.267488] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.627 [2024-04-15 18:01:06.267544] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.627 [2024-04-15 18:01:06.267603] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.627 [2024-04-15 18:01:06.267660] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.628 [2024-04-15 18:01:06.267718] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.628 [2024-04-15 18:01:06.267776] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.628 [2024-04-15 18:01:06.267830] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.628 [2024-04-15 18:01:06.267885] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.628 [2024-04-15 18:01:06.267944] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.628 [2024-04-15 18:01:06.267994] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.628 [2024-04-15 18:01:06.268072] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.628 [2024-04-15 18:01:06.268134] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.628 [2024-04-15 18:01:06.268195] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.628 [2024-04-15 18:01:06.268256] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.628 [2024-04-15 18:01:06.268313] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.628 [2024-04-15 18:01:06.268372] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.628 [2024-04-15 18:01:06.268442] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.628 [2024-04-15 18:01:06.268500] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.628 [2024-04-15 18:01:06.268550] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.628 [2024-04-15 18:01:06.268601] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.628 [2024-04-15 18:01:06.268651] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.628 [2024-04-15 18:01:06.268705] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.628 [2024-04-15 18:01:06.268759] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.628 [2024-04-15 18:01:06.268815] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.628 [2024-04-15 18:01:06.268868] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.628 [2024-04-15 18:01:06.268923] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.628 [2024-04-15 18:01:06.268981] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.628 [2024-04-15 18:01:06.269056] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.628 [2024-04-15 18:01:06.269126] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.628 [2024-04-15 18:01:06.269189] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.628 [2024-04-15 18:01:06.269248] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.628 [2024-04-15 18:01:06.269309] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.628 [2024-04-15 18:01:06.269384] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.628 [2024-04-15 18:01:06.269451] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.628 [2024-04-15 18:01:06.269515] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.628 [2024-04-15 18:01:06.269578] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.628 [2024-04-15 18:01:06.269638] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.628 [2024-04-15 18:01:06.269696] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.628 
[2024-04-15 18:01:06.269752] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.628 [2024-04-15 18:01:06.269811] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.628 [2024-04-15 18:01:06.269867] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.628 [2024-04-15 18:01:06.269927] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.628 [2024-04-15 18:01:06.269991] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.628 [2024-04-15 18:01:06.270076] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.628 [2024-04-15 18:01:06.270146] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.628 [2024-04-15 18:01:06.270209] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.628 [2024-04-15 18:01:06.270276] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.628 [2024-04-15 18:01:06.270360] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.628 [2024-04-15 18:01:06.270442] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.628 [2024-04-15 18:01:06.270505] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.628 [2024-04-15 18:01:06.270569] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.628 [2024-04-15 18:01:06.270627] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.628 [2024-04-15 18:01:06.270684] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.628 [2024-04-15 18:01:06.270749] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.628 [2024-04-15 18:01:06.270804] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.628 [2024-04-15 18:01:06.271160] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.628 [2024-04-15 18:01:06.271234] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.628 [2024-04-15 18:01:06.271295] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.628 [2024-04-15 18:01:06.271366] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.628 [2024-04-15 18:01:06.271438] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.628 [2024-04-15 18:01:06.271494] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.628 [2024-04-15 18:01:06.271547] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.628 [2024-04-15 18:01:06.271618] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.628 [2024-04-15 18:01:06.271679] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:17.628 [2024-04-15 18:01:06.271733] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.628 [2024-04-15 18:01:06.271785] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.628 Message suppressed 999 times: [2024-04-15 18:01:06.271843] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.628 Read completed with error (sct=0, sc=15) 00:14:17.628 [2024-04-15 18:01:06.271904] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.628 [2024-04-15 18:01:06.271964] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.628 [2024-04-15 18:01:06.272022] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.628 [2024-04-15 18:01:06.272114] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.628 [2024-04-15 18:01:06.272179] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.628 [2024-04-15 18:01:06.272242] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.628 [2024-04-15 18:01:06.272301] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.628 [2024-04-15 18:01:06.272363] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.628 [2024-04-15 18:01:06.272442] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.628 [2024-04-15 18:01:06.272499] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.628 [2024-04-15 18:01:06.272555] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.628 [2024-04-15 18:01:06.272614] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.628 [2024-04-15 18:01:06.272673] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.628 [2024-04-15 18:01:06.272734] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.628 [2024-04-15 18:01:06.272791] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.628 [2024-04-15 18:01:06.272851] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.628 [2024-04-15 18:01:06.272905] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.628 [2024-04-15 18:01:06.272972] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.628 [2024-04-15 18:01:06.273029] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.628 [2024-04-15 18:01:06.273110] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.628 [2024-04-15 18:01:06.273170] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.628 [2024-04-15 18:01:06.273228] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:14:17.628 [2024-04-15 18:01:06.273282] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.628 [2024-04-15 18:01:06.273340] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.628 [2024-04-15 18:01:06.273412] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.628 [2024-04-15 18:01:06.273476] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.628 [2024-04-15 18:01:06.273535] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.628 [2024-04-15 18:01:06.273584] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.628 [2024-04-15 18:01:06.273640] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.628 [2024-04-15 18:01:06.273699] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.628 [2024-04-15 18:01:06.273758] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.628 [2024-04-15 18:01:06.273819] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.628 [2024-04-15 18:01:06.273876] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.628 [2024-04-15 18:01:06.273935] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.628 [2024-04-15 18:01:06.273992] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.629 [2024-04-15 18:01:06.274071] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.629 [2024-04-15 18:01:06.274135] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.629 [2024-04-15 18:01:06.274197] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.629 [2024-04-15 18:01:06.274258] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.629 [2024-04-15 18:01:06.274323] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.629 [2024-04-15 18:01:06.274405] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.629 [2024-04-15 18:01:06.274469] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.629 [2024-04-15 18:01:06.274530] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.629 [2024-04-15 18:01:06.274585] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.629 [2024-04-15 18:01:06.274640] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.629 [2024-04-15 18:01:06.274698] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.629 [2024-04-15 18:01:06.274756] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.629 [2024-04-15 18:01:06.274823] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:14:17.629 [2024-04-15 18:01:06.274891] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.629 [2024-04-15 18:01:06.274952] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.629 [2024-04-15 18:01:06.275014] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.629 [2024-04-15 18:01:06.275905] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.629 [2024-04-15 18:01:06.275967] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.629 [2024-04-15 18:01:06.276028] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.629 [2024-04-15 18:01:06.276110] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.629 [2024-04-15 18:01:06.276171] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.629 [2024-04-15 18:01:06.276233] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.629 [2024-04-15 18:01:06.276288] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.629 [2024-04-15 18:01:06.276348] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.629 [2024-04-15 18:01:06.276422] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.629 [2024-04-15 18:01:06.276480] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.629 [2024-04-15 18:01:06.276529] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.629 [2024-04-15 18:01:06.276585] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.629 [2024-04-15 18:01:06.276641] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.629 [2024-04-15 18:01:06.276699] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.629 [2024-04-15 18:01:06.276758] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.629 [2024-04-15 18:01:06.276816] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.629 [2024-04-15 18:01:06.276881] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.629 [2024-04-15 18:01:06.276941] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.629 [2024-04-15 18:01:06.277003] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.629 [2024-04-15 18:01:06.277079] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.629 [2024-04-15 18:01:06.277154] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.629 [2024-04-15 18:01:06.277222] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.629 [2024-04-15 18:01:06.277286] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.629 [2024-04-15 18:01:06.277368] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.629 [2024-04-15 18:01:06.277445] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.629 [2024-04-15 18:01:06.277497] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.629 [2024-04-15 18:01:06.277549] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.629 [2024-04-15 18:01:06.277607] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.629 [2024-04-15 18:01:06.277664] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.629 [2024-04-15 18:01:06.277718] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.629 [2024-04-15 18:01:06.277782] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.629 [2024-04-15 18:01:06.277839] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.629 [2024-04-15 18:01:06.277893] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.629 [2024-04-15 18:01:06.277947] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.629 [2024-04-15 18:01:06.278001] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.629 [2024-04-15 18:01:06.278086] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.629 [2024-04-15 18:01:06.278148] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.629 [2024-04-15 18:01:06.278215] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.629 [2024-04-15 18:01:06.278274] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.629 [2024-04-15 18:01:06.278338] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.629 [2024-04-15 18:01:06.278412] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.629 [2024-04-15 18:01:06.278471] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.629 [2024-04-15 18:01:06.278532] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.629 [2024-04-15 18:01:06.278586] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.629 [2024-04-15 18:01:06.278645] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.629 [2024-04-15 18:01:06.278703] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.629 [2024-04-15 18:01:06.278760] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.629 [2024-04-15 18:01:06.278820] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.629 
00:14:17.629 [2024-04-15 18:01:06.278875] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:14:17.629 [... same message repeated verbatim several hundred times; duplicate lines elided ...]
00:14:17.635 [2024-04-15 18:01:06.316887] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[2024-04-15 18:01:06.316950] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.635 [2024-04-15 18:01:06.317024] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.635 [2024-04-15 18:01:06.317122] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.635 [2024-04-15 18:01:06.317188] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.635 [2024-04-15 18:01:06.317258] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.635 [2024-04-15 18:01:06.317317] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.635 [2024-04-15 18:01:06.317380] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.635 [2024-04-15 18:01:06.317457] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.635 [2024-04-15 18:01:06.317517] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.635 [2024-04-15 18:01:06.317576] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.635 [2024-04-15 18:01:06.317650] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.635 [2024-04-15 18:01:06.317698] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.635 [2024-04-15 18:01:06.317751] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.635 [2024-04-15 18:01:06.317807] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.635 [2024-04-15 18:01:06.317870] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.635 [2024-04-15 18:01:06.317929] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.635 [2024-04-15 18:01:06.318012] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.635 [2024-04-15 18:01:06.318084] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.635 [2024-04-15 18:01:06.318145] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.635 [2024-04-15 18:01:06.318214] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.635 [2024-04-15 18:01:06.318989] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.635 [2024-04-15 18:01:06.319055] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.635 [2024-04-15 18:01:06.319126] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.635 [2024-04-15 18:01:06.319187] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.635 [2024-04-15 18:01:06.319242] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.635 [2024-04-15 18:01:06.319296] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:17.635 [2024-04-15 18:01:06.319355] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.635 [2024-04-15 18:01:06.319414] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.635 [2024-04-15 18:01:06.319469] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.635 [2024-04-15 18:01:06.319525] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.635 [2024-04-15 18:01:06.319586] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.635 [2024-04-15 18:01:06.319663] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.635 [2024-04-15 18:01:06.319740] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.635 [2024-04-15 18:01:06.319798] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.635 [2024-04-15 18:01:06.319858] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.635 [2024-04-15 18:01:06.319916] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.635 [2024-04-15 18:01:06.319978] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.635 [2024-04-15 18:01:06.320051] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.635 [2024-04-15 18:01:06.320141] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.635 [2024-04-15 18:01:06.320209] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.635 [2024-04-15 18:01:06.320272] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.635 [2024-04-15 18:01:06.320328] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.635 [2024-04-15 18:01:06.320388] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.635 [2024-04-15 18:01:06.320464] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.635 [2024-04-15 18:01:06.320541] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.635 [2024-04-15 18:01:06.320601] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.635 [2024-04-15 18:01:06.320659] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.635 [2024-04-15 18:01:06.320718] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.635 [2024-04-15 18:01:06.320776] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.635 [2024-04-15 18:01:06.320853] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.635 [2024-04-15 18:01:06.320912] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.635 [2024-04-15 18:01:06.320971] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.635 [2024-04-15 18:01:06.321034] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.635 [2024-04-15 18:01:06.321145] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.635 [2024-04-15 18:01:06.321203] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.635 [2024-04-15 18:01:06.321259] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.635 [2024-04-15 18:01:06.321320] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.635 [2024-04-15 18:01:06.321397] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.635 [2024-04-15 18:01:06.321459] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.636 [2024-04-15 18:01:06.321535] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.636 [2024-04-15 18:01:06.321592] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.636 [2024-04-15 18:01:06.321649] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.636 [2024-04-15 18:01:06.321707] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.636 [2024-04-15 18:01:06.321759] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.636 [2024-04-15 18:01:06.321813] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.636 [2024-04-15 18:01:06.321887] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.636 [2024-04-15 18:01:06.321946] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.636 [2024-04-15 18:01:06.322005] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.636 [2024-04-15 18:01:06.322070] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.636 [2024-04-15 18:01:06.322125] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.636 [2024-04-15 18:01:06.322183] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.636 [2024-04-15 18:01:06.322241] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.636 [2024-04-15 18:01:06.322301] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.636 [2024-04-15 18:01:06.322357] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.636 [2024-04-15 18:01:06.322414] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.636 [2024-04-15 18:01:06.322470] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.636 [2024-04-15 18:01:06.322543] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.636 
[2024-04-15 18:01:06.322596] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.636 [2024-04-15 18:01:06.322666] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.636 [2024-04-15 18:01:06.322717] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.636 [2024-04-15 18:01:06.322774] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.636 [2024-04-15 18:01:06.322829] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.636 [2024-04-15 18:01:06.322887] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.636 [2024-04-15 18:01:06.322961] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.636 [2024-04-15 18:01:06.323187] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.636 [2024-04-15 18:01:06.323254] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.636 [2024-04-15 18:01:06.323319] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.636 [2024-04-15 18:01:06.323383] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.636 [2024-04-15 18:01:06.323444] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.636 [2024-04-15 18:01:06.323516] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.636 [2024-04-15 18:01:06.323582] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.636 [2024-04-15 18:01:06.323638] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.636 [2024-04-15 18:01:06.323695] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.636 [2024-04-15 18:01:06.323765] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.636 [2024-04-15 18:01:06.323837] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.636 [2024-04-15 18:01:06.323898] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.636 [2024-04-15 18:01:06.323962] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.636 [2024-04-15 18:01:06.324023] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.636 [2024-04-15 18:01:06.324092] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.636 [2024-04-15 18:01:06.324151] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.636 [2024-04-15 18:01:06.324208] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.636 [2024-04-15 18:01:06.324268] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.636 [2024-04-15 18:01:06.324326] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:17.636 [2024-04-15 18:01:06.324399] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.636 [2024-04-15 18:01:06.324476] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.636 [2024-04-15 18:01:06.324532] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.636 [2024-04-15 18:01:06.324591] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.636 [2024-04-15 18:01:06.324648] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.636 [2024-04-15 18:01:06.324705] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.636 [2024-04-15 18:01:06.324777] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.636 [2024-04-15 18:01:06.324833] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.636 [2024-04-15 18:01:06.324892] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.636 [2024-04-15 18:01:06.324953] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.636 [2024-04-15 18:01:06.325472] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.636 [2024-04-15 18:01:06.325557] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.636 [2024-04-15 18:01:06.325613] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.636 [2024-04-15 18:01:06.325671] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.636 [2024-04-15 18:01:06.325729] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.636 [2024-04-15 18:01:06.325788] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.636 [2024-04-15 18:01:06.325858] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.636 [2024-04-15 18:01:06.325915] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.636 [2024-04-15 18:01:06.325987] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.636 [2024-04-15 18:01:06.326042] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.636 [2024-04-15 18:01:06.326106] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.636 [2024-04-15 18:01:06.326165] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.636 [2024-04-15 18:01:06.326225] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.636 [2024-04-15 18:01:06.326286] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.636 [2024-04-15 18:01:06.326359] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.636 [2024-04-15 18:01:06.326420] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.636 [2024-04-15 18:01:06.326484] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.636 [2024-04-15 18:01:06.326563] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.636 [2024-04-15 18:01:06.326624] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.636 [2024-04-15 18:01:06.326689] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.636 [2024-04-15 18:01:06.326747] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.636 [2024-04-15 18:01:06.326821] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.636 [2024-04-15 18:01:06.326877] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.636 [2024-04-15 18:01:06.326938] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.636 [2024-04-15 18:01:06.326997] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.636 [2024-04-15 18:01:06.327082] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.636 [2024-04-15 18:01:06.327152] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.636 [2024-04-15 18:01:06.327216] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.636 [2024-04-15 18:01:06.327271] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.636 [2024-04-15 18:01:06.327347] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.636 [2024-04-15 18:01:06.327423] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.636 [2024-04-15 18:01:06.327483] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.636 [2024-04-15 18:01:06.327546] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.636 [2024-04-15 18:01:06.327607] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.636 [2024-04-15 18:01:06.327679] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.636 [2024-04-15 18:01:06.327751] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.636 [2024-04-15 18:01:06.327834] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.636 [2024-04-15 18:01:06.327897] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.636 [2024-04-15 18:01:06.327957] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.636 [2024-04-15 18:01:06.328017] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.636 [2024-04-15 18:01:06.328085] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.636 
[2024-04-15 18:01:06.328145] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.636 [2024-04-15 18:01:06.328216] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.636 [2024-04-15 18:01:06.328274] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.636 [2024-04-15 18:01:06.328323] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.636 [2024-04-15 18:01:06.328392] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.637 [2024-04-15 18:01:06.328451] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.637 [2024-04-15 18:01:06.328533] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.637 [2024-04-15 18:01:06.328596] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.637 [2024-04-15 18:01:06.328651] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.637 [2024-04-15 18:01:06.328707] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.637 [2024-04-15 18:01:06.328768] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.637 [2024-04-15 18:01:06.328842] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.637 [2024-04-15 18:01:06.328909] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.637 [2024-04-15 18:01:06.328960] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.637 [2024-04-15 18:01:06.329016] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.637 [2024-04-15 18:01:06.329101] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.637 [2024-04-15 18:01:06.329175] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.637 [2024-04-15 18:01:06.329234] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.637 [2024-04-15 18:01:06.329295] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.637 [2024-04-15 18:01:06.329351] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.637 [2024-04-15 18:01:06.329440] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.637 [2024-04-15 18:01:06.329496] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.637 [2024-04-15 18:01:06.329565] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.637 [2024-04-15 18:01:06.329773] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.637 [2024-04-15 18:01:06.329835] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.637 [2024-04-15 18:01:06.329908] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:17.637 [2024-04-15 18:01:06.329963] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.637 [2024-04-15 18:01:06.330033] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.637 [2024-04-15 18:01:06.330099] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.637 [2024-04-15 18:01:06.330156] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.637 [2024-04-15 18:01:06.330214] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.637 [2024-04-15 18:01:06.330277] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.637 [2024-04-15 18:01:06.330361] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.637 [2024-04-15 18:01:06.330416] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.637 [2024-04-15 18:01:06.330482] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.637 [2024-04-15 18:01:06.330558] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.637 [2024-04-15 18:01:06.330620] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.637 [2024-04-15 18:01:06.330683] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.637 [2024-04-15 18:01:06.330743] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.637 [2024-04-15 18:01:06.330808] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.637 [2024-04-15 18:01:06.330886] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.637 [2024-04-15 18:01:06.330946] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.637 [2024-04-15 18:01:06.331016] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.637 [2024-04-15 18:01:06.331108] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.637 [2024-04-15 18:01:06.331169] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.637 [2024-04-15 18:01:06.331230] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.637 [2024-04-15 18:01:06.331298] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.637 [2024-04-15 18:01:06.331356] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.637 [2024-04-15 18:01:06.331414] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.637 [2024-04-15 18:01:06.331486] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.637 [2024-04-15 18:01:06.331543] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.637 [2024-04-15 18:01:06.331598] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.637 [2024-04-15 18:01:06.331658] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.637 [2024-04-15 18:01:06.331736] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.637 [2024-04-15 18:01:06.331796] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.637 [2024-04-15 18:01:06.331858] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.637 [2024-04-15 18:01:06.331915] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.637 [2024-04-15 18:01:06.332527] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.637 [2024-04-15 18:01:06.332589] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.637 [2024-04-15 18:01:06.332659] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.637 [2024-04-15 18:01:06.332716] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.637 [2024-04-15 18:01:06.332772] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.637 [2024-04-15 18:01:06.332832] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.637 [2024-04-15 18:01:06.332915] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.637 [2024-04-15 18:01:06.332976] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.637 [2024-04-15 18:01:06.333052] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.637 [2024-04-15 18:01:06.333118] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.637 [2024-04-15 18:01:06.333169] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.637 [2024-04-15 18:01:06.333225] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.637 [2024-04-15 18:01:06.333285] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.637 [2024-04-15 18:01:06.333343] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.637 [2024-04-15 18:01:06.333417] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.637 [2024-04-15 18:01:06.333476] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.637 [2024-04-15 18:01:06.333532] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.637 [2024-04-15 18:01:06.333604] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.637 [2024-04-15 18:01:06.333678] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.637 [2024-04-15 18:01:06.333734] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.637 
[2024-04-15 18:01:06.333783] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.637 [2024-04-15 18:01:06.333856] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.637 [2024-04-15 18:01:06.333915] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.637 [2024-04-15 18:01:06.333973] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.637 [2024-04-15 18:01:06.334030] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.637 [2024-04-15 18:01:06.334102] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.637 [2024-04-15 18:01:06.334166] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.637 [2024-04-15 18:01:06.334230] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.637 [2024-04-15 18:01:06.334292] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.637 [2024-04-15 18:01:06.334349] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.637 [2024-04-15 18:01:06.334402] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.637 [2024-04-15 18:01:06.334460] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.637 [2024-04-15 18:01:06.334530] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.637 [2024-04-15 18:01:06.334588] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.637 [2024-04-15 18:01:06.334643] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.637 [2024-04-15 18:01:06.334704] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.637 [2024-04-15 18:01:06.334761] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.637 [2024-04-15 18:01:06.334819] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.637 [2024-04-15 18:01:06.334900] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.637 [2024-04-15 18:01:06.334962] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.637 [2024-04-15 18:01:06.335020] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.637 [2024-04-15 18:01:06.335089] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.637 [2024-04-15 18:01:06.335151] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.637 [2024-04-15 18:01:06.335217] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.637 [2024-04-15 18:01:06.335280] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.637 [2024-04-15 18:01:06.335343] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:17.637 [2024-04-15 18:01:06.335408] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.638 [2024-04-15 18:01:06.335472] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.638 [2024-04-15 18:01:06.335536] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.638 [2024-04-15 18:01:06.335604] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.638 [2024-04-15 18:01:06.335683] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.638 [2024-04-15 18:01:06.335744] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.638 [2024-04-15 18:01:06.335815] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.638 [2024-04-15 18:01:06.335875] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.638 [2024-04-15 18:01:06.335932] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.638 [2024-04-15 18:01:06.335990] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.638 [2024-04-15 18:01:06.336046] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.638 [2024-04-15 18:01:06.336110] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.638 [2024-04-15 18:01:06.336173] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.638 [2024-04-15 18:01:06.336231] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.638 [2024-04-15 18:01:06.336290] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.638 [2024-04-15 18:01:06.336363] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.638 [2024-04-15 18:01:06.336443] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.638 [2024-04-15 18:01:06.336503] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.638 [2024-04-15 18:01:06.336715] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.638 [2024-04-15 18:01:06.336791] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.638 [2024-04-15 18:01:06.336839] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.638 [2024-04-15 18:01:06.336892] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.638 [2024-04-15 18:01:06.336948] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.638 [2024-04-15 18:01:06.337000] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.638 [2024-04-15 18:01:06.337078] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.638 [2024-04-15 18:01:06.337163] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.638 [2024-04-15 18:01:06.337215] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.638 [2024-04-15 18:01:06.337275] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.638 [2024-04-15 18:01:06.337334] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.638 [2024-04-15 18:01:06.337408] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.638 [2024-04-15 18:01:06.337480] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.638 [2024-04-15 18:01:06.337534] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.638 [2024-04-15 18:01:06.337610] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.638 [2024-04-15 18:01:06.337678] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.638 [2024-04-15 18:01:06.337736] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.638 [2024-04-15 18:01:06.337797] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.638 [2024-04-15 18:01:06.337870] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.638 [2024-04-15 18:01:06.337929] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.638 [2024-04-15 18:01:06.338000] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.638 [2024-04-15 18:01:06.338086] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.638 [2024-04-15 18:01:06.338155] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.638 [2024-04-15 18:01:06.338215] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.638 [2024-04-15 18:01:06.338278] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.638 [2024-04-15 18:01:06.338334] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.638 [2024-04-15 18:01:06.338390] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.638 [2024-04-15 18:01:06.338447] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.638 [2024-04-15 18:01:06.338506] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.638 [2024-04-15 18:01:06.339314] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.638 [2024-04-15 18:01:06.339381] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.638 [2024-04-15 18:01:06.339456] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.638 [2024-04-15 18:01:06.339513] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.638 
[2024-04-15 18:01:06.339588] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.638 [2024-04-15 18:01:06.339649] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.638 [2024-04-15 18:01:06.339707] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.638 [2024-04-15 18:01:06.339762] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.638 [2024-04-15 18:01:06.339817] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.638 [2024-04-15 18:01:06.339873] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.638 [2024-04-15 18:01:06.339956] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.638 [2024-04-15 18:01:06.340020] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.638 [2024-04-15 18:01:06.340083] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.638 [2024-04-15 18:01:06.340142] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.638 [2024-04-15 18:01:06.340201] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.638 [2024-04-15 18:01:06.340258] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.638 [2024-04-15 18:01:06.340316] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.638 [2024-04-15 18:01:06.340378] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.638 [2024-04-15 18:01:06.340438] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.638 [2024-04-15 18:01:06.340494] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.638 [2024-04-15 18:01:06.340548] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.638 [2024-04-15 18:01:06.340612] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.638 [2024-04-15 18:01:06.340682] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.638 [2024-04-15 18:01:06.340754] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.638 [2024-04-15 18:01:06.340814] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.638 [2024-04-15 18:01:06.340869] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.639 [2024-04-15 18:01:06.340928] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.639 [2024-04-15 18:01:06.340986] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.639 [2024-04-15 18:01:06.341070] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.639 [2024-04-15 18:01:06.341130] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:17.639 [2024-04-15 18:01:06.341193] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.639 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:14:17.639 [2024-04-15 18:01:06.341251 through 18:01:06.379315: the identical record "ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1" repeats several hundred times; duplicates elided] 00:14:17.646
[2024-04-15 18:01:06.379373] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.646 [2024-04-15 18:01:06.379430] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.646 [2024-04-15 18:01:06.379486] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.646 [2024-04-15 18:01:06.379544] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.646 [2024-04-15 18:01:06.379604] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.646 [2024-04-15 18:01:06.379663] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.646 [2024-04-15 18:01:06.379723] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.646 [2024-04-15 18:01:06.379785] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.646 [2024-04-15 18:01:06.379844] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.646 [2024-04-15 18:01:06.379906] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.646 [2024-04-15 18:01:06.379978] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.646 [2024-04-15 18:01:06.380040] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.646 [2024-04-15 18:01:06.380113] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.646 [2024-04-15 18:01:06.380172] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.646 [2024-04-15 18:01:06.380231] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.646 [2024-04-15 18:01:06.380293] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.646 [2024-04-15 18:01:06.380352] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.646 [2024-04-15 18:01:06.380411] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.646 [2024-04-15 18:01:06.380474] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.646 [2024-04-15 18:01:06.380547] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.646 [2024-04-15 18:01:06.380616] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.646 [2024-04-15 18:01:06.380669] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.646 [2024-04-15 18:01:06.380726] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.646 [2024-04-15 18:01:06.380789] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.646 [2024-04-15 18:01:06.380852] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.646 [2024-04-15 18:01:06.380922] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:17.646 [2024-04-15 18:01:06.381002] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.646 [2024-04-15 18:01:06.381083] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.646 [2024-04-15 18:01:06.381142] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.646 [2024-04-15 18:01:06.381195] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.646 [2024-04-15 18:01:06.381250] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.646 [2024-04-15 18:01:06.381313] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.646 [2024-04-15 18:01:06.381392] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.646 [2024-04-15 18:01:06.381471] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.646 [2024-04-15 18:01:06.381526] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.646 [2024-04-15 18:01:06.381584] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.646 [2024-04-15 18:01:06.381637] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.646 [2024-04-15 18:01:06.381693] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.646 [2024-04-15 18:01:06.381752] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.646 [2024-04-15 18:01:06.381824] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.646 [2024-04-15 18:01:06.381897] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.646 [2024-04-15 18:01:06.381958] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.646 [2024-04-15 18:01:06.382017] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.646 [2024-04-15 18:01:06.382082] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.646 [2024-04-15 18:01:06.382140] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.646 [2024-04-15 18:01:06.382207] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.646 [2024-04-15 18:01:06.382267] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.646 [2024-04-15 18:01:06.382325] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.646 [2024-04-15 18:01:06.382392] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.646 [2024-04-15 18:01:06.382455] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.646 [2024-04-15 18:01:06.382536] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.646 [2024-04-15 18:01:06.382612] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.646 [2024-04-15 18:01:06.382675] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.646 [2024-04-15 18:01:06.382732] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.646 [2024-04-15 18:01:06.382794] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.646 [2024-04-15 18:01:06.383032] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.646 [2024-04-15 18:01:06.383541] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.646 [2024-04-15 18:01:06.383597] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.646 [2024-04-15 18:01:06.383656] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.647 [2024-04-15 18:01:06.383721] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.647 [2024-04-15 18:01:06.383785] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.647 [2024-04-15 18:01:06.383850] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.647 [2024-04-15 18:01:06.383901] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.647 [2024-04-15 18:01:06.383958] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.647 [2024-04-15 18:01:06.384015] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.647 [2024-04-15 18:01:06.384086] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.647 [2024-04-15 18:01:06.384146] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.647 [2024-04-15 18:01:06.384208] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.647 [2024-04-15 18:01:06.384268] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.647 [2024-04-15 18:01:06.384323] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.647 [2024-04-15 18:01:06.384379] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.647 [2024-04-15 18:01:06.384451] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.647 [2024-04-15 18:01:06.384523] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.647 [2024-04-15 18:01:06.384572] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.647 [2024-04-15 18:01:06.384621] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.647 [2024-04-15 18:01:06.384668] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.647 [2024-04-15 18:01:06.384715] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.647 
[2024-04-15 18:01:06.384766] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.647 [2024-04-15 18:01:06.384829] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.647 [2024-04-15 18:01:06.384890] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.647 [2024-04-15 18:01:06.384940] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.647 [2024-04-15 18:01:06.384987] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.647 [2024-04-15 18:01:06.385034] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.647 [2024-04-15 18:01:06.385089] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.647 [2024-04-15 18:01:06.385150] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.647 [2024-04-15 18:01:06.385204] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.647 [2024-04-15 18:01:06.385261] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.647 [2024-04-15 18:01:06.385321] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.647 [2024-04-15 18:01:06.385380] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.647 [2024-04-15 18:01:06.385446] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.647 [2024-04-15 18:01:06.385513] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.647 [2024-04-15 18:01:06.385593] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.647 [2024-04-15 18:01:06.385659] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.647 [2024-04-15 18:01:06.385737] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.647 [2024-04-15 18:01:06.385803] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.647 [2024-04-15 18:01:06.385865] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.647 [2024-04-15 18:01:06.385923] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.647 [2024-04-15 18:01:06.385978] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.647 [2024-04-15 18:01:06.386051] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.647 [2024-04-15 18:01:06.386137] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.647 [2024-04-15 18:01:06.386195] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.647 [2024-04-15 18:01:06.386255] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.647 [2024-04-15 18:01:06.386312] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:17.647 [2024-04-15 18:01:06.386384] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.647 [2024-04-15 18:01:06.386461] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.647 [2024-04-15 18:01:06.386523] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.647 [2024-04-15 18:01:06.386584] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.647 [2024-04-15 18:01:06.386646] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.647 [2024-04-15 18:01:06.386709] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.647 [2024-04-15 18:01:06.386788] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.647 [2024-04-15 18:01:06.386867] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.647 [2024-04-15 18:01:06.386927] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.647 [2024-04-15 18:01:06.386983] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.647 [2024-04-15 18:01:06.387056] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.647 [2024-04-15 18:01:06.387127] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.647 [2024-04-15 18:01:06.387189] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.647 [2024-04-15 18:01:06.387250] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.647 [2024-04-15 18:01:06.387314] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.647 [2024-04-15 18:01:06.387386] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.647 [2024-04-15 18:01:06.387447] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.647 [2024-04-15 18:01:06.387653] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.647 [2024-04-15 18:01:06.387716] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.647 [2024-04-15 18:01:06.387767] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.647 [2024-04-15 18:01:06.387823] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.647 [2024-04-15 18:01:06.387878] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.647 [2024-04-15 18:01:06.387937] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.647 [2024-04-15 18:01:06.388000] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.647 [2024-04-15 18:01:06.388068] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.647 [2024-04-15 18:01:06.388132] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.647 [2024-04-15 18:01:06.388186] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.647 [2024-04-15 18:01:06.388247] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.647 [2024-04-15 18:01:06.388299] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.647 [2024-04-15 18:01:06.388358] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.647 [2024-04-15 18:01:06.388418] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.647 [2024-04-15 18:01:06.388479] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.647 [2024-04-15 18:01:06.388949] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.648 [2024-04-15 18:01:06.389028] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.648 [2024-04-15 18:01:06.389099] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.648 [2024-04-15 18:01:06.389161] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.648 [2024-04-15 18:01:06.389222] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.648 [2024-04-15 18:01:06.389280] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.648 [2024-04-15 18:01:06.389337] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.648 [2024-04-15 18:01:06.389410] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.648 [2024-04-15 18:01:06.389490] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.648 [2024-04-15 18:01:06.389548] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.648 [2024-04-15 18:01:06.389606] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.648 [2024-04-15 18:01:06.389667] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.648 [2024-04-15 18:01:06.389724] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.648 [2024-04-15 18:01:06.389800] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.648 [2024-04-15 18:01:06.389883] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.648 [2024-04-15 18:01:06.389944] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.648 [2024-04-15 18:01:06.390014] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.648 [2024-04-15 18:01:06.390089] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.648 [2024-04-15 18:01:06.390152] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.648 
[2024-04-15 18:01:06.390216] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.648 [2024-04-15 18:01:06.390275] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.648 [2024-04-15 18:01:06.390335] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.648 [2024-04-15 18:01:06.390395] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.648 [2024-04-15 18:01:06.390468] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.648 [2024-04-15 18:01:06.390544] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.648 [2024-04-15 18:01:06.390604] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.648 [2024-04-15 18:01:06.390665] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.648 [2024-04-15 18:01:06.390722] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.648 [2024-04-15 18:01:06.390774] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.648 [2024-04-15 18:01:06.390833] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.648 [2024-04-15 18:01:06.390904] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.648 [2024-04-15 18:01:06.390979] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.648 [2024-04-15 18:01:06.391056] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.648 [2024-04-15 18:01:06.391133] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.648 [2024-04-15 18:01:06.391193] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.648 [2024-04-15 18:01:06.391251] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.648 [2024-04-15 18:01:06.391311] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.648 [2024-04-15 18:01:06.391380] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.648 [2024-04-15 18:01:06.391440] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.648 [2024-04-15 18:01:06.391498] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.648 [2024-04-15 18:01:06.391558] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.648 [2024-04-15 18:01:06.391627] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.648 [2024-04-15 18:01:06.391679] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.648 [2024-04-15 18:01:06.391738] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.648 [2024-04-15 18:01:06.391798] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:17.648 [2024-04-15 18:01:06.391857] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.648 [2024-04-15 18:01:06.391919] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.648 [2024-04-15 18:01:06.391975] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.648 [2024-04-15 18:01:06.392029] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.648 [2024-04-15 18:01:06.392093] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.648 [2024-04-15 18:01:06.392154] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.648 [2024-04-15 18:01:06.392210] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.648 [2024-04-15 18:01:06.392265] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.648 [2024-04-15 18:01:06.392325] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.648 [2024-04-15 18:01:06.392389] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.648 [2024-04-15 18:01:06.392480] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.648 [2024-04-15 18:01:06.392541] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.648 [2024-04-15 18:01:06.392600] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.648 [2024-04-15 18:01:06.392661] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.648 [2024-04-15 18:01:06.392722] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.648 [2024-04-15 18:01:06.392777] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.648 [2024-04-15 18:01:06.392867] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.648 [2024-04-15 18:01:06.392926] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.648 [2024-04-15 18:01:06.392985] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.648 [2024-04-15 18:01:06.393202] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.648 [2024-04-15 18:01:06.393698] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.648 [2024-04-15 18:01:06.393764] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.648 [2024-04-15 18:01:06.393826] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.648 [2024-04-15 18:01:06.393907] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.648 [2024-04-15 18:01:06.393986] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.648 [2024-04-15 18:01:06.394036] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.648 [2024-04-15 18:01:06.394102] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.648 [2024-04-15 18:01:06.394162] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.648 [2024-04-15 18:01:06.394223] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.648 [2024-04-15 18:01:06.394282] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.648 [2024-04-15 18:01:06.394356] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.648 [2024-04-15 18:01:06.394427] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.648 [2024-04-15 18:01:06.394484] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.648 [2024-04-15 18:01:06.394540] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.648 [2024-04-15 18:01:06.394602] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.648 [2024-04-15 18:01:06.394658] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.648 [2024-04-15 18:01:06.394706] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.648 [2024-04-15 18:01:06.394769] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.648 [2024-04-15 18:01:06.394833] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.648 [2024-04-15 18:01:06.394881] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.648 [2024-04-15 18:01:06.394928] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.648 [2024-04-15 18:01:06.394976] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.648 [2024-04-15 18:01:06.395024] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.649 [2024-04-15 18:01:06.395097] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.649 [2024-04-15 18:01:06.395149] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.649 [2024-04-15 18:01:06.395198] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.649 [2024-04-15 18:01:06.395247] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.649 [2024-04-15 18:01:06.395296] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.649 [2024-04-15 18:01:06.395372] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.649 [2024-04-15 18:01:06.395431] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.649 [2024-04-15 18:01:06.395488] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.649 
[2024-04-15 18:01:06.395545] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.649 [2024-04-15 18:01:06.395605] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.649 [2024-04-15 18:01:06.395667] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.649 [2024-04-15 18:01:06.395727] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.649 [2024-04-15 18:01:06.395784] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.649 [2024-04-15 18:01:06.395843] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.649 [2024-04-15 18:01:06.395901] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.649 [2024-04-15 18:01:06.395963] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.649 [2024-04-15 18:01:06.396020] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.649 [2024-04-15 18:01:06.396092] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.649 [2024-04-15 18:01:06.396157] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.649 [2024-04-15 18:01:06.396222] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.649 [2024-04-15 18:01:06.396284] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.649 [2024-04-15 18:01:06.396346] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.649 [2024-04-15 18:01:06.396403] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.649 [2024-04-15 18:01:06.396466] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.649 [2024-04-15 18:01:06.396526] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.649 [2024-04-15 18:01:06.396600] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.649 [2024-04-15 18:01:06.396676] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.649 [2024-04-15 18:01:06.396737] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.649 [2024-04-15 18:01:06.396800] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.649 [2024-04-15 18:01:06.396869] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.649 [2024-04-15 18:01:06.396953] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.649 [2024-04-15 18:01:06.397036] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.649 [2024-04-15 18:01:06.397103] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.649 [2024-04-15 18:01:06.397166] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:17.649 [2024-04-15 18:01:06.397226] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.649 [2024-04-15 18:01:06.397286] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.649 [2024-04-15 18:01:06.397362] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.649 [2024-04-15 18:01:06.397439] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.649 [2024-04-15 18:01:06.397508] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.649 [2024-04-15 18:01:06.397565] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.649 [2024-04-15 18:01:06.397624] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.649 [2024-04-15 18:01:06.397856] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.649 [2024-04-15 18:01:06.397916] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.649 [2024-04-15 18:01:06.397976] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.649 [2024-04-15 18:01:06.398041] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.649 [2024-04-15 18:01:06.398104] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.649 [2024-04-15 18:01:06.398170] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.649 [2024-04-15 18:01:06.398228] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.649 [2024-04-15 18:01:06.398286] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.649 [2024-04-15 18:01:06.398343] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.649 [2024-04-15 18:01:06.398401] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.649 [2024-04-15 18:01:06.398457] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.649 [2024-04-15 18:01:06.398524] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.649 [2024-04-15 18:01:06.398585] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.649 [2024-04-15 18:01:06.398657] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.649 [2024-04-15 18:01:06.398717] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.649 [2024-04-15 18:01:06.399194] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.649 [2024-04-15 18:01:06.399261] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.649 [2024-04-15 18:01:06.399324] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.649 [2024-04-15 18:01:06.399399] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.649 [2024-04-15 18:01:06.399456] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.650 [2024-04-15 18:01:06.399517] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.650 [2024-04-15 18:01:06.399579] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.650 [2024-04-15 18:01:06.399638] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.650 [2024-04-15 18:01:06.399696] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.650 [2024-04-15 18:01:06.399758] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.650 [2024-04-15 18:01:06.399821] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.650 [2024-04-15 18:01:06.399886] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.650 [2024-04-15 18:01:06.399948] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.650 [2024-04-15 18:01:06.400014] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.650 [2024-04-15 18:01:06.400082] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.650 [2024-04-15 18:01:06.400141] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.650 [2024-04-15 18:01:06.400198] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.650 [2024-04-15 18:01:06.400262] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.650 [2024-04-15 18:01:06.400322] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.650 [2024-04-15 18:01:06.400393] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.650 [2024-04-15 18:01:06.400468] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.650 [2024-04-15 18:01:06.400531] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.650 [2024-04-15 18:01:06.400593] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.650 [2024-04-15 18:01:06.400654] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.650 [2024-04-15 18:01:06.400713] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.650 [2024-04-15 18:01:06.400786] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.650 [2024-04-15 18:01:06.400867] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.650 [2024-04-15 18:01:06.400923] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.650 [2024-04-15 18:01:06.400983] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.650 
[2024-04-15 18:01:06.401042] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.650 [2024-04-15 18:01:06.401106] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.650 [2024-04-15 18:01:06.401166] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.650 [2024-04-15 18:01:06.401223] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.650 [2024-04-15 18:01:06.401291] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.650 [2024-04-15 18:01:06.401353] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.650 [2024-04-15 18:01:06.401418] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.650 [2024-04-15 18:01:06.401487] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.650 [2024-04-15 18:01:06.401542] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.650 [2024-04-15 18:01:06.401609] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.650 [2024-04-15 18:01:06.401666] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.650 [2024-04-15 18:01:06.401723] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.650 [2024-04-15 18:01:06.401778] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.650 [2024-04-15 18:01:06.401837] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.650 [2024-04-15 18:01:06.401923] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.650 [2024-04-15 18:01:06.401980] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.650 [2024-04-15 18:01:06.402050] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.650 [2024-04-15 18:01:06.402116] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.650 [2024-04-15 18:01:06.402173] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.650 [2024-04-15 18:01:06.402227] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.650 [2024-04-15 18:01:06.402281] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.650 [2024-04-15 18:01:06.402334] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.650 [2024-04-15 18:01:06.402402] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.650 [2024-04-15 18:01:06.402474] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.650 [2024-04-15 18:01:06.402534] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.650 [2024-04-15 18:01:06.402590] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:17.650 [2024-04-15 18:01:06.402650] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 [... the same ctrlr_bdev.c:298 *ERROR* line repeats verbatim for each queued read, timestamps 18:01:06.402713 through 18:01:06.414516 ...] 00:14:17.652 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
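[Note on the repeated error: the message comes from a request-length guard in SPDK's NVMe-oF bdev controller path, and the suppressed completion status (sct=0, sc=15) is NVMe generic status 0x0f, "Data SGL Length Invalid", which is what a target reports when the read payload would overrun the SGL the host supplied. Here NLB 1 * block size 512 = 512 bytes against an SGL length of 1 byte, so every read fails the check. A minimal C sketch of such a guard follows; the function and variable names are illustrative assumptions, not the verbatim SPDK source at ctrlr_bdev.c:298.]

    /* Illustrative sketch (assumed names), modeled on the logged check:
     * reject a read whose data length (NLB * block size) exceeds the
     * SGL length described in the command, instead of submitting I/O. */
    #include <stdint.h>
    #include <stdio.h>

    #define NVME_SCT_GENERIC                0x0
    #define NVME_SC_DATA_SGL_LENGTH_INVALID 0x0f  /* sc=15 in the log */

    static int read_cmd_check(uint64_t num_blocks, uint32_t block_size,
                              uint32_t sgl_length, uint8_t *sct, uint8_t *sc)
    {
        if (num_blocks * block_size > sgl_length) {
            fprintf(stderr, "Read NLB %llu * block size %u > SGL length %u\n",
                    (unsigned long long)num_blocks, block_size, sgl_length);
            *sct = NVME_SCT_GENERIC;
            *sc  = NVME_SC_DATA_SGL_LENGTH_INVALID;
            return -1; /* complete the request with an error */
        }
        return 0; /* length fits: safe to submit the read */
    }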
00:14:17.652 [... the same ctrlr_bdev.c:298 *ERROR* line continues to repeat, timestamps 18:01:06.414579 through 18:01:06.441388 ...] 00:14:17.656 [2024-04-15 18:01:06.441444] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.656 [2024-04-15 18:01:06.441516] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.656 [2024-04-15 18:01:06.441589] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.656 [2024-04-15 18:01:06.441649] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.656 [2024-04-15 18:01:06.441708] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.656 [2024-04-15 18:01:06.441766] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.656 [2024-04-15 18:01:06.441820] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.656 [2024-04-15 18:01:06.441899] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.657 [2024-04-15 18:01:06.441959] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.657 [2024-04-15 18:01:06.442016] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.657 [2024-04-15 18:01:06.442099] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.657 [2024-04-15 18:01:06.442159] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.657 [2024-04-15 18:01:06.442216] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.657 [2024-04-15 18:01:06.442280] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.657 [2024-04-15 18:01:06.442337] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.657 [2024-04-15 18:01:06.442558] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.657 [2024-04-15 18:01:06.442633] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.657 [2024-04-15 18:01:06.442694] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.657 [2024-04-15 18:01:06.442744] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.657 [2024-04-15 18:01:06.442803] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.657 [2024-04-15 18:01:06.442860] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.657 [2024-04-15 18:01:06.442932] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.657 [2024-04-15 18:01:06.442989] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.657 [2024-04-15 18:01:06.443070] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.657 [2024-04-15 18:01:06.443154] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.657 [2024-04-15 18:01:06.443218] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.657 
[2024-04-15 18:01:06.443280] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.657 [2024-04-15 18:01:06.443358] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.657 [2024-04-15 18:01:06.443413] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.657 [2024-04-15 18:01:06.443470] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.657 [2024-04-15 18:01:06.443546] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.657 [2024-04-15 18:01:06.443607] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.657 [2024-04-15 18:01:06.443669] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.657 [2024-04-15 18:01:06.443725] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.657 [2024-04-15 18:01:06.443784] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.657 [2024-04-15 18:01:06.443854] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.657 [2024-04-15 18:01:06.443914] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.657 [2024-04-15 18:01:06.443971] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.657 [2024-04-15 18:01:06.444028] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.657 [2024-04-15 18:01:06.444112] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.657 [2024-04-15 18:01:06.444175] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.657 [2024-04-15 18:01:06.444244] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.657 [2024-04-15 18:01:06.444308] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.657 [2024-04-15 18:01:06.444378] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.657 [2024-04-15 18:01:06.444436] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.657 [2024-04-15 18:01:06.444508] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.657 [2024-04-15 18:01:06.444564] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.657 [2024-04-15 18:01:06.444622] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.657 [2024-04-15 18:01:06.444678] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.657 [2024-04-15 18:01:06.444752] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.657 [2024-04-15 18:01:06.444810] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.657 [2024-04-15 18:01:06.444874] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:17.657 [2024-04-15 18:01:06.444931] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.657 [2024-04-15 18:01:06.444989] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.657 [2024-04-15 18:01:06.445083] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.657 [2024-04-15 18:01:06.445152] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.657 [2024-04-15 18:01:06.445219] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.657 [2024-04-15 18:01:06.445282] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.657 [2024-04-15 18:01:06.445345] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.657 [2024-04-15 18:01:06.445416] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.657 [2024-04-15 18:01:06.445476] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.657 [2024-04-15 18:01:06.445532] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.657 [2024-04-15 18:01:06.445608] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.657 [2024-04-15 18:01:06.445665] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.657 [2024-04-15 18:01:06.445723] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.657 [2024-04-15 18:01:06.445784] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.657 [2024-04-15 18:01:06.445839] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.657 [2024-04-15 18:01:06.445899] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.657 [2024-04-15 18:01:06.445979] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.657 [2024-04-15 18:01:06.446037] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.657 [2024-04-15 18:01:06.446105] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.657 [2024-04-15 18:01:06.446157] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.657 [2024-04-15 18:01:06.446211] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.657 [2024-04-15 18:01:06.446271] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.657 [2024-04-15 18:01:06.446327] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.657 [2024-04-15 18:01:06.446382] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.657 [2024-04-15 18:01:06.446438] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.657 [2024-04-15 18:01:06.446498] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.657 [2024-04-15 18:01:06.447362] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.657 [2024-04-15 18:01:06.447426] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.657 [2024-04-15 18:01:06.447504] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.657 [2024-04-15 18:01:06.447583] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.657 [2024-04-15 18:01:06.447639] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.657 [2024-04-15 18:01:06.447705] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.657 [2024-04-15 18:01:06.447760] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.657 [2024-04-15 18:01:06.447825] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.657 [2024-04-15 18:01:06.447897] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.657 [2024-04-15 18:01:06.447963] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.657 [2024-04-15 18:01:06.448026] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.657 [2024-04-15 18:01:06.448114] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.657 [2024-04-15 18:01:06.448180] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.657 [2024-04-15 18:01:06.448238] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.658 [2024-04-15 18:01:06.448298] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.658 [2024-04-15 18:01:06.448354] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.658 [2024-04-15 18:01:06.448411] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.658 [2024-04-15 18:01:06.448477] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.658 [2024-04-15 18:01:06.448525] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.658 [2024-04-15 18:01:06.448598] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.658 [2024-04-15 18:01:06.448658] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.658 [2024-04-15 18:01:06.448714] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.658 [2024-04-15 18:01:06.448767] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.658 [2024-04-15 18:01:06.448826] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.658 [2024-04-15 18:01:06.448902] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.658 
[2024-04-15 18:01:06.448959] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.658 [2024-04-15 18:01:06.449014] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.658 [2024-04-15 18:01:06.449094] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.658 [2024-04-15 18:01:06.449150] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.658 [2024-04-15 18:01:06.449205] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.658 [2024-04-15 18:01:06.449264] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.658 [2024-04-15 18:01:06.449322] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.658 [2024-04-15 18:01:06.449397] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.658 [2024-04-15 18:01:06.449458] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.658 [2024-04-15 18:01:06.449544] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.658 [2024-04-15 18:01:06.449604] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.658 [2024-04-15 18:01:06.449662] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.658 [2024-04-15 18:01:06.449716] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.658 [2024-04-15 18:01:06.449781] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.658 [2024-04-15 18:01:06.449842] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.658 [2024-04-15 18:01:06.449916] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.658 [2024-04-15 18:01:06.449974] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.658 [2024-04-15 18:01:06.450034] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.658 [2024-04-15 18:01:06.450116] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.658 [2024-04-15 18:01:06.450181] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.658 [2024-04-15 18:01:06.450241] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.658 [2024-04-15 18:01:06.450299] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.658 [2024-04-15 18:01:06.450365] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.658 [2024-04-15 18:01:06.450438] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.658 [2024-04-15 18:01:06.450493] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.658 [2024-04-15 18:01:06.450550] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:17.658 [2024-04-15 18:01:06.450619] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.658 [2024-04-15 18:01:06.450683] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.658 [2024-04-15 18:01:06.450743] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.658 [2024-04-15 18:01:06.450805] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.658 [2024-04-15 18:01:06.450867] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.658 [2024-04-15 18:01:06.450941] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.658 [2024-04-15 18:01:06.451017] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.658 [2024-04-15 18:01:06.451084] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.658 [2024-04-15 18:01:06.451153] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.658 [2024-04-15 18:01:06.451214] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.658 [2024-04-15 18:01:06.451271] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.658 [2024-04-15 18:01:06.451330] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.658 [2024-04-15 18:01:06.451402] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.658 [2024-04-15 18:01:06.451620] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.658 [2024-04-15 18:01:06.451683] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.658 [2024-04-15 18:01:06.451742] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.658 [2024-04-15 18:01:06.451819] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.658 [2024-04-15 18:01:06.451883] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.658 [2024-04-15 18:01:06.451945] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.658 [2024-04-15 18:01:06.452012] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.658 [2024-04-15 18:01:06.452093] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.658 [2024-04-15 18:01:06.452159] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.658 [2024-04-15 18:01:06.452219] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.658 [2024-04-15 18:01:06.452277] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.658 [2024-04-15 18:01:06.452334] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.658 [2024-04-15 18:01:06.452391] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.658 [2024-04-15 18:01:06.452466] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.658 [2024-04-15 18:01:06.452528] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.658 [2024-04-15 18:01:06.452588] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.658 [2024-04-15 18:01:06.452643] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.658 [2024-04-15 18:01:06.452816] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.658 [2024-04-15 18:01:06.452879] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.658 [2024-04-15 18:01:06.452943] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.658 [2024-04-15 18:01:06.453021] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.658 [2024-04-15 18:01:06.453115] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.658 [2024-04-15 18:01:06.453179] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.658 [2024-04-15 18:01:06.453238] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.658 [2024-04-15 18:01:06.453297] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.658 [2024-04-15 18:01:06.453366] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.658 [2024-04-15 18:01:06.453428] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.658 [2024-04-15 18:01:06.453503] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.658 [2024-04-15 18:01:06.453581] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.658 [2024-04-15 18:01:06.453656] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.658 [2024-04-15 18:01:06.453716] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.658 [2024-04-15 18:01:06.453776] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.658 [2024-04-15 18:01:06.453833] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.658 [2024-04-15 18:01:06.453892] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.658 [2024-04-15 18:01:06.453944] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.658 [2024-04-15 18:01:06.454013] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.658 [2024-04-15 18:01:06.454100] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.659 [2024-04-15 18:01:06.454175] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.659 
[2024-04-15 18:01:06.454233] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.659 [2024-04-15 18:01:06.454290] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.659 [2024-04-15 18:01:06.454349] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.659 [2024-04-15 18:01:06.454406] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.659 [2024-04-15 18:01:06.454467] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.659 [2024-04-15 18:01:06.454516] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.659 [2024-04-15 18:01:06.454593] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.659 [2024-04-15 18:01:06.454666] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.659 [2024-04-15 18:01:06.454729] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.659 [2024-04-15 18:01:06.454786] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.659 [2024-04-15 18:01:06.454845] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.659 [2024-04-15 18:01:06.454902] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.659 [2024-04-15 18:01:06.454970] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.659 [2024-04-15 18:01:06.455041] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.659 [2024-04-15 18:01:06.455107] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.659 [2024-04-15 18:01:06.455165] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.659 [2024-04-15 18:01:06.455222] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.659 [2024-04-15 18:01:06.455276] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.659 [2024-04-15 18:01:06.455338] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.659 [2024-04-15 18:01:06.455409] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.659 [2024-04-15 18:01:06.455484] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.659 [2024-04-15 18:01:06.455541] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.659 [2024-04-15 18:01:06.455603] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.659 [2024-04-15 18:01:06.455655] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.659 [2024-04-15 18:01:06.455715] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.659 [2024-04-15 18:01:06.455794] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:17.659 [2024-04-15 18:01:06.456605] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.659 [2024-04-15 18:01:06.456672] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.659 [2024-04-15 18:01:06.456727] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.659 [2024-04-15 18:01:06.456800] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.659 [2024-04-15 18:01:06.456872] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.659 [2024-04-15 18:01:06.456933] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.659 [2024-04-15 18:01:06.456993] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.659 [2024-04-15 18:01:06.457079] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.659 [2024-04-15 18:01:06.457139] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.659 [2024-04-15 18:01:06.457196] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.659 [2024-04-15 18:01:06.457257] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.659 [2024-04-15 18:01:06.457314] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.659 [2024-04-15 18:01:06.457386] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.659 [2024-04-15 18:01:06.457436] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.659 [2024-04-15 18:01:06.457491] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.659 [2024-04-15 18:01:06.457556] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.659 [2024-04-15 18:01:06.457603] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.659 [2024-04-15 18:01:06.457649] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.659 [2024-04-15 18:01:06.457696] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.659 [2024-04-15 18:01:06.457741] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.659 [2024-04-15 18:01:06.457797] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.659 [2024-04-15 18:01:06.457875] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.659 [2024-04-15 18:01:06.457936] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.659 [2024-04-15 18:01:06.457994] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.659 [2024-04-15 18:01:06.458053] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.659 [2024-04-15 18:01:06.458123] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.659 [2024-04-15 18:01:06.458182] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.659 [2024-04-15 18:01:06.458243] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.659 [2024-04-15 18:01:06.458303] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.659 [2024-04-15 18:01:06.458361] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.659 [2024-04-15 18:01:06.458420] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.659 [2024-04-15 18:01:06.458481] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.659 [2024-04-15 18:01:06.458555] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.659 [2024-04-15 18:01:06.458615] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.659 [2024-04-15 18:01:06.458675] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.659 [2024-04-15 18:01:06.458735] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.659 [2024-04-15 18:01:06.458790] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.659 [2024-04-15 18:01:06.458863] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.660 [2024-04-15 18:01:06.458926] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.660 [2024-04-15 18:01:06.458985] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.660 [2024-04-15 18:01:06.459042] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.660 [2024-04-15 18:01:06.459113] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.660 [2024-04-15 18:01:06.459180] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.660 [2024-04-15 18:01:06.459245] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.660 [2024-04-15 18:01:06.459310] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.660 [2024-04-15 18:01:06.459374] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.660 [2024-04-15 18:01:06.459438] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.660 [2024-04-15 18:01:06.459498] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.660 [2024-04-15 18:01:06.459575] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.660 [2024-04-15 18:01:06.459636] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.660 [2024-04-15 18:01:06.459696] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.660 
[2024-04-15 18:01:06.459758] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.660 [2024-04-15 18:01:06.459818] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.660 [2024-04-15 18:01:06.459879] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.660 [2024-04-15 18:01:06.459939] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.660 [2024-04-15 18:01:06.460011] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.660 [2024-04-15 18:01:06.460330] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.660 [2024-04-15 18:01:06.460424] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.660 [2024-04-15 18:01:06.460484] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.660 [2024-04-15 18:01:06.460541] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.660 [2024-04-15 18:01:06.460590] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.660 [2024-04-15 18:01:06.460650] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.660 [2024-04-15 18:01:06.460705] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.660 [2024-04-15 18:01:06.460781] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.660 [2024-04-15 18:01:06.460856] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.660 [2024-04-15 18:01:06.460911] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.660 [2024-04-15 18:01:06.460970] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.660 [2024-04-15 18:01:06.461031] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.660 [2024-04-15 18:01:06.461121] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.660 [2024-04-15 18:01:06.461188] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.660 [2024-04-15 18:01:06.461251] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.660 [2024-04-15 18:01:06.461314] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.660 [2024-04-15 18:01:06.461390] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.660 [2024-04-15 18:01:06.461453] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.660 [2024-04-15 18:01:06.461514] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.660 [2024-04-15 18:01:06.461572] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.660 [2024-04-15 18:01:06.461633] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:17.660 [2024-04-15 18:01:06.461697] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.660 [2024-04-15 18:01:06.461764] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.660 [2024-04-15 18:01:06.461828] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.660 [2024-04-15 18:01:06.461891] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.660 [2024-04-15 18:01:06.461949] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.660 [2024-04-15 18:01:06.462025] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.660 [2024-04-15 18:01:06.462103] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.660 [2024-04-15 18:01:06.462164] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.660 [2024-04-15 18:01:06.462223] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.660 [2024-04-15 18:01:06.462280] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.660 [2024-04-15 18:01:06.462335] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.660 [2024-04-15 18:01:06.462393] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.660 [2024-04-15 18:01:06.462470] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.660 [2024-04-15 18:01:06.462528] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.660 [2024-04-15 18:01:06.462583] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.660 [2024-04-15 18:01:06.462638] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.660 [2024-04-15 18:01:06.462695] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.660 [2024-04-15 18:01:06.462769] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.660 [2024-04-15 18:01:06.462828] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.660 [2024-04-15 18:01:06.462887] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.660 [2024-04-15 18:01:06.462947] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.660 [2024-04-15 18:01:06.463002] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.660 [2024-04-15 18:01:06.463071] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.660 [2024-04-15 18:01:06.463150] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.660 [2024-04-15 18:01:06.463213] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.660 [2024-04-15 18:01:06.463264] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.660 [2024-04-15 18:01:06.463323] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.660 [2024-04-15 18:01:06.463395] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.660 [2024-04-15 18:01:06.463454] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.660 [2024-04-15 18:01:06.463516] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.660 [2024-04-15 18:01:06.463587] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.660 [2024-04-15 18:01:06.463657] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.660 [2024-04-15 18:01:06.463718] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.660 [2024-04-15 18:01:06.463776] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.660 [2024-04-15 18:01:06.463826] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.660 [2024-04-15 18:01:06.463884] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.660 [2024-04-15 18:01:06.463963] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.660 [2024-04-15 18:01:06.464022] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.660 [2024-04-15 18:01:06.464115] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.660 [2024-04-15 18:01:06.464179] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.660 [2024-04-15 18:01:06.464234] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.660 [2024-04-15 18:01:06.464292] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.660 [2024-04-15 18:01:06.464348] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.660 [2024-04-15 18:01:06.464591] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.660 [2024-04-15 18:01:06.464655] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.660 [2024-04-15 18:01:06.464711] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.660 [2024-04-15 18:01:06.464769] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.660 [2024-04-15 18:01:06.464847] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.660 [2024-04-15 18:01:06.464912] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.660 [2024-04-15 18:01:06.464989] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.660 [2024-04-15 18:01:06.465447] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.660 
[2024-04-15 18:01:06.465530] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... the same ctrlr_bdev.c:298 *ERROR* record repeats several hundred times (18:01:06.465594 through 18:01:06.502791, elapsed 00:14:17.660-00:14:17.666); duplicates elided, the non-duplicate records from that window kept below ...]
00:14:17.661 true
00:14:17.663 18:01:06 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3272708
00:14:17.663 18:01:06 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:17.663 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
[2024-04-15 18:01:06.502847] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.666 [2024-04-15 18:01:06.502908] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.666 [2024-04-15 18:01:06.502968] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.666 [2024-04-15 18:01:06.503026] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.666 [2024-04-15 18:01:06.503111] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.666 [2024-04-15 18:01:06.503178] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.666 [2024-04-15 18:01:06.503247] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.666 [2024-04-15 18:01:06.503474] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.666 [2024-04-15 18:01:06.503563] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.666 [2024-04-15 18:01:06.503622] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.666 [2024-04-15 18:01:06.503685] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.666 [2024-04-15 18:01:06.503753] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.666 [2024-04-15 18:01:06.503819] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.666 [2024-04-15 18:01:06.503884] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.666 [2024-04-15 18:01:06.503958] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.666 [2024-04-15 18:01:06.504017] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.666 [2024-04-15 18:01:06.504100] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.666 [2024-04-15 18:01:06.504159] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.666 [2024-04-15 18:01:06.504217] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.666 [2024-04-15 18:01:06.504277] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.666 [2024-04-15 18:01:06.504329] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.666 [2024-04-15 18:01:06.504398] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.666 [2024-04-15 18:01:06.504454] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.666 [2024-04-15 18:01:06.504513] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.666 [2024-04-15 18:01:06.505046] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.666 [2024-04-15 18:01:06.505138] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:17.666 [2024-04-15 18:01:06.505201] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.666 [2024-04-15 18:01:06.505264] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.666 [2024-04-15 18:01:06.505331] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.666 [2024-04-15 18:01:06.505410] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.666 [2024-04-15 18:01:06.505483] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.666 [2024-04-15 18:01:06.505546] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.666 [2024-04-15 18:01:06.505606] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.666 [2024-04-15 18:01:06.505668] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.666 [2024-04-15 18:01:06.505730] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.666 [2024-04-15 18:01:06.505804] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.666 [2024-04-15 18:01:06.505876] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.666 [2024-04-15 18:01:06.505934] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.666 [2024-04-15 18:01:06.505993] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.666 [2024-04-15 18:01:06.506073] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.666 [2024-04-15 18:01:06.506146] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.666 [2024-04-15 18:01:06.506213] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.666 [2024-04-15 18:01:06.506278] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.666 [2024-04-15 18:01:06.506358] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.666 [2024-04-15 18:01:06.506418] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.666 [2024-04-15 18:01:06.506477] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.666 [2024-04-15 18:01:06.506538] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.666 [2024-04-15 18:01:06.506599] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.666 [2024-04-15 18:01:06.506662] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.666 [2024-04-15 18:01:06.506726] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.666 [2024-04-15 18:01:06.506785] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.666 [2024-04-15 18:01:06.506841] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.666 [2024-04-15 18:01:06.506902] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.666 [2024-04-15 18:01:06.506959] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.666 [2024-04-15 18:01:06.507017] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.666 [2024-04-15 18:01:06.507102] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.666 [2024-04-15 18:01:06.507163] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.666 [2024-04-15 18:01:06.507222] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.666 [2024-04-15 18:01:06.507285] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.666 [2024-04-15 18:01:06.507361] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.666 [2024-04-15 18:01:06.507434] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.666 [2024-04-15 18:01:06.507512] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.666 [2024-04-15 18:01:06.507570] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.666 [2024-04-15 18:01:06.507624] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.666 [2024-04-15 18:01:06.507682] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.666 [2024-04-15 18:01:06.507737] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.666 [2024-04-15 18:01:06.507811] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.666 [2024-04-15 18:01:06.507888] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.666 [2024-04-15 18:01:06.507945] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.666 [2024-04-15 18:01:06.508010] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.667 [2024-04-15 18:01:06.508099] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.667 [2024-04-15 18:01:06.508162] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.667 [2024-04-15 18:01:06.508227] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.667 [2024-04-15 18:01:06.508281] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.667 [2024-04-15 18:01:06.508340] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.667 [2024-04-15 18:01:06.508417] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.667 [2024-04-15 18:01:06.508472] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.667 
[2024-04-15 18:01:06.508541] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.667 [2024-04-15 18:01:06.508614] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.667 [2024-04-15 18:01:06.508670] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.667 [2024-04-15 18:01:06.508728] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.667 [2024-04-15 18:01:06.508784] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.667 [2024-04-15 18:01:06.508837] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.667 [2024-04-15 18:01:06.508910] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.667 [2024-04-15 18:01:06.508985] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.667 [2024-04-15 18:01:06.509068] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.667 [2024-04-15 18:01:06.509131] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.667 [2024-04-15 18:01:06.509482] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.667 [2024-04-15 18:01:06.509548] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.667 [2024-04-15 18:01:06.509626] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.667 [2024-04-15 18:01:06.509701] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.667 [2024-04-15 18:01:06.509757] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.667 [2024-04-15 18:01:06.509816] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.667 [2024-04-15 18:01:06.509874] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.667 [2024-04-15 18:01:06.509937] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.667 [2024-04-15 18:01:06.510014] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.667 [2024-04-15 18:01:06.510112] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.667 [2024-04-15 18:01:06.510175] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.667 [2024-04-15 18:01:06.510240] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.667 [2024-04-15 18:01:06.510300] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.667 [2024-04-15 18:01:06.510360] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.667 [2024-04-15 18:01:06.510437] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.667 [2024-04-15 18:01:06.510496] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:17.667 [2024-04-15 18:01:06.510555] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.667 [2024-04-15 18:01:06.510616] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.667 [2024-04-15 18:01:06.510687] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.667 [2024-04-15 18:01:06.510757] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.667 [2024-04-15 18:01:06.510824] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.667 [2024-04-15 18:01:06.510889] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.667 [2024-04-15 18:01:06.510949] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.667 [2024-04-15 18:01:06.511002] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.667 [2024-04-15 18:01:06.511087] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.667 [2024-04-15 18:01:06.511151] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.667 [2024-04-15 18:01:06.511212] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.667 [2024-04-15 18:01:06.511274] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.667 [2024-04-15 18:01:06.511348] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.667 [2024-04-15 18:01:06.511409] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.667 [2024-04-15 18:01:06.511467] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.667 [2024-04-15 18:01:06.511532] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.667 [2024-04-15 18:01:06.511585] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.667 [2024-04-15 18:01:06.511645] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.667 [2024-04-15 18:01:06.511706] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.667 [2024-04-15 18:01:06.511766] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.667 [2024-04-15 18:01:06.511830] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.667 [2024-04-15 18:01:06.511892] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.667 [2024-04-15 18:01:06.511962] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.667 [2024-04-15 18:01:06.512022] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.667 [2024-04-15 18:01:06.512101] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.667 [2024-04-15 18:01:06.512169] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.667 [2024-04-15 18:01:06.512228] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.667 [2024-04-15 18:01:06.512288] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.667 [2024-04-15 18:01:06.512348] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.667 [2024-04-15 18:01:06.512422] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.667 [2024-04-15 18:01:06.512475] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.667 [2024-04-15 18:01:06.512532] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.667 [2024-04-15 18:01:06.512587] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.667 [2024-04-15 18:01:06.512642] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.667 [2024-04-15 18:01:06.512696] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.667 [2024-04-15 18:01:06.512756] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.667 [2024-04-15 18:01:06.512815] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.667 [2024-04-15 18:01:06.512877] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.667 [2024-04-15 18:01:06.512940] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.667 [2024-04-15 18:01:06.513001] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.667 [2024-04-15 18:01:06.513093] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.667 [2024-04-15 18:01:06.513158] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.667 [2024-04-15 18:01:06.513220] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.667 [2024-04-15 18:01:06.513283] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.667 [2024-04-15 18:01:06.513361] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.667 [2024-04-15 18:01:06.513425] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.667 [2024-04-15 18:01:06.513488] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.667 [2024-04-15 18:01:06.513546] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.667 [2024-04-15 18:01:06.514353] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.667 [2024-04-15 18:01:06.514417] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.667 [2024-04-15 18:01:06.514494] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.667 
[2024-04-15 18:01:06.514568] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.667 [2024-04-15 18:01:06.514627] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.667 [2024-04-15 18:01:06.514689] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.667 [2024-04-15 18:01:06.514741] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.667 [2024-04-15 18:01:06.514801] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.667 [2024-04-15 18:01:06.514875] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.667 [2024-04-15 18:01:06.514957] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.667 [2024-04-15 18:01:06.515023] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.667 [2024-04-15 18:01:06.515088] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.667 [2024-04-15 18:01:06.515150] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.667 [2024-04-15 18:01:06.515207] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.667 [2024-04-15 18:01:06.515268] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.667 [2024-04-15 18:01:06.515322] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.667 [2024-04-15 18:01:06.515383] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.667 [2024-04-15 18:01:06.515444] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.667 [2024-04-15 18:01:06.515501] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.667 [2024-04-15 18:01:06.515571] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.667 [2024-04-15 18:01:06.515652] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.667 [2024-04-15 18:01:06.515709] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.667 [2024-04-15 18:01:06.515781] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.667 [2024-04-15 18:01:06.515844] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.667 [2024-04-15 18:01:06.515900] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.667 [2024-04-15 18:01:06.515972] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.667 [2024-04-15 18:01:06.516048] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.667 [2024-04-15 18:01:06.516134] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.667 [2024-04-15 18:01:06.516192] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:17.667 [2024-04-15 18:01:06.516251] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.667 [2024-04-15 18:01:06.516307] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.667 [2024-04-15 18:01:06.516368] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.668 [2024-04-15 18:01:06.516441] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.668 [2024-04-15 18:01:06.516494] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.668 [2024-04-15 18:01:06.516550] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.668 [2024-04-15 18:01:06.516610] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.668 [2024-04-15 18:01:06.516669] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.668 [2024-04-15 18:01:06.516741] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.668 [2024-04-15 18:01:06.516816] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.668 [2024-04-15 18:01:06.516877] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.668 [2024-04-15 18:01:06.516932] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.668 [2024-04-15 18:01:06.516999] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.668 [2024-04-15 18:01:06.517067] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.668 [2024-04-15 18:01:06.517128] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.668 [2024-04-15 18:01:06.517188] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.668 [2024-04-15 18:01:06.517250] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.668 [2024-04-15 18:01:06.517313] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.668 [2024-04-15 18:01:06.517372] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.668 [2024-04-15 18:01:06.517433] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.668 [2024-04-15 18:01:06.517493] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.668 [2024-04-15 18:01:06.517550] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.668 [2024-04-15 18:01:06.517610] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.668 [2024-04-15 18:01:06.517671] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.668 [2024-04-15 18:01:06.517729] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.668 [2024-04-15 18:01:06.517787] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.668 [2024-04-15 18:01:06.517848] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.668 [2024-04-15 18:01:06.517907] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.668 [2024-04-15 18:01:06.517966] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.668 [2024-04-15 18:01:06.518032] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.668 [2024-04-15 18:01:06.518095] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.668 [2024-04-15 18:01:06.518160] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.668 [2024-04-15 18:01:06.518228] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.668 [2024-04-15 18:01:06.518295] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.668 [2024-04-15 18:01:06.518374] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.668 [2024-04-15 18:01:06.518580] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.668 [2024-04-15 18:01:06.518643] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.668 [2024-04-15 18:01:06.518702] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.668 [2024-04-15 18:01:06.518778] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.668 [2024-04-15 18:01:06.518853] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.668 [2024-04-15 18:01:06.518916] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.668 [2024-04-15 18:01:06.518982] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.668 [2024-04-15 18:01:06.519040] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.668 [2024-04-15 18:01:06.519102] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.668 [2024-04-15 18:01:06.519152] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.668 [2024-04-15 18:01:06.519212] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.668 [2024-04-15 18:01:06.519270] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.668 [2024-04-15 18:01:06.519326] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.668 [2024-04-15 18:01:06.519383] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.668 [2024-04-15 18:01:06.519441] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.668 [2024-04-15 18:01:06.519532] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.668 
[2024-04-15 18:01:06.520032] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.668 [2024-04-15 18:01:06.520123] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.668 [2024-04-15 18:01:06.520183] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.668 [2024-04-15 18:01:06.520242] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.668 [2024-04-15 18:01:06.520300] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.668 [2024-04-15 18:01:06.520378] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.668 [2024-04-15 18:01:06.520427] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.668 [2024-04-15 18:01:06.520482] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.668 [2024-04-15 18:01:06.520538] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.668 [2024-04-15 18:01:06.520629] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.668 [2024-04-15 18:01:06.520690] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.668 [2024-04-15 18:01:06.520746] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.668 [2024-04-15 18:01:06.520808] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.668 [2024-04-15 18:01:06.520870] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.668 [2024-04-15 18:01:06.520931] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.668 [2024-04-15 18:01:06.521023] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.668 [2024-04-15 18:01:06.521092] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.668 [2024-04-15 18:01:06.521150] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.668 [2024-04-15 18:01:06.521215] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.668 [2024-04-15 18:01:06.521282] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.668 [2024-04-15 18:01:06.521347] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.668 [2024-04-15 18:01:06.521409] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.668 [2024-04-15 18:01:06.521476] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.668 [2024-04-15 18:01:06.521540] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.668 [2024-04-15 18:01:06.521604] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.668 [2024-04-15 18:01:06.521666] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:17.668 [2024-04-15 18:01:06.521730] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.668 [2024-04-15 18:01:06.521792] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.668 [2024-04-15 18:01:06.521851] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.668 [2024-04-15 18:01:06.521908] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.668 [2024-04-15 18:01:06.521966] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.668 [2024-04-15 18:01:06.522023] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.668 [2024-04-15 18:01:06.522089] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.668 [2024-04-15 18:01:06.522147] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.668 [2024-04-15 18:01:06.522216] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.668 [2024-04-15 18:01:06.522272] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.668 [2024-04-15 18:01:06.522329] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.668 [2024-04-15 18:01:06.522387] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.668 [2024-04-15 18:01:06.522443] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.668 [2024-04-15 18:01:06.522501] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.668 [2024-04-15 18:01:06.522556] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.668 [2024-04-15 18:01:06.522606] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.668 [2024-04-15 18:01:06.522667] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.668 [2024-04-15 18:01:06.522729] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.668 [2024-04-15 18:01:06.522799] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.668 [2024-04-15 18:01:06.522862] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.668 [2024-04-15 18:01:06.522919] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.668 [2024-04-15 18:01:06.522979] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.668 [2024-04-15 18:01:06.523035] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.668 [2024-04-15 18:01:06.523099] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.668 [2024-04-15 18:01:06.523161] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.669 [2024-04-15 18:01:06.523216] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.669 [2024-04-15 18:01:06.523268] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.669 [2024-04-15 18:01:06.523327] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.669 [2024-04-15 18:01:06.523383] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.669 [2024-04-15 18:01:06.523444] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.669 [2024-04-15 18:01:06.523505] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.669 [2024-04-15 18:01:06.523566] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.669 [2024-04-15 18:01:06.523626] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.669 [2024-04-15 18:01:06.523685] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.669 [2024-04-15 18:01:06.523755] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.669 [2024-04-15 18:01:06.523816] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.669 [2024-04-15 18:01:06.523877] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.669 [2024-04-15 18:01:06.523937] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.669 [2024-04-15 18:01:06.524282] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.669 [2024-04-15 18:01:06.524347] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.669 [2024-04-15 18:01:06.524424] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.669 [2024-04-15 18:01:06.524484] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.669 [2024-04-15 18:01:06.524559] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.669 [2024-04-15 18:01:06.524634] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.669 [2024-04-15 18:01:06.524696] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.669 [2024-04-15 18:01:06.524755] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.669 [2024-04-15 18:01:06.524813] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.669 [2024-04-15 18:01:06.524875] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.669 [2024-04-15 18:01:06.524948] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.669 [2024-04-15 18:01:06.525024] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.669 [2024-04-15 18:01:06.525109] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.669 
[2024-04-15 18:01:06.525177] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.669 [2024-04-15 18:01:06.525247] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.669 [2024-04-15 18:01:06.525315] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.669 [2024-04-15 18:01:06.525398] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.669 [2024-04-15 18:01:06.525460] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.669 [2024-04-15 18:01:06.525523] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.669 [2024-04-15 18:01:06.525581] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.669 [2024-04-15 18:01:06.525637] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.669 [2024-04-15 18:01:06.525713] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.669 [2024-04-15 18:01:06.525785] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.669 [2024-04-15 18:01:06.525844] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.669 [2024-04-15 18:01:06.525904] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.669 [2024-04-15 18:01:06.525963] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.669 [2024-04-15 18:01:06.526025] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.669 [2024-04-15 18:01:06.526089] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.669 [2024-04-15 18:01:06.526157] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.669 [2024-04-15 18:01:06.526214] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.669 [2024-04-15 18:01:06.526271] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.669 [2024-04-15 18:01:06.526330] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.669 [2024-04-15 18:01:06.526407] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.669 [2024-04-15 18:01:06.526481] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.669 [2024-04-15 18:01:06.526539] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.669 [2024-04-15 18:01:06.526596] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.669 [2024-04-15 18:01:06.526653] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.669 [2024-04-15 18:01:06.526711] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.669 [2024-04-15 18:01:06.526788] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:17.669 [2024-04-15 18:01:06.526863] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.669 [2024-04-15 18:01:06.526921] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.669 [2024-04-15 18:01:06.526978] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.669 [2024-04-15 18:01:06.527036] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.669 [2024-04-15 18:01:06.527100] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.669 [2024-04-15 18:01:06.527151] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.669 [2024-04-15 18:01:06.527207] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.669 [2024-04-15 18:01:06.527262] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.669 [2024-04-15 18:01:06.527321] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.669 [2024-04-15 18:01:06.527383] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.669 [2024-04-15 18:01:06.527443] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.669 [2024-04-15 18:01:06.527511] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.669 [2024-04-15 18:01:06.527590] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.669 [2024-04-15 18:01:06.527650] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.669 [2024-04-15 18:01:06.527705] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.669 [2024-04-15 18:01:06.527762] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.669 [2024-04-15 18:01:06.527819] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.669 [2024-04-15 18:01:06.527897] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.669 [2024-04-15 18:01:06.527969] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.669 [2024-04-15 18:01:06.528028] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.669 [2024-04-15 18:01:06.528114] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.669 [2024-04-15 18:01:06.528177] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.669 [2024-04-15 18:01:06.528241] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.669 [2024-04-15 18:01:06.528307] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.669 [2024-04-15 18:01:06.529169] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.669 [2024-04-15 18:01:06.529234] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.669 
Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:14:17.944 
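The repeated *ERROR* line above is the ctrlr_bdev unit test deliberately driving the read-command length check: a read of NLB blocks is rejected when NLB * block size exceeds the SGL length the host supplied for the transfer, and each rejected read then completes with sct=0, sc=15 (the suppressed completion message). Below is a minimal standalone sketch of that check; the function and macro names are illustrative assumptions, not SPDK's actual identifiers.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Illustrative stand-ins for the NVMe status values seen in the log:
 * sct=0 is the generic status code type, sc=15 (0xf) is the
 * "data SGL length invalid" status code. */
#define SCT_GENERIC                0x0
#define SC_DATA_SGL_LENGTH_INVALID 0xf

/* Sketch of the bounds check behind the repeated *ERROR* line: reject a
 * read when NLB * block size exceeds the SGL length of the request. */
static int
read_cmd_check(uint64_t num_blocks, uint32_t block_size, uint32_t sgl_length,
               uint8_t *sct, uint8_t *sc)
{
	if (num_blocks * (uint64_t)block_size > sgl_length) {
		fprintf(stderr,
		        "Read NLB %" PRIu64 " * block size %" PRIu32
		        " > SGL length %" PRIu32 "\n",
		        num_blocks, block_size, sgl_length);
		*sct = SCT_GENERIC;
		*sc = SC_DATA_SGL_LENGTH_INVALID;
		return -1;
	}
	return 0;
}

int
main(void)
{
	uint8_t sct, sc;

	/* The exact case the unit test drives: NLB 1 with 512-byte blocks,
	 * but only 1 byte of SGL, so 512 > 1 and the read is rejected. */
	if (read_cmd_check(1, 512, 1, &sct, &sc) != 0) {
		printf("Read completed with error (sct=%u, sc=%u)\n",
		       (unsigned)sct, (unsigned)sc);
	}
	return 0;
}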
[2024-04-15 18:01:06.567399] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:17.946 [2024-04-15 18:01:06.567474] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.946 [2024-04-15 18:01:06.567553] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.946 [2024-04-15 18:01:06.567611] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.946 [2024-04-15 18:01:06.567941] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.946 [2024-04-15 18:01:06.568003] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.946 [2024-04-15 18:01:06.568085] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.946 [2024-04-15 18:01:06.568139] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.946 [2024-04-15 18:01:06.568204] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.946 [2024-04-15 18:01:06.568260] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.946 [2024-04-15 18:01:06.568325] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.946 [2024-04-15 18:01:06.568399] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.946 [2024-04-15 18:01:06.568457] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.946 [2024-04-15 18:01:06.568527] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.946 [2024-04-15 18:01:06.568605] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.946 [2024-04-15 18:01:06.568667] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.946 [2024-04-15 18:01:06.568721] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.946 [2024-04-15 18:01:06.568774] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.946 [2024-04-15 18:01:06.568838] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.946 [2024-04-15 18:01:06.568919] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.946 [2024-04-15 18:01:06.568994] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.946 [2024-04-15 18:01:06.569054] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.946 [2024-04-15 18:01:06.569120] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.946 [2024-04-15 18:01:06.569172] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.946 [2024-04-15 18:01:06.569226] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.946 [2024-04-15 18:01:06.569284] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.946 [2024-04-15 18:01:06.569341] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.946 [2024-04-15 18:01:06.569400] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.946 [2024-04-15 18:01:06.569457] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.946 [2024-04-15 18:01:06.569517] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.946 [2024-04-15 18:01:06.569573] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.946 [2024-04-15 18:01:06.569641] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.946 [2024-04-15 18:01:06.569716] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.946 [2024-04-15 18:01:06.569776] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.946 [2024-04-15 18:01:06.569835] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.946 [2024-04-15 18:01:06.569895] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.946 [2024-04-15 18:01:06.569953] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.946 [2024-04-15 18:01:06.570027] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.946 [2024-04-15 18:01:06.570110] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.946 [2024-04-15 18:01:06.570179] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.946 [2024-04-15 18:01:06.570240] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.946 [2024-04-15 18:01:06.570303] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.946 [2024-04-15 18:01:06.570364] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.946 [2024-04-15 18:01:06.570426] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.946 [2024-04-15 18:01:06.570487] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.946 [2024-04-15 18:01:06.570543] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.946 [2024-04-15 18:01:06.570603] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.946 [2024-04-15 18:01:06.570661] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.946 [2024-04-15 18:01:06.570724] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.946 [2024-04-15 18:01:06.570783] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.946 [2024-04-15 18:01:06.570840] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.946 [2024-04-15 18:01:06.570903] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.946 
[2024-04-15 18:01:06.570963] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.946 [2024-04-15 18:01:06.571020] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.946 [2024-04-15 18:01:06.571089] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.946 [2024-04-15 18:01:06.571153] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.946 [2024-04-15 18:01:06.571212] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.946 [2024-04-15 18:01:06.571269] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.946 [2024-04-15 18:01:06.571331] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.946 [2024-04-15 18:01:06.571391] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.946 [2024-04-15 18:01:06.571442] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.946 [2024-04-15 18:01:06.571500] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.946 [2024-04-15 18:01:06.571557] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.946 [2024-04-15 18:01:06.571619] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.946 [2024-04-15 18:01:06.571680] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.946 [2024-04-15 18:01:06.571742] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.946 [2024-04-15 18:01:06.571805] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.946 [2024-04-15 18:01:06.572659] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.946 [2024-04-15 18:01:06.572725] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.946 [2024-04-15 18:01:06.572785] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.946 [2024-04-15 18:01:06.572859] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.946 [2024-04-15 18:01:06.572915] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.946 [2024-04-15 18:01:06.572969] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.946 [2024-04-15 18:01:06.573022] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.946 [2024-04-15 18:01:06.573086] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.946 [2024-04-15 18:01:06.573145] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.946 [2024-04-15 18:01:06.573203] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.946 [2024-04-15 18:01:06.573257] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:17.946 [2024-04-15 18:01:06.573317] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.946 [2024-04-15 18:01:06.573379] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.946 [2024-04-15 18:01:06.573435] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.946 [2024-04-15 18:01:06.573498] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.946 [2024-04-15 18:01:06.573553] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.946 [2024-04-15 18:01:06.573612] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.946 [2024-04-15 18:01:06.573668] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.946 [2024-04-15 18:01:06.573723] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.946 [2024-04-15 18:01:06.573780] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.573835] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.573895] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.573953] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.574008] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.574070] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.574132] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.574191] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.574248] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.574305] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.574363] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.574418] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.574478] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.574551] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.574626] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.574686] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.574743] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.574800] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.574861] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.574933] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.575006] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.575075] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.575137] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.575198] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.575258] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.575318] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.575381] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.575439] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.575503] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.575564] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.575637] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.575709] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.575767] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.575823] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.575881] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.575941] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.576013] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.576102] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.576156] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.576216] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.576278] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.576337] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.576413] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 
[2024-04-15 18:01:06.576476] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.576531] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.576796] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.576872] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.576934] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.576989] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.577046] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.577114] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.577174] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.577236] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.577301] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.577372] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.577437] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.577526] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.577589] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.577651] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.577711] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.577771] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.577854] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.577931] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.577989] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.578046] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.578114] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.578174] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.578239] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.578297] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.578353] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.578413] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.578479] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.578552] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.578621] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.578686] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.578745] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.578805] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.578866] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.578940] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.579014] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.579084] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.579147] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.579210] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.579274] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.579336] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.579396] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.579457] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.579516] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.579566] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.579638] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.579711] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.579771] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.579828] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.579887] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.579954] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.580029] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.580105] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.580168] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.580228] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.580290] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.580350] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.580425] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.580480] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.580540] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.580597] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.580653] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.580715] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.580771] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.581608] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.581676] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.581736] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.581823] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.581882] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.581940] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.581997] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.947 [2024-04-15 18:01:06.582067] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.948 [2024-04-15 18:01:06.582133] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.948 [2024-04-15 18:01:06.582198] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.948 [2024-04-15 18:01:06.582258] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.948 [2024-04-15 18:01:06.582320] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.948 
[2024-04-15 18:01:06.582381] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.948 [2024-04-15 18:01:06.582440] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.948 [2024-04-15 18:01:06.582528] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.948 [2024-04-15 18:01:06.582591] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.948 [2024-04-15 18:01:06.582649] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.948 [2024-04-15 18:01:06.582710] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.948 [2024-04-15 18:01:06.582780] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.948 [2024-04-15 18:01:06.582860] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.948 [2024-04-15 18:01:06.582937] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.948 [2024-04-15 18:01:06.582989] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.948 [2024-04-15 18:01:06.583044] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.948 [2024-04-15 18:01:06.583110] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.948 [2024-04-15 18:01:06.583172] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.948 [2024-04-15 18:01:06.583234] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.948 [2024-04-15 18:01:06.583293] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.948 [2024-04-15 18:01:06.583354] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.948 [2024-04-15 18:01:06.583409] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.948 [2024-04-15 18:01:06.583468] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.948 [2024-04-15 18:01:06.583523] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.948 [2024-04-15 18:01:06.583594] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.948 [2024-04-15 18:01:06.583665] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.948 [2024-04-15 18:01:06.583723] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.948 [2024-04-15 18:01:06.583780] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.948 [2024-04-15 18:01:06.583844] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.948 [2024-04-15 18:01:06.583906] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.948 [2024-04-15 18:01:06.584000] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:17.948 [2024-04-15 18:01:06.584091] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.948 [2024-04-15 18:01:06.584145] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.948 [2024-04-15 18:01:06.584199] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.948 [2024-04-15 18:01:06.584263] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.948 [2024-04-15 18:01:06.584327] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.948 [2024-04-15 18:01:06.584401] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.948 [2024-04-15 18:01:06.584459] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.948 [2024-04-15 18:01:06.584518] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.948 [2024-04-15 18:01:06.584575] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.948 [2024-04-15 18:01:06.584634] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.948 [2024-04-15 18:01:06.584687] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.948 [2024-04-15 18:01:06.584743] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.948 [2024-04-15 18:01:06.584797] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.948 [2024-04-15 18:01:06.584856] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.948 [2024-04-15 18:01:06.584915] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.948 [2024-04-15 18:01:06.584974] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.948 [2024-04-15 18:01:06.585032] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.948 [2024-04-15 18:01:06.585099] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.948 [2024-04-15 18:01:06.585161] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.948 [2024-04-15 18:01:06.585221] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.948 [2024-04-15 18:01:06.585280] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.948 [2024-04-15 18:01:06.585336] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.948 [2024-04-15 18:01:06.585396] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.948 [2024-04-15 18:01:06.585459] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.948 [2024-04-15 18:01:06.585519] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.948 [2024-04-15 18:01:06.585583] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.948 [2024-04-15 18:01:06.585794] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.948 [2024-04-15 18:01:06.585855] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.948 [2024-04-15 18:01:06.585914] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.948 [2024-04-15 18:01:06.585970] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.948 [2024-04-15 18:01:06.586028] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.948 [2024-04-15 18:01:06.586096] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.948 [2024-04-15 18:01:06.586155] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.948 [2024-04-15 18:01:06.586216] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.948 [2024-04-15 18:01:06.586278] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.948 [2024-04-15 18:01:06.586336] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.948 [2024-04-15 18:01:06.586394] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.948 [2024-04-15 18:01:06.586456] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.948 [2024-04-15 18:01:06.586512] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.948 [2024-04-15 18:01:06.586561] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.948 [2024-04-15 18:01:06.586620] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.948 [2024-04-15 18:01:06.586671] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.948 [2024-04-15 18:01:06.586731] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.948 [2024-04-15 18:01:06.587242] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.948 [2024-04-15 18:01:06.587306] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.948 [2024-04-15 18:01:06.587364] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.948 [2024-04-15 18:01:06.587426] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.948 [2024-04-15 18:01:06.587484] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.948 [2024-04-15 18:01:06.587548] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.948 [2024-04-15 18:01:06.587618] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.948 [2024-04-15 18:01:06.587675] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.948 
[2024-04-15 18:01:06.587738] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.948 [2024-04-15 18:01:06.587798] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.948 [2024-04-15 18:01:06.587852] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.948 [2024-04-15 18:01:06.587926] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.948 [2024-04-15 18:01:06.587999] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.948 [2024-04-15 18:01:06.588076] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.948 [2024-04-15 18:01:06.588138] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.948 [2024-04-15 18:01:06.588196] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.948 [2024-04-15 18:01:06.588258] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.948 [2024-04-15 18:01:06.588320] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.948 [2024-04-15 18:01:06.588395] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.948 [2024-04-15 18:01:06.588452] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.948 [2024-04-15 18:01:06.588508] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.949 [2024-04-15 18:01:06.588565] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.949 [2024-04-15 18:01:06.588640] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.949 [2024-04-15 18:01:06.588712] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.949 [2024-04-15 18:01:06.588781] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.949 [2024-04-15 18:01:06.588839] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.949 [2024-04-15 18:01:06.588889] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.949 [2024-04-15 18:01:06.588949] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.949 [2024-04-15 18:01:06.589019] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.949 [2024-04-15 18:01:06.589098] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.949 [2024-04-15 18:01:06.589157] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.949 [2024-04-15 18:01:06.589215] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.949 [2024-04-15 18:01:06.589274] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.949 [2024-04-15 18:01:06.589333] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:17.949 [2024-04-15 18:01:06.589407] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.949 [2024-04-15 18:01:06.589481] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.949 [2024-04-15 18:01:06.589549] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.949 [2024-04-15 18:01:06.589610] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.949 [2024-04-15 18:01:06.589674] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.949 [2024-04-15 18:01:06.589733] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.949 [2024-04-15 18:01:06.589806] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.949 [2024-04-15 18:01:06.589883] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.949 [2024-04-15 18:01:06.589946] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.949 [2024-04-15 18:01:06.590005] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.949 [2024-04-15 18:01:06.590071] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.949 [2024-04-15 18:01:06.590154] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.949 [2024-04-15 18:01:06.590222] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.949 [2024-04-15 18:01:06.590292] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.949 [2024-04-15 18:01:06.590359] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.949 [2024-04-15 18:01:06.590434] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.949 [2024-04-15 18:01:06.590494] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.949 [2024-04-15 18:01:06.590554] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.949 [2024-04-15 18:01:06.590612] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.949 [2024-04-15 18:01:06.590669] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.949 [2024-04-15 18:01:06.590728] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.949 [2024-04-15 18:01:06.590790] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.949 [2024-04-15 18:01:06.590843] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.949 [2024-04-15 18:01:06.590923] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.949 [2024-04-15 18:01:06.590984] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.949 [2024-04-15 18:01:06.591042] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... the same ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd *ERROR* line repeated verbatim, timestamps 2024-04-15 18:01:06.591106 through 18:01:06.629670 (pipeline time 00:14:17.949-00:14:17.954); identical duplicates omitted ...]
00:14:17.954 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
[... the same *ERROR* line repeated verbatim, timestamps 18:01:06.629731 through 18:01:06.630793; identical duplicates omitted ...]
length 1 00:14:17.954 [2024-04-15 18:01:06.630853] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.954 [2024-04-15 18:01:06.630910] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.954 [2024-04-15 18:01:06.630963] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.954 [2024-04-15 18:01:06.631024] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.954 [2024-04-15 18:01:06.631131] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.954 [2024-04-15 18:01:06.631207] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.954 [2024-04-15 18:01:06.631267] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.954 [2024-04-15 18:01:06.631322] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.954 [2024-04-15 18:01:06.631394] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.954 [2024-04-15 18:01:06.632190] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.954 [2024-04-15 18:01:06.632273] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.954 [2024-04-15 18:01:06.632347] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.954 [2024-04-15 18:01:06.632427] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.954 [2024-04-15 18:01:06.632486] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.954 [2024-04-15 18:01:06.632546] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.954 [2024-04-15 18:01:06.632606] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.954 [2024-04-15 18:01:06.632664] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.954 [2024-04-15 18:01:06.632725] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.954 [2024-04-15 18:01:06.632784] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.954 [2024-04-15 18:01:06.632847] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.954 [2024-04-15 18:01:06.632909] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.954 [2024-04-15 18:01:06.632967] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.954 [2024-04-15 18:01:06.633029] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.954 [2024-04-15 18:01:06.633122] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.954 [2024-04-15 18:01:06.633184] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.954 [2024-04-15 18:01:06.633248] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:14:17.954 [2024-04-15 18:01:06.633310] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.954 [2024-04-15 18:01:06.633396] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.954 [2024-04-15 18:01:06.633459] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.954 [2024-04-15 18:01:06.633521] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.954 [2024-04-15 18:01:06.633582] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.954 [2024-04-15 18:01:06.633640] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.954 [2024-04-15 18:01:06.633712] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.954 [2024-04-15 18:01:06.633769] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.954 [2024-04-15 18:01:06.633824] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.954 [2024-04-15 18:01:06.633883] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.954 [2024-04-15 18:01:06.633940] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.954 [2024-04-15 18:01:06.633997] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.954 [2024-04-15 18:01:06.634078] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.954 [2024-04-15 18:01:06.634149] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.954 [2024-04-15 18:01:06.634218] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.954 [2024-04-15 18:01:06.634280] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.954 [2024-04-15 18:01:06.634359] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.954 [2024-04-15 18:01:06.634428] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.954 [2024-04-15 18:01:06.634482] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.954 [2024-04-15 18:01:06.634542] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.954 [2024-04-15 18:01:06.634599] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.954 [2024-04-15 18:01:06.634654] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.954 [2024-04-15 18:01:06.634712] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.954 [2024-04-15 18:01:06.634773] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.954 [2024-04-15 18:01:06.634835] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.954 [2024-04-15 18:01:06.634895] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.954 [2024-04-15 18:01:06.634945] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.954 [2024-04-15 18:01:06.634996] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.954 [2024-04-15 18:01:06.635079] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.954 [2024-04-15 18:01:06.635142] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.954 [2024-04-15 18:01:06.635203] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.954 [2024-04-15 18:01:06.635263] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.954 [2024-04-15 18:01:06.635327] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.954 [2024-04-15 18:01:06.635404] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.954 [2024-04-15 18:01:06.635458] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.954 [2024-04-15 18:01:06.635515] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.954 [2024-04-15 18:01:06.635571] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.954 [2024-04-15 18:01:06.635625] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.954 [2024-04-15 18:01:06.635680] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.954 [2024-04-15 18:01:06.635737] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.954 [2024-04-15 18:01:06.635794] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.954 [2024-04-15 18:01:06.635849] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.954 [2024-04-15 18:01:06.635901] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.954 [2024-04-15 18:01:06.635959] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.954 [2024-04-15 18:01:06.636016] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.954 [2024-04-15 18:01:06.636101] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.954 [2024-04-15 18:01:06.636195] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.954 [2024-04-15 18:01:06.636439] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.954 [2024-04-15 18:01:06.636502] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.954 [2024-04-15 18:01:06.636559] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.954 [2024-04-15 18:01:06.636616] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.954 
[2024-04-15 18:01:06.636675] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.954 [2024-04-15 18:01:06.636730] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.954 [2024-04-15 18:01:06.636789] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.955 [2024-04-15 18:01:06.636849] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.955 [2024-04-15 18:01:06.636908] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.955 [2024-04-15 18:01:06.636966] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.955 [2024-04-15 18:01:06.637023] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.955 [2024-04-15 18:01:06.637116] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.955 [2024-04-15 18:01:06.637185] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.955 [2024-04-15 18:01:06.637252] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.955 [2024-04-15 18:01:06.637317] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.955 [2024-04-15 18:01:06.637391] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.955 [2024-04-15 18:01:06.637465] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.955 [2024-04-15 18:01:06.637521] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.955 [2024-04-15 18:01:06.637575] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.955 [2024-04-15 18:01:06.637635] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.955 [2024-04-15 18:01:06.637693] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.955 [2024-04-15 18:01:06.637754] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.955 [2024-04-15 18:01:06.637809] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.955 [2024-04-15 18:01:06.637865] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.955 [2024-04-15 18:01:06.637927] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.955 [2024-04-15 18:01:06.637986] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.955 [2024-04-15 18:01:06.638054] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.955 [2024-04-15 18:01:06.638121] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.955 [2024-04-15 18:01:06.638183] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.955 [2024-04-15 18:01:06.638786] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:17.955 [2024-04-15 18:01:06.638847] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.955 [2024-04-15 18:01:06.638903] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.955 [2024-04-15 18:01:06.638959] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.955 [2024-04-15 18:01:06.639020] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.955 [2024-04-15 18:01:06.639107] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.955 [2024-04-15 18:01:06.639176] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.955 [2024-04-15 18:01:06.639228] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.955 [2024-04-15 18:01:06.639289] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.955 [2024-04-15 18:01:06.639361] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.955 [2024-04-15 18:01:06.639433] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.955 [2024-04-15 18:01:06.639495] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.955 [2024-04-15 18:01:06.639550] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.955 [2024-04-15 18:01:06.639608] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.955 [2024-04-15 18:01:06.639663] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.955 [2024-04-15 18:01:06.639719] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.955 [2024-04-15 18:01:06.639773] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.955 [2024-04-15 18:01:06.639828] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.955 [2024-04-15 18:01:06.639883] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.955 [2024-04-15 18:01:06.639944] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.955 [2024-04-15 18:01:06.640006] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.955 [2024-04-15 18:01:06.640092] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.955 [2024-04-15 18:01:06.640156] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.955 [2024-04-15 18:01:06.640219] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.955 [2024-04-15 18:01:06.640281] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.955 [2024-04-15 18:01:06.640357] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.955 [2024-04-15 18:01:06.640420] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.955 [2024-04-15 18:01:06.640478] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.955 [2024-04-15 18:01:06.640541] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.955 [2024-04-15 18:01:06.640604] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.955 [2024-04-15 18:01:06.640665] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.955 [2024-04-15 18:01:06.640726] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.955 [2024-04-15 18:01:06.640783] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.955 [2024-04-15 18:01:06.640840] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.955 [2024-04-15 18:01:06.640899] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.955 [2024-04-15 18:01:06.640957] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.955 [2024-04-15 18:01:06.641016] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.955 [2024-04-15 18:01:06.641096] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.955 [2024-04-15 18:01:06.641157] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.955 [2024-04-15 18:01:06.641217] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.955 [2024-04-15 18:01:06.641276] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.955 [2024-04-15 18:01:06.641338] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.955 [2024-04-15 18:01:06.641409] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.955 [2024-04-15 18:01:06.641474] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.955 [2024-04-15 18:01:06.641536] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.955 [2024-04-15 18:01:06.641595] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.955 [2024-04-15 18:01:06.641660] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.955 [2024-04-15 18:01:06.641722] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.955 [2024-04-15 18:01:06.641778] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.955 [2024-04-15 18:01:06.641829] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.955 [2024-04-15 18:01:06.641880] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.955 [2024-04-15 18:01:06.641936] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.955 
[2024-04-15 18:01:06.641995] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.955 [2024-04-15 18:01:06.642074] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.955 [2024-04-15 18:01:06.642137] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.955 [2024-04-15 18:01:06.642194] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.955 [2024-04-15 18:01:06.642254] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.955 [2024-04-15 18:01:06.642308] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.955 [2024-04-15 18:01:06.642375] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.955 [2024-04-15 18:01:06.642429] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.955 [2024-04-15 18:01:06.642485] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.955 [2024-04-15 18:01:06.642543] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.955 [2024-04-15 18:01:06.642599] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.955 [2024-04-15 18:01:06.642655] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.955 [2024-04-15 18:01:06.642845] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.955 [2024-04-15 18:01:06.642905] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.955 [2024-04-15 18:01:06.642958] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.955 [2024-04-15 18:01:06.643012] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.955 [2024-04-15 18:01:06.643090] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.955 [2024-04-15 18:01:06.643159] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.955 [2024-04-15 18:01:06.643217] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.955 [2024-04-15 18:01:06.643280] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.955 [2024-04-15 18:01:06.643338] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.955 [2024-04-15 18:01:06.643430] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.955 [2024-04-15 18:01:06.643488] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.955 [2024-04-15 18:01:06.643548] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.955 [2024-04-15 18:01:06.643608] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.955 [2024-04-15 18:01:06.643662] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:17.955 [2024-04-15 18:01:06.643717] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.955 [2024-04-15 18:01:06.643774] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.955 [2024-04-15 18:01:06.643830] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.955 [2024-04-15 18:01:06.643885] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.955 [2024-04-15 18:01:06.643942] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.955 [2024-04-15 18:01:06.644001] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.955 [2024-04-15 18:01:06.644083] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.955 [2024-04-15 18:01:06.644142] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.955 [2024-04-15 18:01:06.644204] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.955 [2024-04-15 18:01:06.644262] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.955 [2024-04-15 18:01:06.644327] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.955 [2024-04-15 18:01:06.644406] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.955 [2024-04-15 18:01:06.644462] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.956 [2024-04-15 18:01:06.644520] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.956 [2024-04-15 18:01:06.644579] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.956 [2024-04-15 18:01:06.644636] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.956 [2024-04-15 18:01:06.644692] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.956 [2024-04-15 18:01:06.644758] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.956 [2024-04-15 18:01:06.644821] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.956 [2024-04-15 18:01:06.644884] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.956 [2024-04-15 18:01:06.645394] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.956 [2024-04-15 18:01:06.645462] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.956 [2024-04-15 18:01:06.645513] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.956 [2024-04-15 18:01:06.645569] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.956 [2024-04-15 18:01:06.645628] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.956 [2024-04-15 18:01:06.645685] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.956 [2024-04-15 18:01:06.645739] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.956 [2024-04-15 18:01:06.645798] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.956 [2024-04-15 18:01:06.645856] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.956 [2024-04-15 18:01:06.645912] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.956 [2024-04-15 18:01:06.645966] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.956 [2024-04-15 18:01:06.646018] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.956 [2024-04-15 18:01:06.646099] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.956 [2024-04-15 18:01:06.646160] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.956 [2024-04-15 18:01:06.646218] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.956 [2024-04-15 18:01:06.646279] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.956 [2024-04-15 18:01:06.646335] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.956 [2024-04-15 18:01:06.646408] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.956 [2024-04-15 18:01:06.646482] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.956 [2024-04-15 18:01:06.646544] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.956 [2024-04-15 18:01:06.646602] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.956 [2024-04-15 18:01:06.646662] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.956 [2024-04-15 18:01:06.646725] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.956 [2024-04-15 18:01:06.646802] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.956 [2024-04-15 18:01:06.646859] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.956 [2024-04-15 18:01:06.646915] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.956 [2024-04-15 18:01:06.646978] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.956 [2024-04-15 18:01:06.647055] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.956 [2024-04-15 18:01:06.647144] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.956 [2024-04-15 18:01:06.647208] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.956 [2024-04-15 18:01:06.647268] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.956 
[2024-04-15 18:01:06.647331] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.956 [2024-04-15 18:01:06.647412] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.956 [2024-04-15 18:01:06.647490] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.956 [2024-04-15 18:01:06.647555] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.956 [2024-04-15 18:01:06.647614] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.956 [2024-04-15 18:01:06.647679] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.956 [2024-04-15 18:01:06.647741] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.956 [2024-04-15 18:01:06.647805] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.956 [2024-04-15 18:01:06.647860] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.956 [2024-04-15 18:01:06.647916] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.956 [2024-04-15 18:01:06.647964] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.956 [2024-04-15 18:01:06.648024] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.956 [2024-04-15 18:01:06.648103] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.956 [2024-04-15 18:01:06.648169] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.956 [2024-04-15 18:01:06.648228] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.956 [2024-04-15 18:01:06.648288] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.956 [2024-04-15 18:01:06.648347] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.956 [2024-04-15 18:01:06.648419] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.956 [2024-04-15 18:01:06.648472] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.956 [2024-04-15 18:01:06.648525] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.956 [2024-04-15 18:01:06.648578] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.956 [2024-04-15 18:01:06.648633] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.956 [2024-04-15 18:01:06.648691] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.956 [2024-04-15 18:01:06.648748] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.956 [2024-04-15 18:01:06.648803] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.956 [2024-04-15 18:01:06.648867] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:17.956 [2024-04-15 18:01:06.648926] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.956 [2024-04-15 18:01:06.648986] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.956 [2024-04-15 18:01:06.649045] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.956 [2024-04-15 18:01:06.649126] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.956 [2024-04-15 18:01:06.649184] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.956 [2024-04-15 18:01:06.649252] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.956 [2024-04-15 18:01:06.649320] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.956 [2024-04-15 18:01:06.649546] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.956 [2024-04-15 18:01:06.649609] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.956 [2024-04-15 18:01:06.649668] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.956 [2024-04-15 18:01:06.649724] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.956 [2024-04-15 18:01:06.649785] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.956 [2024-04-15 18:01:06.649849] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.956 [2024-04-15 18:01:06.649906] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.956 [2024-04-15 18:01:06.649968] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.956 [2024-04-15 18:01:06.650028] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.956 [2024-04-15 18:01:06.650111] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.956 [2024-04-15 18:01:06.650171] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.956 [2024-04-15 18:01:06.650229] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.956 [2024-04-15 18:01:06.650287] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.956 [2024-04-15 18:01:06.650344] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.956 [2024-04-15 18:01:06.650415] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.956 [2024-04-15 18:01:06.650474] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.956 [2024-04-15 18:01:06.650532] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.956 [2024-04-15 18:01:06.650592] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.956 [2024-04-15 18:01:06.650650] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.956 [2024-04-15 18:01:06.650708] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.956 [2024-04-15 18:01:06.650763] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.956 [2024-04-15 18:01:06.650819] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.956 [2024-04-15 18:01:06.650874] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.956 [2024-04-15 18:01:06.650931] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.956 [2024-04-15 18:01:06.650989] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.957 [2024-04-15 18:01:06.651081] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.957 [2024-04-15 18:01:06.651150] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.957 [2024-04-15 18:01:06.651214] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.957 [2024-04-15 18:01:06.651271] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.957 [2024-04-15 18:01:06.652005] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.957 [2024-04-15 18:01:06.652097] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.957 [2024-04-15 18:01:06.652161] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.957 [2024-04-15 18:01:06.652225] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.957 [2024-04-15 18:01:06.652278] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.957 [2024-04-15 18:01:06.652350] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.957 [2024-04-15 18:01:06.652415] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.957 [2024-04-15 18:01:06.652471] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.957 [2024-04-15 18:01:06.652526] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.957 [2024-04-15 18:01:06.652581] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.957 [2024-04-15 18:01:06.652636] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.957 [2024-04-15 18:01:06.652699] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.957 [2024-04-15 18:01:06.652755] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.957 [2024-04-15 18:01:06.652816] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.957 [2024-04-15 18:01:06.652879] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.957 
[2024-04-15 18:01:06.652942] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.957 [2024-04-15 18:01:06.653010] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.957 [2024-04-15 18:01:06.653098] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.957 [2024-04-15 18:01:06.653160] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.957 [2024-04-15 18:01:06.653226] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.957 [2024-04-15 18:01:06.653285] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.957 [2024-04-15 18:01:06.653358] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.957 [2024-04-15 18:01:06.653436] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.957 [2024-04-15 18:01:06.653497] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.957 [2024-04-15 18:01:06.653562] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.957 [2024-04-15 18:01:06.653622] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.957 [2024-04-15 18:01:06.653684] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.957 [2024-04-15 18:01:06.653756] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.957 [2024-04-15 18:01:06.653816] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.957 [2024-04-15 18:01:06.653879] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.957 [2024-04-15 18:01:06.653935] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.957 [2024-04-15 18:01:06.653994] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.957 [2024-04-15 18:01:06.654050] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.957 [2024-04-15 18:01:06.654138] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.957 [2024-04-15 18:01:06.654199] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.957 [2024-04-15 18:01:06.654260] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.957 [2024-04-15 18:01:06.654322] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.957 [2024-04-15 18:01:06.654401] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.957 [2024-04-15 18:01:06.654464] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.957 [2024-04-15 18:01:06.654521] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.957 [2024-04-15 18:01:06.654579] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:17.957 [2024-04-15 18:01:06.654636] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[the identical *ERROR* line from ctrlr_bdev.c:298 repeats several hundred times; only the microsecond timestamp advances, 18:01:06.654636 through 18:01:06.691188, log clock 00:14:17.957 to 00:14:17.962]
[2024-04-15 18:01:06.691248] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.962 [2024-04-15 18:01:06.691303] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.962 [2024-04-15 18:01:06.691361] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.962 [2024-04-15 18:01:06.691436] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.962 [2024-04-15 18:01:06.691499] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.962 [2024-04-15 18:01:06.691562] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.962 [2024-04-15 18:01:06.691624] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.962 [2024-04-15 18:01:06.691685] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.962 [2024-04-15 18:01:06.691747] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.962 [2024-04-15 18:01:06.691808] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.962 [2024-04-15 18:01:06.691863] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.962 [2024-04-15 18:01:06.691924] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.962 [2024-04-15 18:01:06.691981] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.962 [2024-04-15 18:01:06.692036] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.962 [2024-04-15 18:01:06.692121] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.962 [2024-04-15 18:01:06.692183] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.962 [2024-04-15 18:01:06.692244] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.962 [2024-04-15 18:01:06.692305] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.962 [2024-04-15 18:01:06.692381] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.962 [2024-04-15 18:01:06.692608] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.962 [2024-04-15 18:01:06.692674] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.962 [2024-04-15 18:01:06.692735] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.962 [2024-04-15 18:01:06.692798] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.962 [2024-04-15 18:01:06.693231] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.962 [2024-04-15 18:01:06.693302] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.962 [2024-04-15 18:01:06.693379] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:17.962 [2024-04-15 18:01:06.693441] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.962 [2024-04-15 18:01:06.693505] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.962 [2024-04-15 18:01:06.693578] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.962 [2024-04-15 18:01:06.693638] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.962 [2024-04-15 18:01:06.693695] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.962 [2024-04-15 18:01:06.693754] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.962 [2024-04-15 18:01:06.693818] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.962 [2024-04-15 18:01:06.693879] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.962 [2024-04-15 18:01:06.693941] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.962 [2024-04-15 18:01:06.693999] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.962 [2024-04-15 18:01:06.694074] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.962 [2024-04-15 18:01:06.694133] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.962 [2024-04-15 18:01:06.694196] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.962 [2024-04-15 18:01:06.694260] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.962 [2024-04-15 18:01:06.694328] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.962 [2024-04-15 18:01:06.694407] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.962 [2024-04-15 18:01:06.694462] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.962 [2024-04-15 18:01:06.694524] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.962 [2024-04-15 18:01:06.694582] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.962 [2024-04-15 18:01:06.694651] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.962 [2024-04-15 18:01:06.694704] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.962 [2024-04-15 18:01:06.694762] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.962 [2024-04-15 18:01:06.694821] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.962 [2024-04-15 18:01:06.694892] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.962 [2024-04-15 18:01:06.694949] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.962 [2024-04-15 18:01:06.695006] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.962 [2024-04-15 18:01:06.695083] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.962 [2024-04-15 18:01:06.695150] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.962 [2024-04-15 18:01:06.695213] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.962 [2024-04-15 18:01:06.695270] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.962 [2024-04-15 18:01:06.695334] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.962 [2024-04-15 18:01:06.695409] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.962 [2024-04-15 18:01:06.695484] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.962 [2024-04-15 18:01:06.695543] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.962 [2024-04-15 18:01:06.695607] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.962 [2024-04-15 18:01:06.695666] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.962 [2024-04-15 18:01:06.695722] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.962 [2024-04-15 18:01:06.695797] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.962 [2024-04-15 18:01:06.695854] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.962 [2024-04-15 18:01:06.695914] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.962 [2024-04-15 18:01:06.695972] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.962 [2024-04-15 18:01:06.696032] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.962 [2024-04-15 18:01:06.696119] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.962 [2024-04-15 18:01:06.696181] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.962 [2024-04-15 18:01:06.696249] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.962 [2024-04-15 18:01:06.696318] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.962 [2024-04-15 18:01:06.696398] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.962 [2024-04-15 18:01:06.696472] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.962 [2024-04-15 18:01:06.696532] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.962 [2024-04-15 18:01:06.696609] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.962 [2024-04-15 18:01:06.696668] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:17.962 
00:14:18.895 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:14:18.895 18:01:07 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:14:19.154 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:14:19.154 18:01:08 -- target/ns_hotplug_stress.sh@40 -- # null_size=1020
00:14:19.154 18:01:08 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020
00:14:19.719 true
00:14:19.719 18:01:08 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3272708
00:14:19.719 18:01:08 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:20.285 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:14:20.285 18:01:09 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:14:20.543 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:14:20.543 18:01:09 -- target/ns_hotplug_stress.sh@40 -- # null_size=1021
00:14:20.543 18:01:09 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021
00:14:20.801 true
00:14:20.801 18:01:09 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3272708
00:14:20.801 18:01:09 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
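The entries above are one iteration of the hotplug stress: reads against NSID 1 fail and are rate-suppressed while the namespace is detached, then the namespace is re-attached and the sibling null bdev is grown by one megabyte before the next round. Boiled down to its RPCs, the cycle looks roughly like the sketch below; the rpc.py path, NQN and bdev names are the ones in this log, while the loop framing and the $PERF_PID variable are illustrative:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1
    null_size=1020
    while kill -0 "$PERF_PID" 2>/dev/null; do        # loop while the I/O generator is alive
        "$RPC" nvmf_subsystem_remove_ns "$NQN" 1     # hot-remove NSID 1 under load
        "$RPC" nvmf_subsystem_add_ns "$NQN" Delay0   # re-attach the delay bdev as a namespace
        "$RPC" bdev_null_resize NULL1 "$null_size"   # grow the null bdev, 1 MiB per round
        null_size=$((null_size + 1))
    done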
00:14:21.734 Initializing NVMe Controllers
00:14:21.734 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:14:21.734 Controller IO queue size 128, less than required.
00:14:21.734 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:14:21.734 Controller IO queue size 128, less than required.
00:14:21.734 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:14:21.734 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:14:21.734 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:14:21.734 Initialization complete. Launching workers.
00:14:21.734 ========================================================
00:14:21.734                                                                    Latency(us)
00:14:21.734 Device Information                                                       :      IOPS     MiB/s    Average        min        max
00:14:21.734 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:   4260.74      2.08   20996.24    2532.42 1099465.85
00:14:21.734 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:  13190.56      6.44    9704.28    2857.42  447230.16
00:14:21.734 ========================================================
00:14:21.734 Total                                                                    :  17451.30      8.52   12461.21    2532.42 1099465.85
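A note on the summary just above: the Total line's average latency is the IOPS-weighted mean of the two namespaces, not a plain average of their Average columns, which a one-liner confirms (values copied from the table):

    awk 'BEGIN {
        i1 = 4260.74;  a1 = 20996.24    # NSID 1: IOPS, average latency (us)
        i2 = 13190.56; a2 = 9704.28     # NSID 2
        printf "%.2f\n", (i1*a1 + i2*a2) / (i1 + i2)
    }'
    # prints ~12461.2, matching the Total row to within rounding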
00:14:21.734 18:01:10 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:14:21.992 18:01:10 -- target/ns_hotplug_stress.sh@40 -- # null_size=1022
00:14:21.992 18:01:10 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022
00:14:22.561 true
00:14:22.561 18:01:11 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3272708
00:14:22.561 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 35: kill: (3272708) - No such process
00:14:22.561 18:01:11 -- target/ns_hotplug_stress.sh@44 -- # wait 3272708
00:14:22.561 18:01:11 -- target/ns_hotplug_stress.sh@46 -- # trap - SIGINT SIGTERM EXIT
00:14:22.561 18:01:11 -- target/ns_hotplug_stress.sh@48 -- # nvmftestfini
00:14:22.561 18:01:11 -- nvmf/common.sh@477 -- # nvmfcleanup
00:14:22.561 18:01:11 -- nvmf/common.sh@117 -- # sync
00:14:22.561 18:01:11 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:14:22.561 18:01:11 -- nvmf/common.sh@120 -- # set +e
00:14:22.561 18:01:11 -- nvmf/common.sh@121 -- # for i in {1..20}
00:14:22.561 18:01:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:14:22.561 rmmod nvme_tcp
00:14:22.561 rmmod nvme_fabrics
00:14:22.561 rmmod nvme_keyring
00:14:22.561 18:01:11 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:14:22.561 18:01:11 -- nvmf/common.sh@124 -- # set -e
00:14:22.561 18:01:11 -- nvmf/common.sh@125 -- # return 0
00:14:22.561 18:01:11 -- nvmf/common.sh@478 -- # '[' -n 3272279 ']'
00:14:22.561 18:01:11 -- nvmf/common.sh@479 -- # killprocess 3272279
00:14:22.561 18:01:11 -- common/autotest_common.sh@936 -- # '[' -z 3272279 ']'
00:14:22.561 18:01:11 -- common/autotest_common.sh@940 -- # kill -0 3272279
00:14:22.561 18:01:11 -- common/autotest_common.sh@941 -- # uname
00:14:22.561 18:01:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:14:22.561 18:01:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3272279
00:14:22.561 18:01:11 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:14:22.561 18:01:11 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:14:22.561 18:01:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3272279'
00:14:22.561 killing process with pid 3272279
00:14:22.561 18:01:11 -- common/autotest_common.sh@955 -- # kill 3272279
00:14:22.561 18:01:11 -- common/autotest_common.sh@960 -- # wait 3272279
00:14:22.819 18:01:11 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:14:22.819 18:01:11 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
00:14:22.819 18:01:11 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
00:14:22.819 18:01:11 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:14:22.819 18:01:11 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:14:22.819 18:01:11 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:14:22.819 18:01:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:14:22.819 18:01:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:14:25.352 18:01:13 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:14:25.352
00:14:25.352 real	0m41.272s
00:14:25.352 user	2m40.267s
00:14:25.352 sys	0m11.771s
00:14:25.352 18:01:13 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:14:25.352 18:01:13 -- common/autotest_common.sh@10 -- # set +x
00:14:25.352 ************************************
00:14:25.352 END TEST nvmf_ns_hotplug_stress
00:14:25.352 ************************************
00:14:25.352 18:01:13 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:14:25.352 18:01:13 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:14:25.352 18:01:13 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:14:25.352 18:01:13 -- common/autotest_common.sh@10 -- # set +x
00:14:25.352 ************************************
00:14:25.352 START TEST nvmf_connect_stress
00:14:25.352 ************************************
00:14:25.352 18:01:13 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:14:25.352 * Looking for test storage...
00:14:25.352 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:14:25.352 18:01:13 -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:14:25.352 18:01:13 -- nvmf/common.sh@7 -- # uname -s
00:14:25.352 18:01:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:14:25.352 18:01:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:14:25.352 18:01:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:14:25.352 18:01:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:14:25.352 18:01:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:14:25.352 18:01:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:14:25.352 18:01:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:14:25.352 18:01:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:14:25.352 18:01:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:14:25.352 18:01:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:14:25.352 18:01:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
00:14:25.352 18:01:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02
00:14:25.352 18:01:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:14:25.352 18:01:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:14:25.352 18:01:13 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:14:25.352 18:01:13 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:14:25.352 18:01:13 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:14:25.352 18:01:13 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]]
00:14:25.352 18:01:13 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:14:25.352 18:01:13 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:14:25.352 18:01:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:25.352 18:01:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:25.352 18:01:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:25.352 18:01:13 -- paths/export.sh@5 -- # export PATH
00:14:25.352 18:01:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:25.352 18:01:13 -- nvmf/common.sh@47 -- # : 0
00:14:25.352 18:01:13 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:14:25.352 18:01:13 -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:14:25.352 18:01:13 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:14:25.352 18:01:13 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:14:25.352 18:01:13 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:14:25.352 18:01:13 -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:14:25.352 18:01:13 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:14:25.352 18:01:13 -- nvmf/common.sh@51 -- # have_pci_nics=0
00:14:25.352 18:01:13 -- target/connect_stress.sh@12 -- # nvmftestinit
00:14:25.352 18:01:13 -- nvmf/common.sh@430 -- # '[' -z tcp ']'
00:14:25.352 18:01:13 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:14:25.353 18:01:13 -- nvmf/common.sh@437 -- # prepare_net_devs
00:14:25.353 18:01:13 -- nvmf/common.sh@399 -- # local -g is_hw=no
00:14:25.353 18:01:13 -- nvmf/common.sh@401 -- # remove_spdk_ns
00:14:25.353 18:01:13 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:14:25.353 18:01:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:14:25.353 18:01:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:14:25.353 18:01:13 -- nvmf/common.sh@403 -- # [[ phy != virt ]]
00:14:25.353 18:01:13 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs
00:14:25.353 18:01:13 -- nvmf/common.sh@285 -- # xtrace_disable
00:14:25.353 18:01:13 -- common/autotest_common.sh@10 -- # set +x
00:14:27.881 18:01:16 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci
00:14:27.881 18:01:16 -- nvmf/common.sh@291 -- # pci_devs=()
00:14:27.881 18:01:16 -- nvmf/common.sh@291 -- # local -a pci_devs
00:14:27.881 18:01:16 -- nvmf/common.sh@292 -- # pci_net_devs=()
00:14:27.881 18:01:16 -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:14:27.881 18:01:16 -- nvmf/common.sh@293 -- # pci_drivers=()
00:14:27.881 18:01:16 -- nvmf/common.sh@293 -- # local -A pci_drivers
00:14:27.881 18:01:16 -- nvmf/common.sh@295 -- # net_devs=()
00:14:27.881 18:01:16 -- nvmf/common.sh@295 -- # local -ga net_devs
00:14:27.881 18:01:16 -- nvmf/common.sh@296 -- # e810=()
00:14:27.881 18:01:16 -- nvmf/common.sh@296 -- # local -ga e810
00:14:27.881 18:01:16 -- nvmf/common.sh@297 -- # x722=()
00:14:27.881 18:01:16 -- nvmf/common.sh@297 -- # local -ga x722
00:14:27.881 18:01:16 -- nvmf/common.sh@298 -- # mlx=()
00:14:27.881 18:01:16 -- nvmf/common.sh@298 -- # local -ga mlx
00:14:27.881 18:01:16 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:14:27.881 18:01:16 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:14:27.881 18:01:16 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:14:27.881 18:01:16 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:14:27.881 18:01:16 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:14:27.881 18:01:16 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:14:27.881 18:01:16 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:14:27.881 18:01:16 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:14:27.881 18:01:16 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:14:27.881 18:01:16 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:14:27.881 18:01:16 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:14:27.881 18:01:16 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:14:27.881 18:01:16 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:14:27.881 18:01:16 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:14:27.881 18:01:16 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:14:27.881 18:01:16 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:14:27.881 18:01:16 -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:14:27.881 18:01:16 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:14:27.881 18:01:16 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)'
00:14:27.881 Found 0000:84:00.0 (0x8086 - 0x159b)
00:14:27.881 18:01:16 -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:14:27.881 18:01:16 -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:14:27.881 18:01:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:14:27.881 18:01:16 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:14:27.881 18:01:16 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:14:27.881 18:01:16 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:14:27.881 18:01:16 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)'
00:14:27.881 Found 0000:84:00.1 (0x8086 - 0x159b)
00:14:27.881 18:01:16 -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:14:27.881 18:01:16 -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:14:27.881 18:01:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:14:27.881 18:01:16 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:14:27.881 18:01:16 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:14:27.881 18:01:16 -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:14:27.881 18:01:16 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:14:27.881 18:01:16 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:14:27.881 18:01:16 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:14:27.881 18:01:16 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:14:27.881 18:01:16 -- nvmf/common.sh@384 -- # (( 1 == 0 ))
00:14:27.881 18:01:16 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:14:27.881 18:01:16 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0'
00:14:27.881 Found net devices under 0000:84:00.0: cvl_0_0
00:14:27.881 18:01:16 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}")
00:14:27.881 18:01:16 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:14:27.881 18:01:16 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:14:27.881 18:01:16 -- nvmf/common.sh@384 -- # (( 1 == 0 ))
00:14:27.881 18:01:16 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:14:27.881 18:01:16 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1'
00:14:27.881 Found net devices under 0000:84:00.1: cvl_0_1
00:14:27.881 18:01:16 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}")
00:14:27.881 18:01:16 -- nvmf/common.sh@393 -- # (( 2 == 0 ))
00:14:27.882 18:01:16 -- nvmf/common.sh@403 -- # is_hw=yes
00:14:27.882 18:01:16 -- nvmf/common.sh@405 -- # [[ yes == yes ]]
00:14:27.882 18:01:16 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]]
00:14:27.882 18:01:16 -- nvmf/common.sh@407 -- # nvmf_tcp_init
00:14:27.882 18:01:16 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:14:27.882 18:01:16 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:14:27.882 18:01:16 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:14:27.882 18:01:16 -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:14:27.882 18:01:16 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:14:27.882 18:01:16 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:14:27.882 18:01:16 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:14:27.882 18:01:16 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:14:27.882 18:01:16 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:14:27.882 18:01:16 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:14:27.882 18:01:16 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:14:27.882 18:01:16 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:14:27.882 18:01:16 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:14:27.882 18:01:16 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:14:27.882 18:01:16 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:14:27.882 18:01:16 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:14:27.882 18:01:16 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:14:27.882 18:01:16 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:14:27.882 18:01:16 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:14:27.882 18:01:16 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:14:27.882 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:14:27.882 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.148 ms
00:14:27.882
00:14:27.882 --- 10.0.0.2 ping statistics ---
00:14:27.882 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:14:27.882 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms
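Everything from @242 to the ping above is the harness isolating the two ports of one NIC: the target-side port is moved into its own network namespace, so initiator and target traffic really crosses the wire even though both ends run on one host. Collapsed to just the commands that matter (interface names and addresses exactly as logged; run as root):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                       # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator port stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # let NVMe/TCP traffic back in
    ping -c 1 10.0.0.2                                              # root namespace -> target namespace

The reverse ping that follows confirms the path works in both directions.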
00:14:27.882 18:01:16 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:14:27.882 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:14:27.882 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms
00:14:27.882
00:14:27.882 --- 10.0.0.1 ping statistics ---
00:14:27.882 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:14:27.882 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms
00:14:27.882 18:01:16 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:14:27.882 18:01:16 -- nvmf/common.sh@411 -- # return 0
00:14:27.882 18:01:16 -- nvmf/common.sh@439 -- # '[' '' == iso ']'
00:14:27.882 18:01:16 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:14:27.882 18:01:16 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]]
00:14:27.882 18:01:16 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]]
00:14:27.882 18:01:16 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:14:27.882 18:01:16 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']'
00:14:27.882 18:01:16 -- nvmf/common.sh@463 -- # modprobe nvme-tcp
00:14:27.882 18:01:16 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE
00:14:27.882 18:01:16 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:14:27.882 18:01:16 -- common/autotest_common.sh@710 -- # xtrace_disable
00:14:27.882 18:01:16 -- common/autotest_common.sh@10 -- # set +x
00:14:27.882 18:01:16 -- nvmf/common.sh@470 -- # nvmfpid=3278528
00:14:27.882 18:01:16 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:14:27.882 18:01:16 -- nvmf/common.sh@471 -- # waitforlisten 3278528
00:14:27.882 18:01:16 -- common/autotest_common.sh@817 -- # '[' -z 3278528 ']'
00:14:27.882 18:01:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:14:27.882 18:01:16 -- common/autotest_common.sh@822 -- # local max_retries=100
00:14:27.882 18:01:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:14:27.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:14:27.882 18:01:16 -- common/autotest_common.sh@826 -- # xtrace_disable
00:14:27.882 18:01:16 -- common/autotest_common.sh@10 -- # set +x
00:14:27.882 [2024-04-15 18:01:16.571911] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization...
00:14:27.882 [2024-04-15 18:01:16.572015] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:14:27.882 EAL: No free 2048 kB hugepages reported on node 1
00:14:27.882 [2024-04-15 18:01:16.656201] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3
00:14:27.882 [2024-04-15 18:01:16.748368] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:14:27.882 [2024-04-15 18:01:16.748444] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:14:27.882 [2024-04-15 18:01:16.748462] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:14:27.882 [2024-04-15 18:01:16.748476] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running.
00:14:27.882 [2024-04-15 18:01:16.748489] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
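One decoding note on the launch line above: -m 0xE is a hexadecimal core mask, and its set bits (binary 1110) pin the SPDK reactors to cores 1, 2 and 3 while leaving core 0 free, which is exactly the trio of "Reactor started" notices that follows. The mask is easy to sanity-check with shell arithmetic:

    printf '0x%X\n' $(( (1 << 1) | (1 << 2) | (1 << 3) ))   # prints 0xE: cores 1-3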
00:14:27.882 [2024-04-15 18:01:16.748578] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:14:27.882 [2024-04-15 18:01:16.748636] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:14:27.882 [2024-04-15 18:01:16.748640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:14:28.140 18:01:16 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:14:28.140 18:01:16 -- common/autotest_common.sh@850 -- # return 0
00:14:28.140 18:01:16 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt
00:14:28.140 18:01:16 -- common/autotest_common.sh@716 -- # xtrace_disable
00:14:28.140 18:01:16 -- common/autotest_common.sh@10 -- # set +x
00:14:28.140 18:01:16 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:14:28.140 18:01:16 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:14:28.140 18:01:16 -- common/autotest_common.sh@549 -- # xtrace_disable
00:14:28.140 18:01:16 -- common/autotest_common.sh@10 -- # set +x
00:14:28.140 [2024-04-15 18:01:16.899118] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:14:28.140 18:01:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:14:28.140 18:01:16 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:14:28.140 18:01:16 -- common/autotest_common.sh@549 -- # xtrace_disable
00:14:28.140 18:01:16 -- common/autotest_common.sh@10 -- # set +x
00:14:28.140 18:01:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:14:28.140 18:01:16 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:14:28.140 18:01:16 -- common/autotest_common.sh@549 -- # xtrace_disable
00:14:28.140 18:01:16 -- common/autotest_common.sh@10 -- # set +x
00:14:28.140 [2024-04-15 18:01:16.935258] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:14:28.140 18:01:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:14:28.140 18:01:16 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512
00:14:28.140 18:01:16 -- common/autotest_common.sh@549 -- # xtrace_disable
00:14:28.140 18:01:16 -- common/autotest_common.sh@10 -- # set +x
00:14:28.140 NULL1
00:14:28.140 18:01:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:14:28.140 18:01:16 -- target/connect_stress.sh@21 -- # PERF_PID=3278555
00:14:28.140 18:01:16 -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10
00:14:28.140 18:01:16 -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt
00:14:28.140 18:01:16 -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt
00:14:28.140 18:01:16 -- target/connect_stress.sh@27 -- # seq 1 20
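Stripped of the rpc_cmd/xtrace scaffolding, the target bring-up above is four RPCs plus the stress launcher. A minimal re-creation under the same names and addresses (rpc_cmd is the harness wrapper; calling rpc.py directly behaves the same, and the -o/-u transport flags are copied verbatim from the log):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1
    "$RPC" nvmf_create_transport -t tcp -o -u 8192        # TCP transport with the harness's tuning flags
    "$RPC" nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10
    "$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
    "$RPC" bdev_null_create NULL1 1000 512                # 1000 MiB null bdev, 512-byte blocks
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress \
        -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 &
    PERF_PID=$!                                           # the log's PERF_PID=3278555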
00:14:28.140 18:01:16 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:14:28.140 18:01:16 -- target/connect_stress.sh@28 -- # cat
00:14:28.140 EAL: No free 2048 kB hugepages reported on node 1
00:14:28.140 18:01:17 -- target/connect_stress.sh@34 -- # kill -0 3278555
00:14:28.140 18:01:17 -- target/connect_stress.sh@35 -- # rpc_cmd
00:14:28.140 18:01:17 -- common/autotest_common.sh@549 -- # xtrace_disable
00:14:28.140 18:01:17 -- common/autotest_common.sh@10 -- # set +x
00:14:28.398 18:01:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:14:28.398 18:01:17 -- target/connect_stress.sh@34 -- # kill -0 3278555
00:14:28.398 18:01:17 -- target/connect_stress.sh@35 -- # rpc_cmd
00:14:28.398 18:01:17 -- common/autotest_common.sh@549 -- # xtrace_disable
00:14:28.398 18:01:17 -- common/autotest_common.sh@10 -- # set +x
00:14:38.311 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:14:38.569 18:01:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:14:38.569 18:01:27 -- target/connect_stress.sh@34 -- # kill -0 3278555
00:14:38.569 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3278555) - No such process
00:14:38.569 18:01:27 -- target/connect_stress.sh@38 -- # wait 3278555
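The long probe loop that just ended is a liveness gate: kill -0 delivers no signal and only tests that the PID still exists, so once connect_stress exits after its ten seconds the probe fails with the "No such process" message above and the script falls through to wait for the exit status. The pattern in isolation (rpc_cmd here is a no-op stand-in for the harness helper that replays rpc.txt):

    rpc_cmd() { :; }                        # placeholder for the harness's RPC batch helper
    while kill -0 "$PERF_PID" 2>/dev/null   # true while the stressor PID exists
    do
        rpc_cmd                             # issue queued RPCs against the live target
    done
    wait "$PERF_PID"                        # then reap it and collect its status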
-f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:38.569 18:01:27 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:38.569 18:01:27 -- target/connect_stress.sh@43 -- # nvmftestfini 00:14:38.570 18:01:27 -- nvmf/common.sh@477 -- # nvmfcleanup 00:14:38.570 18:01:27 -- nvmf/common.sh@117 -- # sync 00:14:38.570 18:01:27 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:38.570 18:01:27 -- nvmf/common.sh@120 -- # set +e 00:14:38.570 18:01:27 -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:38.570 18:01:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:38.570 rmmod nvme_tcp 00:14:38.570 rmmod nvme_fabrics 00:14:38.570 rmmod nvme_keyring 00:14:38.570 18:01:27 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:38.570 18:01:27 -- nvmf/common.sh@124 -- # set -e 00:14:38.570 18:01:27 -- nvmf/common.sh@125 -- # return 0 00:14:38.570 18:01:27 -- nvmf/common.sh@478 -- # '[' -n 3278528 ']' 00:14:38.570 18:01:27 -- nvmf/common.sh@479 -- # killprocess 3278528 00:14:38.570 18:01:27 -- common/autotest_common.sh@936 -- # '[' -z 3278528 ']' 00:14:38.570 18:01:27 -- common/autotest_common.sh@940 -- # kill -0 3278528 00:14:38.570 18:01:27 -- common/autotest_common.sh@941 -- # uname 00:14:38.570 18:01:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:38.570 18:01:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3278528 00:14:38.570 18:01:27 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:38.570 18:01:27 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:38.570 18:01:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3278528' 00:14:38.570 killing process with pid 3278528 00:14:38.570 18:01:27 -- common/autotest_common.sh@955 -- # kill 3278528 00:14:38.570 18:01:27 -- common/autotest_common.sh@960 -- # wait 3278528 00:14:38.829 18:01:27 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:14:38.829 18:01:27 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:14:38.829 18:01:27 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:14:38.829 18:01:27 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:38.829 18:01:27 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:38.829 18:01:27 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:38.829 18:01:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:38.829 18:01:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:40.734 18:01:29 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:40.734 00:14:40.734 real 0m15.758s 00:14:40.734 user 0m38.019s 00:14:40.734 sys 0m6.813s 00:14:40.734 18:01:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:40.734 18:01:29 -- common/autotest_common.sh@10 -- # set +x 00:14:40.734 ************************************ 00:14:40.734 END TEST nvmf_connect_stress 00:14:40.734 ************************************ 00:14:40.992 18:01:29 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:40.992 18:01:29 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:40.992 18:01:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:40.992 18:01:29 -- common/autotest_common.sh@10 -- # set +x 00:14:40.992 ************************************ 00:14:40.992 START TEST nvmf_fused_ordering 00:14:40.992 ************************************ 00:14:40.992 18:01:29 -- common/autotest_common.sh@1111 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:40.992 * Looking for test storage... 00:14:40.992 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:40.992 18:01:29 -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:40.992 18:01:29 -- nvmf/common.sh@7 -- # uname -s 00:14:40.992 18:01:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:40.992 18:01:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:40.992 18:01:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:40.992 18:01:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:40.992 18:01:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:40.992 18:01:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:40.992 18:01:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:40.992 18:01:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:40.992 18:01:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:40.992 18:01:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:40.992 18:01:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:40.992 18:01:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:14:40.992 18:01:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:40.992 18:01:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:40.992 18:01:29 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:40.992 18:01:29 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:40.992 18:01:29 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:40.992 18:01:29 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:40.992 18:01:29 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:40.992 18:01:29 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:40.992 18:01:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:40.992 18:01:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:40.992 18:01:29 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:40.992 18:01:29 -- paths/export.sh@5 -- # export PATH 00:14:40.992 18:01:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:40.992 18:01:29 -- nvmf/common.sh@47 -- # : 0 00:14:40.992 18:01:29 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:40.992 18:01:29 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:40.992 18:01:29 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:40.992 18:01:29 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:40.992 18:01:29 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:40.992 18:01:29 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:40.992 18:01:29 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:40.992 18:01:29 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:40.992 18:01:29 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:14:40.992 18:01:29 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:14:40.992 18:01:29 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:40.992 18:01:29 -- nvmf/common.sh@437 -- # prepare_net_devs 00:14:40.992 18:01:29 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:14:40.992 18:01:29 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:14:40.992 18:01:29 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:40.992 18:01:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:40.992 18:01:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:40.992 18:01:29 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:14:40.992 18:01:29 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:14:40.992 18:01:29 -- nvmf/common.sh@285 -- # xtrace_disable 00:14:40.992 18:01:29 -- common/autotest_common.sh@10 -- # set +x 00:14:43.528 18:01:32 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:43.528 18:01:32 -- nvmf/common.sh@291 -- # pci_devs=() 00:14:43.528 18:01:32 -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:43.528 18:01:32 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:43.528 18:01:32 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:43.528 18:01:32 -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:43.528 18:01:32 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:43.528 18:01:32 -- nvmf/common.sh@295 -- # net_devs=() 00:14:43.528 18:01:32 -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:43.528 18:01:32 -- nvmf/common.sh@296 -- # e810=() 00:14:43.528 18:01:32 -- nvmf/common.sh@296 -- # local -ga e810 00:14:43.528 18:01:32 -- nvmf/common.sh@297 -- # x722=() 
00:14:43.528 18:01:32 -- nvmf/common.sh@297 -- # local -ga x722 00:14:43.528 18:01:32 -- nvmf/common.sh@298 -- # mlx=() 00:14:43.528 18:01:32 -- nvmf/common.sh@298 -- # local -ga mlx 00:14:43.528 18:01:32 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:43.528 18:01:32 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:43.528 18:01:32 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:43.528 18:01:32 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:43.528 18:01:32 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:43.528 18:01:32 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:43.528 18:01:32 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:43.528 18:01:32 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:43.528 18:01:32 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:43.528 18:01:32 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:43.528 18:01:32 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:43.528 18:01:32 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:43.528 18:01:32 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:43.528 18:01:32 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:43.528 18:01:32 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:43.528 18:01:32 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:43.528 18:01:32 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:43.528 18:01:32 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:43.528 18:01:32 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:14:43.528 Found 0000:84:00.0 (0x8086 - 0x159b) 00:14:43.528 18:01:32 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:43.528 18:01:32 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:43.528 18:01:32 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:43.528 18:01:32 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:43.528 18:01:32 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:43.528 18:01:32 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:43.528 18:01:32 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:14:43.528 Found 0000:84:00.1 (0x8086 - 0x159b) 00:14:43.528 18:01:32 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:43.528 18:01:32 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:43.528 18:01:32 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:43.528 18:01:32 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:43.528 18:01:32 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:43.528 18:01:32 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:43.528 18:01:32 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:43.528 18:01:32 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:43.528 18:01:32 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:43.528 18:01:32 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:43.528 18:01:32 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:43.528 18:01:32 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:43.528 18:01:32 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:14:43.528 Found net devices under 0000:84:00.0: cvl_0_0 00:14:43.528 18:01:32 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 
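For readers skimming the trace: the per-port block above (and repeated just below for 0000:84:00.1) matches NICs against a table of PCI vendor/device IDs and then resolves each matched function's kernel net device from sysfs. A minimal standalone sketch of that pattern, using only the paths and IDs visible in the trace; the real logic lives in test/nvmf/common.sh and this is illustrative, not the harness itself:

```bash
#!/usr/bin/env bash
# Sketch of the device-classification pattern traced above: match NICs
# by PCI vendor:device ID (Intel E810 variants here), then list the
# net devices sysfs exposes for each matched PCI function.
intel=0x8086
e810=(0x1592 0x159b)   # E810 device IDs, as in the trace
for pci in /sys/bus/pci/devices/*; do
  [[ $(cat "$pci/vendor") == "$intel" ]] || continue
  dev=$(cat "$pci/device")
  for id in "${e810[@]}"; do
    if [[ $dev == "$id" ]]; then
      echo "Found ${pci##*/} ($intel - $dev)"
      # A NIC bound to a kernel driver exposes its netdev under net/
      for net in "$pci"/net/*; do
        [[ -e $net ]] && echo "Found net devices under ${pci##*/}: ${net##*/}"
      done
    fi
  done
done
```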
00:14:43.528 18:01:32 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:43.528 18:01:32 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:43.528 18:01:32 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:43.528 18:01:32 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:43.528 18:01:32 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:14:43.528 Found net devices under 0000:84:00.1: cvl_0_1 00:14:43.528 18:01:32 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:43.528 18:01:32 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:14:43.528 18:01:32 -- nvmf/common.sh@403 -- # is_hw=yes 00:14:43.528 18:01:32 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:14:43.528 18:01:32 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:14:43.528 18:01:32 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:14:43.528 18:01:32 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:43.528 18:01:32 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:43.528 18:01:32 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:43.528 18:01:32 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:43.528 18:01:32 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:43.528 18:01:32 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:43.528 18:01:32 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:43.528 18:01:32 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:43.528 18:01:32 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:43.528 18:01:32 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:43.528 18:01:32 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:43.528 18:01:32 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:43.528 18:01:32 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:43.528 18:01:32 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:43.528 18:01:32 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:43.528 18:01:32 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:43.528 18:01:32 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:43.528 18:01:32 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:43.528 18:01:32 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:43.528 18:01:32 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:43.528 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:43.528 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.252 ms 00:14:43.528 00:14:43.528 --- 10.0.0.2 ping statistics --- 00:14:43.528 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:43.528 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:14:43.528 18:01:32 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:43.528 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:43.528 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:14:43.528 00:14:43.528 --- 10.0.0.1 ping statistics --- 00:14:43.528 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:43.528 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:14:43.529 18:01:32 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:43.529 18:01:32 -- nvmf/common.sh@411 -- # return 0 00:14:43.529 18:01:32 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:14:43.529 18:01:32 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:43.529 18:01:32 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:14:43.529 18:01:32 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:14:43.529 18:01:32 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:43.529 18:01:32 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:14:43.529 18:01:32 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:14:43.529 18:01:32 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:14:43.529 18:01:32 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:43.529 18:01:32 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:43.529 18:01:32 -- common/autotest_common.sh@10 -- # set +x 00:14:43.529 18:01:32 -- nvmf/common.sh@470 -- # nvmfpid=3281846 00:14:43.529 18:01:32 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:43.529 18:01:32 -- nvmf/common.sh@471 -- # waitforlisten 3281846 00:14:43.529 18:01:32 -- common/autotest_common.sh@817 -- # '[' -z 3281846 ']' 00:14:43.529 18:01:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:43.529 18:01:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:43.529 18:01:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:43.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:43.529 18:01:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:43.529 18:01:32 -- common/autotest_common.sh@10 -- # set +x 00:14:43.529 [2024-04-15 18:01:32.336176] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:14:43.529 [2024-04-15 18:01:32.336260] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:43.529 EAL: No free 2048 kB hugepages reported on node 1 00:14:43.529 [2024-04-15 18:01:32.414693] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:43.788 [2024-04-15 18:01:32.511691] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:43.788 [2024-04-15 18:01:32.511757] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:43.788 [2024-04-15 18:01:32.511773] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:43.788 [2024-04-15 18:01:32.511787] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:43.788 [2024-04-15 18:01:32.511800] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
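The namespace plumbing that produced the ping output above can be read straight out of the trace; collected in one place for reference (interface names cvl_0_0/cvl_0_1 as in the trace, commands verbatim from the xtrace, to be run as root on a matching two-port setup):

```bash
# NVMe/TCP test topology as set up in the trace: the target port
# (cvl_0_0) is isolated in its own network namespace, the initiator
# port (cvl_0_1) stays in the root namespace, and a ping in each
# direction proves the path before the target starts.
NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                         # root ns -> target ns
ip netns exec "$NS" ping -c 1 10.0.0.1     # target ns -> root ns
modprobe nvme-tcp
```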
00:14:43.788 [2024-04-15 18:01:32.511853] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:43.788 18:01:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:43.788 18:01:32 -- common/autotest_common.sh@850 -- # return 0 00:14:43.788 18:01:32 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:43.788 18:01:32 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:43.788 18:01:32 -- common/autotest_common.sh@10 -- # set +x 00:14:43.788 18:01:32 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:43.788 18:01:32 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:43.788 18:01:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:43.788 18:01:32 -- common/autotest_common.sh@10 -- # set +x 00:14:43.788 [2024-04-15 18:01:32.665294] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:43.788 18:01:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:43.788 18:01:32 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:43.788 18:01:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:43.788 18:01:32 -- common/autotest_common.sh@10 -- # set +x 00:14:43.788 18:01:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:43.788 18:01:32 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:43.788 18:01:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:43.788 18:01:32 -- common/autotest_common.sh@10 -- # set +x 00:14:43.788 [2024-04-15 18:01:32.681511] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:43.788 18:01:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:43.788 18:01:32 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:43.788 18:01:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:43.788 18:01:32 -- common/autotest_common.sh@10 -- # set +x 00:14:43.788 NULL1 00:14:43.788 18:01:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:43.788 18:01:32 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:14:43.788 18:01:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:43.788 18:01:32 -- common/autotest_common.sh@10 -- # set +x 00:14:43.788 18:01:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:43.788 18:01:32 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:43.788 18:01:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:43.788 18:01:32 -- common/autotest_common.sh@10 -- # set +x 00:14:43.788 18:01:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:43.788 18:01:32 -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:43.788 [2024-04-15 18:01:32.725776] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
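Before the fused_ordering binary launches, the trace above shows the target being configured through rpc_cmd. The same sequence written as explicit rpc.py invocations; the RPC names and arguments are taken from the trace, while the rpc.py path and the $RPC shorthand are assumptions for readability (rpc.py talks to the default /var/tmp/spdk.sock):

```bash
# Target-side setup for the fused-ordering test, as traced above:
# TCP transport with 8 KiB I/O unit, a subsystem capped at 10
# namespaces, a TCP listener on the namespaced address, and a
# 1000 MiB null bdev (512 B blocks, reported as a 1 GB namespace).
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_null_create NULL1 1000 512
$RPC bdev_wait_for_examine
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
```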
00:14:43.788 [2024-04-15 18:01:32.725825] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3281872 ] 00:14:44.046 EAL: No free 2048 kB hugepages reported on node 1 00:14:44.613 Attached to nqn.2016-06.io.spdk:cnode1 00:14:44.613 Namespace ID: 1 size: 1GB 00:14:44.613 fused_ordering(0) [progress counter elided: fused_ordering(1) through fused_ordering(1022) print consecutively as the fused command pairs complete, timestamps advancing through 00:14:45.185, 00:14:45.754, 00:14:46.320 and 00:14:47.258] 00:14:47.258 fused_ordering(1023) 00:14:47.258 18:01:36 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:14:47.258 18:01:36 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:14:47.258 18:01:36 -- nvmf/common.sh@477 -- # nvmfcleanup 00:14:47.258 18:01:36 -- nvmf/common.sh@117 -- # sync 00:14:47.258 18:01:36 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:47.258 18:01:36 -- nvmf/common.sh@120 -- # set +e 00:14:47.258 18:01:36 -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:47.258 18:01:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:47.258 rmmod nvme_tcp 00:14:47.258 rmmod nvme_fabrics 00:14:47.258 rmmod nvme_keyring 00:14:47.258 18:01:36 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:47.258 18:01:36 -- nvmf/common.sh@124 -- # set -e 00:14:47.258 18:01:36 -- nvmf/common.sh@125 -- # return 0 00:14:47.258 18:01:36 -- nvmf/common.sh@478 -- # '[' -n 3281846 ']' 00:14:47.258 18:01:36 -- nvmf/common.sh@479 -- # killprocess 3281846 00:14:47.259 18:01:36 -- common/autotest_common.sh@936 -- # '[' -z 3281846 ']' 00:14:47.259 18:01:36 -- common/autotest_common.sh@940 -- # kill -0 3281846 00:14:47.259 18:01:36 -- common/autotest_common.sh@941 -- # uname 00:14:47.259 18:01:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:47.259 18:01:36 -- common/autotest_common.sh@942 -- # ps --no-headers
-o comm= 3281846 00:14:47.259 18:01:36 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:47.259 18:01:36 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:47.259 18:01:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3281846' 00:14:47.259 killing process with pid 3281846 00:14:47.259 18:01:36 -- common/autotest_common.sh@955 -- # kill 3281846 00:14:47.259 18:01:36 -- common/autotest_common.sh@960 -- # wait 3281846 00:14:47.517 18:01:36 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:14:47.517 18:01:36 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:14:47.517 18:01:36 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:14:47.517 18:01:36 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:47.517 18:01:36 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:47.517 18:01:36 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:47.517 18:01:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:47.517 18:01:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:50.055 18:01:38 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:50.055 00:14:50.055 real 0m8.651s 00:14:50.055 user 0m5.868s 00:14:50.055 sys 0m4.486s 00:14:50.055 18:01:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:50.055 18:01:38 -- common/autotest_common.sh@10 -- # set +x 00:14:50.055 ************************************ 00:14:50.055 END TEST nvmf_fused_ordering 00:14:50.055 ************************************ 00:14:50.055 18:01:38 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:50.055 18:01:38 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:50.055 18:01:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:50.055 18:01:38 -- common/autotest_common.sh@10 -- # set +x 00:14:50.055 ************************************ 00:14:50.055 START TEST nvmf_delete_subsystem 00:14:50.055 ************************************ 00:14:50.055 18:01:38 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:50.055 * Looking for test storage... 
00:14:50.055 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:50.055 18:01:38 -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:50.055 18:01:38 -- nvmf/common.sh@7 -- # uname -s 00:14:50.055 18:01:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:50.055 18:01:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:50.055 18:01:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:50.055 18:01:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:50.055 18:01:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:50.055 18:01:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:50.055 18:01:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:50.055 18:01:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:50.055 18:01:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:50.055 18:01:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:50.055 18:01:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:50.055 18:01:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:14:50.055 18:01:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:50.055 18:01:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:50.055 18:01:38 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:50.055 18:01:38 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:50.055 18:01:38 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:50.055 18:01:38 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:50.055 18:01:38 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:50.055 18:01:38 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:50.055 18:01:38 -- paths/export.sh@2-4 -- # PATH=[the /opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin toolchain trio, prepended once per prior sourcing, ahead of /usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin; the same value is re-traced at export.sh@3 and @4] 00:14:50.055 18:01:38 -- paths/export.sh@5 -- # export PATH 00:14:50.055 18:01:38 -- paths/export.sh@6 -- # echo [same PATH value] 00:14:50.055 18:01:38 -- nvmf/common.sh@47 -- # : 0 00:14:50.055 18:01:38 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:50.055 18:01:38 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:50.055 18:01:38 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:50.055 18:01:38 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:50.055 18:01:38 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:50.055 18:01:38 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:50.055 18:01:38 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:50.055 18:01:38 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:50.055 18:01:38 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:14:50.055 18:01:38 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:14:50.055 18:01:38 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:50.055 18:01:38 -- nvmf/common.sh@437 -- # prepare_net_devs 00:14:50.055 18:01:38 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:14:50.055 18:01:38 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:14:50.055 18:01:38 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:50.055 18:01:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:50.055 18:01:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:50.055 18:01:38 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:14:50.055 18:01:38 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:14:50.055 18:01:38 -- nvmf/common.sh@285 -- # xtrace_disable 00:14:50.055 18:01:38 -- common/autotest_common.sh@10 -- # set +x 00:14:52.630 18:01:40 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:52.630 18:01:40 -- nvmf/common.sh@291 -- # pci_devs=() 00:14:52.630 18:01:40 -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:52.630 18:01:40 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:52.630 18:01:40 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:52.630 18:01:40 -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:52.630 18:01:40 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:52.630 18:01:40 -- nvmf/common.sh@295 -- # net_devs=() 00:14:52.630 18:01:40 -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:52.630 18:01:40 -- nvmf/common.sh@296 -- # e810=() 00:14:52.630 18:01:40 -- nvmf/common.sh@296 -- # local -ga e810 00:14:52.630 18:01:40 -- nvmf/common.sh@297 -- # x722=()
00:14:52.630 18:01:40 -- nvmf/common.sh@297 -- # local -ga x722 00:14:52.630 18:01:40 -- nvmf/common.sh@298 -- # mlx=() 00:14:52.630 18:01:40 -- nvmf/common.sh@298 -- # local -ga mlx 00:14:52.630 18:01:40 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:52.630 18:01:40 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:52.630 18:01:40 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:52.630 18:01:40 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:52.630 18:01:40 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:52.630 18:01:40 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:52.630 18:01:40 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:52.630 18:01:40 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:52.630 18:01:40 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:52.630 18:01:40 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:52.630 18:01:40 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:52.630 18:01:40 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:52.630 18:01:40 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:52.630 18:01:40 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:52.630 18:01:40 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:52.630 18:01:40 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:52.630 18:01:40 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:52.630 18:01:40 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:52.630 18:01:40 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:14:52.630 Found 0000:84:00.0 (0x8086 - 0x159b) 00:14:52.630 18:01:40 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:52.630 18:01:40 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:52.630 18:01:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:52.630 18:01:40 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:52.630 18:01:40 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:52.630 18:01:40 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:52.630 18:01:40 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:14:52.630 Found 0000:84:00.1 (0x8086 - 0x159b) 00:14:52.630 18:01:40 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:52.630 18:01:40 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:52.630 18:01:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:52.630 18:01:40 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:52.630 18:01:40 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:52.630 18:01:40 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:52.630 18:01:40 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:52.630 18:01:40 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:52.630 18:01:40 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:52.630 18:01:40 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:52.630 18:01:40 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:52.630 18:01:40 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:52.630 18:01:40 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:14:52.630 Found net devices under 0000:84:00.0: cvl_0_0 00:14:52.630 18:01:40 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 
00:14:52.631 18:01:40 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:52.631 18:01:40 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:52.631 18:01:40 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:52.631 18:01:40 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:52.631 18:01:40 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:14:52.631 Found net devices under 0000:84:00.1: cvl_0_1 00:14:52.631 18:01:40 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:52.631 18:01:40 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:14:52.631 18:01:40 -- nvmf/common.sh@403 -- # is_hw=yes 00:14:52.631 18:01:40 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:14:52.631 18:01:40 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:14:52.631 18:01:40 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:14:52.631 18:01:40 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:52.631 18:01:40 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:52.631 18:01:40 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:52.631 18:01:40 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:52.631 18:01:40 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:52.631 18:01:40 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:52.631 18:01:40 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:52.631 18:01:40 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:52.631 18:01:40 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:52.631 18:01:40 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:52.631 18:01:40 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:52.631 18:01:40 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:52.631 18:01:40 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:52.631 18:01:41 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:52.631 18:01:41 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:52.631 18:01:41 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:52.631 18:01:41 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:52.631 18:01:41 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:52.631 18:01:41 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:52.631 18:01:41 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:52.631 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:52.631 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.145 ms 00:14:52.631 00:14:52.631 --- 10.0.0.2 ping statistics --- 00:14:52.631 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:52.631 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:14:52.631 18:01:41 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:52.631 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:52.631 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:14:52.631 00:14:52.631 --- 10.0.0.1 ping statistics --- 00:14:52.631 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:52.631 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:14:52.631 18:01:41 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:52.631 18:01:41 -- nvmf/common.sh@411 -- # return 0 00:14:52.631 18:01:41 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:14:52.631 18:01:41 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:52.631 18:01:41 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:14:52.631 18:01:41 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:14:52.631 18:01:41 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:52.631 18:01:41 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:14:52.631 18:01:41 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:14:52.631 18:01:41 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:14:52.631 18:01:41 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:52.631 18:01:41 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:52.631 18:01:41 -- common/autotest_common.sh@10 -- # set +x 00:14:52.631 18:01:41 -- nvmf/common.sh@470 -- # nvmfpid=3284222 00:14:52.631 18:01:41 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:14:52.631 18:01:41 -- nvmf/common.sh@471 -- # waitforlisten 3284222 00:14:52.631 18:01:41 -- common/autotest_common.sh@817 -- # '[' -z 3284222 ']' 00:14:52.631 18:01:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:52.631 18:01:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:52.631 18:01:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:52.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:52.631 18:01:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:52.631 18:01:41 -- common/autotest_common.sh@10 -- # set +x 00:14:52.631 [2024-04-15 18:01:41.190087] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:14:52.631 [2024-04-15 18:01:41.190176] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:52.631 EAL: No free 2048 kB hugepages reported on node 1 00:14:52.631 [2024-04-15 18:01:41.268086] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:52.631 [2024-04-15 18:01:41.363437] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:52.631 [2024-04-15 18:01:41.363498] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:52.631 [2024-04-15 18:01:41.363515] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:52.631 [2024-04-15 18:01:41.363529] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:52.631 [2024-04-15 18:01:41.363541] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
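For reference, the nvmf_tcp_init sequence traced above boils down to the following standalone sketch: one port of the E810 pair (cvl_0_0) is moved into a private network namespace to act as the target at 10.0.0.2, while its sibling (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1. The interface names, addresses, and the port-4420 iptables rule are taken from this log; the script itself is a simplified reconstruction, not the verbatim nvmf/common.sh source.

    #!/usr/bin/env bash
    # Simplified reconstruction of nvmf_tcp_init as traced above
    # (assumption: error handling and device discovery are omitted).
    NS=cvl_0_0_ns_spdk
    TGT_IF=cvl_0_0   # becomes the target-side port inside the namespace
    INI_IF=cvl_0_1   # stays in the root namespace as the initiator port

    # Start from a clean slate on both ports.
    ip -4 addr flush "$TGT_IF"
    ip -4 addr flush "$INI_IF"

    # Move the target port into its own namespace and address both ends.
    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"
    ip addr add 10.0.0.1/24 dev "$INI_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"

    # Bring everything up, including loopback inside the namespace.
    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up

    # Allow NVMe/TCP (port 4420) in, then verify reachability both ways.
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec "$NS" ping -c 1 10.0.0.1

This is why every nvmf_tgt invocation below is prefixed with 'ip netns exec cvl_0_0_ns_spdk': the target listens inside the namespace while spdk_nvme_perf connects to it from the root namespace over the physical link.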
00:14:52.631 [2024-04-15 18:01:41.363631] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:52.631 [2024-04-15 18:01:41.363638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:52.631 18:01:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:52.631 18:01:41 -- common/autotest_common.sh@850 -- # return 0 00:14:52.631 18:01:41 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:52.631 18:01:41 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:52.631 18:01:41 -- common/autotest_common.sh@10 -- # set +x 00:14:52.631 18:01:41 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:52.631 18:01:41 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:52.631 18:01:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:52.631 18:01:41 -- common/autotest_common.sh@10 -- # set +x 00:14:52.631 [2024-04-15 18:01:41.517664] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:52.631 18:01:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:52.631 18:01:41 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:52.631 18:01:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:52.631 18:01:41 -- common/autotest_common.sh@10 -- # set +x 00:14:52.631 18:01:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:52.631 18:01:41 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:52.631 18:01:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:52.631 18:01:41 -- common/autotest_common.sh@10 -- # set +x 00:14:52.631 [2024-04-15 18:01:41.533890] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:52.631 18:01:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:52.631 18:01:41 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:52.631 18:01:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:52.631 18:01:41 -- common/autotest_common.sh@10 -- # set +x 00:14:52.631 NULL1 00:14:52.631 18:01:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:52.631 18:01:41 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:52.631 18:01:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:52.631 18:01:41 -- common/autotest_common.sh@10 -- # set +x 00:14:52.631 Delay0 00:14:52.631 18:01:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:52.631 18:01:41 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:52.631 18:01:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:52.631 18:01:41 -- common/autotest_common.sh@10 -- # set +x 00:14:52.631 18:01:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:52.631 18:01:41 -- target/delete_subsystem.sh@28 -- # perf_pid=3284363 00:14:52.631 18:01:41 -- target/delete_subsystem.sh@30 -- # sleep 2 00:14:52.631 18:01:41 -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:14:52.890 EAL: No free 2048 kB hugepages reported on node 1 00:14:52.890 [2024-04-15 18:01:41.638672] 
subsystem.c:1431:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:14:54.810 18:01:43 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:54.811 [00:14:54.811-00:14:56.004: the rpc_cmd xtrace plumbing plus several hundred per-I/O trace lines, each either 'Read completed with error (sct=0, sc=8)', 'Write completed with error (sct=0, sc=8)', or 'starting I/O failed: -6', emitted by the two spdk_nvme_perf cores as the subsystem was deleted underneath them; the distinct transport-level errors from that window follow] [2024-04-15 18:01:43.730250] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fb250000c00 is same with the state(5) to be set [2024-04-15 18:01:43.730989] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8a410 is same with the state(5) to be set [2024-04-15 18:01:44.695553] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d8e0 is same with the state(5) to be set [2024-04-15 18:01:44.731668] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8a020 is same with the state(5) to be set [2024-04-15 18:01:44.732908] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fb25000bf90 is same with the state(5) to be set [2024-04-15 18:01:44.733139] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8a6d0 is same with the state(5) to be set [2024-04-15 18:01:44.733290] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fb25000c690 is same with the state(5) to be set 00:14:56.004 18:01:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:56.004 18:01:44 -- target/delete_subsystem.sh@34 -- # delay=0 00:14:56.004 18:01:44 -- target/delete_subsystem.sh@35 -- # kill -0 3284363 [2024-04-15 18:01:44.734026] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a8d8e0 (9): Bad file descriptor 00:14:56.004 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:14:56.004 18:01:44 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:14:56.004 Initializing NVMe Controllers 00:14:56.004 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:56.004 Controller IO queue size 128, less than required. 00:14:56.004 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:56.004 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:14:56.004 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:14:56.004 Initialization complete. Launching workers. 00:14:56.004 ======================================================== 00:14:56.004 Latency(us) 00:14:56.004 Device Information : IOPS MiB/s Average min max 00:14:56.004 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 172.14 0.08 904942.24 483.12 2002855.08 00:14:56.004 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 159.74 0.08 937917.56 613.97 2002795.48 00:14:56.004 ======================================================== 00:14:56.004 Total : 331.87 0.16 920813.77 483.12 2002855.08 00:14:56.004 00:14:56.572 18:01:45 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:14:56.572 18:01:45 -- target/delete_subsystem.sh@35 -- # kill -0 3284363 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3284363) - No such process 00:14:56.572 18:01:45 -- target/delete_subsystem.sh@45 -- # NOT wait 3284363 00:14:56.572 18:01:45 -- common/autotest_common.sh@638 -- # local es=0 00:14:56.572 18:01:45 -- common/autotest_common.sh@640 -- # valid_exec_arg wait 3284363 00:14:56.572 18:01:45 -- common/autotest_common.sh@626 -- # local arg=wait 00:14:56.572 18:01:45 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:56.572 18:01:45 -- common/autotest_common.sh@630 -- # type -t wait 00:14:56.572 18:01:45 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:56.572 18:01:45 -- common/autotest_common.sh@641 -- # wait 3284363 00:14:56.572 18:01:45 -- common/autotest_common.sh@641 -- # es=1 00:14:56.572 18:01:45 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:14:56.572 18:01:45 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:14:56.572 18:01:45 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:14:56.572 18:01:45 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:56.572 18:01:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:56.572 18:01:45 -- common/autotest_common.sh@10 -- # set +x 00:14:56.572 18:01:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:56.572 18:01:45 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:56.572 18:01:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:56.572 18:01:45 -- common/autotest_common.sh@10 -- # set +x 00:14:56.572 [2024-04-15 18:01:45.255182] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:56.572 18:01:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:56.572 18:01:45 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:56.572 18:01:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:56.572 18:01:45 -- common/autotest_common.sh@10 -- # set +x 00:14:56.572 18:01:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:56.572 18:01:45 -- target/delete_subsystem.sh@54 -- # perf_pid=3284764 00:14:56.572 18:01:45 -- target/delete_subsystem.sh@56 -- # delay=0 00:14:56.572 18:01:45 -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:14:56.572 18:01:45 -- target/delete_subsystem.sh@57 -- # kill -0 3284764 00:14:56.572 18:01:45 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:56.572 EAL: No free 2048 kB hugepages reported on node 1 00:14:56.572 [2024-04-15 18:01:45.322233] subsystem.c:1431:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
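Stepping back from the trace: the subsystem's namespace sits behind a delay bdev (-r/-t/-w/-n 1000000, i.e. roughly one second of injected latency per I/O, taking the arguments as microseconds), which guarantees plenty of I/O is in flight when the subsystem is torn down. A condensed, hand-written reconstruction of the first round's flow is below; the rpc method names, options, and paths are copied from this log, but the real script drives them through the rpc_cmd wrapper and includes setup and teardown not shown here.

    #!/usr/bin/env bash
    # Condensed sketch of delete_subsystem.sh's first round, as traced above
    # (assumption: simplified; error handling and nvmftestinit/fini omitted).
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
    nqn=nqn.2016-06.io.spdk:cnode1

    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_null_create NULL1 1000 512     # null backing bdev, 512 B blocks
    $rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_subsystem_add_ns "$nqn" Delay0

    # Queue I/O behind the delay bdev, then delete the subsystem mid-run.
    $perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!
    sleep 2
    $rpc nvmf_delete_subsystem "$nqn"

    # perf must fail fast with I/O errors instead of hanging; poll until it exits.
    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do
        (( delay++ > 30 )) && exit 1   # cap the wait at ~15 s of 0.5 s sleeps
        sleep 0.5
    done

The 'NOT wait 3284363' seen in the trace is the matching assertion: waiting on the finished perf process must report a nonzero exit status, i.e. perf really did exit with I/O errors rather than completing cleanly.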
00:14:56.831 18:01:45 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:56.831 18:01:45 -- target/delete_subsystem.sh@57 -- # kill -0 3284764 00:14:56.831 18:01:45 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:57.397 18:01:46 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:57.397 18:01:46 -- target/delete_subsystem.sh@57 -- # kill -0 3284764 00:14:57.397 18:01:46 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:57.963 18:01:46 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:57.963 18:01:46 -- target/delete_subsystem.sh@57 -- # kill -0 3284764 00:14:57.963 18:01:46 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:58.530 18:01:47 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:58.530 18:01:47 -- target/delete_subsystem.sh@57 -- # kill -0 3284764 00:14:58.530 18:01:47 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:59.096 18:01:47 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:59.096 18:01:47 -- target/delete_subsystem.sh@57 -- # kill -0 3284764 00:14:59.096 18:01:47 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:59.354 18:01:48 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:59.354 18:01:48 -- target/delete_subsystem.sh@57 -- # kill -0 3284764 00:14:59.354 18:01:48 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:59.612 Initializing NVMe Controllers 00:14:59.612 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:59.612 Controller IO queue size 128, less than required. 00:14:59.612 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:59.612 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:14:59.612 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:14:59.612 Initialization complete. Launching workers. 
00:14:59.612 ======================================================== 00:14:59.612 Latency(us) 00:14:59.612 Device Information : IOPS MiB/s Average min max 00:14:59.612 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003636.40 1000243.06 1010623.97 00:14:59.612 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1006332.63 1000496.80 1042606.56 00:14:59.612 ======================================================== 00:14:59.612 Total : 256.00 0.12 1004984.52 1000243.06 1042606.56 00:14:59.612 00:14:59.870 18:01:48 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:59.871 18:01:48 -- target/delete_subsystem.sh@57 -- # kill -0 3284764 00:14:59.871 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3284764) - No such process 00:14:59.871 18:01:48 -- target/delete_subsystem.sh@67 -- # wait 3284764 00:14:59.871 18:01:48 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:14:59.871 18:01:48 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:14:59.871 18:01:48 -- nvmf/common.sh@477 -- # nvmfcleanup 00:14:59.871 18:01:48 -- nvmf/common.sh@117 -- # sync 00:14:59.871 18:01:48 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:59.871 18:01:48 -- nvmf/common.sh@120 -- # set +e 00:14:59.871 18:01:48 -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:59.871 18:01:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:59.871 rmmod nvme_tcp 00:14:59.871 rmmod nvme_fabrics 00:15:00.130 rmmod nvme_keyring 00:15:00.130 18:01:48 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:00.130 18:01:48 -- nvmf/common.sh@124 -- # set -e 00:15:00.130 18:01:48 -- nvmf/common.sh@125 -- # return 0 00:15:00.130 18:01:48 -- nvmf/common.sh@478 -- # '[' -n 3284222 ']' 00:15:00.130 18:01:48 -- nvmf/common.sh@479 -- # killprocess 3284222 00:15:00.130 18:01:48 -- common/autotest_common.sh@936 -- # '[' -z 3284222 ']' 00:15:00.130 18:01:48 -- common/autotest_common.sh@940 -- # kill -0 3284222 00:15:00.130 18:01:48 -- common/autotest_common.sh@941 -- # uname 00:15:00.130 18:01:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:00.130 18:01:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3284222 00:15:00.130 18:01:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:00.130 18:01:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:00.130 18:01:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3284222' 00:15:00.130 killing process with pid 3284222 00:15:00.130 18:01:48 -- common/autotest_common.sh@955 -- # kill 3284222 00:15:00.130 18:01:48 -- common/autotest_common.sh@960 -- # wait 3284222 00:15:00.389 18:01:49 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:00.389 18:01:49 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:00.389 18:01:49 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:00.389 18:01:49 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:00.389 18:01:49 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:00.389 18:01:49 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:00.389 18:01:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:00.389 18:01:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:02.294 18:01:51 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:02.294 00:15:02.294 real 0m12.544s 00:15:02.294 user 0m27.697s 00:15:02.294 sys 0m3.283s 
00:15:02.294 18:01:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:02.294 18:01:51 -- common/autotest_common.sh@10 -- # set +x 00:15:02.294 ************************************ 00:15:02.294 END TEST nvmf_delete_subsystem 00:15:02.294 ************************************ 00:15:02.294 18:01:51 -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:15:02.294 18:01:51 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:02.294 18:01:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:02.294 18:01:51 -- common/autotest_common.sh@10 -- # set +x 00:15:02.551 ************************************ 00:15:02.551 START TEST nvmf_ns_masking 00:15:02.551 ************************************ 00:15:02.551 18:01:51 -- common/autotest_common.sh@1111 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:15:02.551 * Looking for test storage... 00:15:02.551 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:02.551 18:01:51 -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:02.551 18:01:51 -- nvmf/common.sh@7 -- # uname -s 00:15:02.551 18:01:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:02.551 18:01:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:02.551 18:01:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:02.551 18:01:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:02.551 18:01:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:02.551 18:01:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:02.551 18:01:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:02.551 18:01:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:02.551 18:01:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:02.551 18:01:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:02.551 18:01:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:02.551 18:01:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:15:02.551 18:01:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:02.551 18:01:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:02.551 18:01:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:02.551 18:01:51 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:02.551 18:01:51 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:02.551 18:01:51 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:02.551 18:01:51 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:02.552 18:01:51 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:02.552 18:01:51 -- paths/export.sh@2-4 -- # PATH=[the /opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin toolchain trio, prepended once per prior sourcing, ahead of /usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin; the same value is re-traced at export.sh@3 and @4] 00:15:02.552 18:01:51 -- paths/export.sh@5 -- # export PATH 00:15:02.552 18:01:51 -- paths/export.sh@6 -- # echo [same PATH value] 00:15:02.552 18:01:51 -- nvmf/common.sh@47 -- # : 0 00:15:02.552 18:01:51 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:02.552 18:01:51 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:02.552 18:01:51 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:02.552 18:01:51 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:02.552 18:01:51 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:02.552 18:01:51 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:02.552 18:01:51 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:02.552 18:01:51 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:02.552 18:01:51 -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:02.552 18:01:51 -- target/ns_masking.sh@11 -- # loops=5 00:15:02.552 18:01:51 -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:15:02.552 18:01:51 -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:15:02.552 18:01:51 -- target/ns_masking.sh@15 -- # uuidgen 00:15:02.552 18:01:51 -- target/ns_masking.sh@15 -- # HOSTID=b9e5c0a4-6fba-42d5-9c58-dd1de9eb1801 00:15:02.552 18:01:51 -- target/ns_masking.sh@44 -- # nvmftestinit 00:15:02.552 18:01:51 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:02.552 18:01:51 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:02.552 18:01:51 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:02.552 18:01:51 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:02.552 18:01:51 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:02.552 18:01:51 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:02.552 18:01:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:02.552 18:01:51 -- common/autotest_common.sh@22 
-- # _remove_spdk_ns 00:15:02.552 18:01:51 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:15:02.552 18:01:51 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:15:02.552 18:01:51 -- nvmf/common.sh@285 -- # xtrace_disable 00:15:02.552 18:01:51 -- common/autotest_common.sh@10 -- # set +x 00:15:05.093 18:01:53 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:05.093 18:01:53 -- nvmf/common.sh@291 -- # pci_devs=() 00:15:05.093 18:01:53 -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:05.093 18:01:53 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:05.093 18:01:53 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:05.093 18:01:53 -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:05.093 18:01:53 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:05.093 18:01:53 -- nvmf/common.sh@295 -- # net_devs=() 00:15:05.093 18:01:53 -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:05.093 18:01:53 -- nvmf/common.sh@296 -- # e810=() 00:15:05.093 18:01:53 -- nvmf/common.sh@296 -- # local -ga e810 00:15:05.093 18:01:53 -- nvmf/common.sh@297 -- # x722=() 00:15:05.093 18:01:53 -- nvmf/common.sh@297 -- # local -ga x722 00:15:05.093 18:01:53 -- nvmf/common.sh@298 -- # mlx=() 00:15:05.093 18:01:53 -- nvmf/common.sh@298 -- # local -ga mlx 00:15:05.093 18:01:53 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:05.093 18:01:53 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:05.093 18:01:53 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:05.093 18:01:53 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:05.093 18:01:53 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:05.093 18:01:53 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:05.093 18:01:53 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:05.093 18:01:53 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:05.093 18:01:53 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:05.093 18:01:53 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:05.093 18:01:53 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:05.093 18:01:53 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:05.093 18:01:53 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:05.093 18:01:53 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:05.093 18:01:53 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:05.093 18:01:53 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:05.093 18:01:53 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:05.093 18:01:53 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:05.093 18:01:53 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:15:05.093 Found 0000:84:00.0 (0x8086 - 0x159b) 00:15:05.093 18:01:53 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:05.093 18:01:53 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:05.093 18:01:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:05.093 18:01:53 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:05.093 18:01:53 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:05.093 18:01:53 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:05.093 18:01:53 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:15:05.093 Found 0000:84:00.1 (0x8086 - 0x159b) 00:15:05.093 18:01:53 -- nvmf/common.sh@342 -- # 
[[ ice == unknown ]] 00:15:05.093 18:01:53 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:05.093 18:01:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:05.093 18:01:53 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:05.093 18:01:53 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:05.093 18:01:53 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:05.093 18:01:53 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:05.093 18:01:53 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:05.093 18:01:53 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:05.093 18:01:53 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:05.093 18:01:53 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:05.093 18:01:53 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:05.093 18:01:53 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:15:05.093 Found net devices under 0000:84:00.0: cvl_0_0 00:15:05.093 18:01:53 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:05.093 18:01:53 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:05.093 18:01:53 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:05.093 18:01:53 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:05.093 18:01:53 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:05.093 18:01:53 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:15:05.093 Found net devices under 0000:84:00.1: cvl_0_1 00:15:05.093 18:01:53 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:05.093 18:01:53 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:15:05.093 18:01:53 -- nvmf/common.sh@403 -- # is_hw=yes 00:15:05.093 18:01:53 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:15:05.093 18:01:53 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:15:05.093 18:01:53 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:15:05.093 18:01:53 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:05.093 18:01:53 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:05.093 18:01:53 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:05.093 18:01:53 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:05.093 18:01:53 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:05.093 18:01:53 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:05.093 18:01:53 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:05.093 18:01:53 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:05.093 18:01:53 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:05.093 18:01:53 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:05.093 18:01:53 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:05.093 18:01:53 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:05.093 18:01:53 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:05.094 18:01:53 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:05.094 18:01:53 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:05.094 18:01:53 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:05.094 18:01:53 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:05.094 18:01:53 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:05.094 18:01:53 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 
-i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:05.094 18:01:53 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:05.094 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:05.094 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.143 ms 00:15:05.094 00:15:05.094 --- 10.0.0.2 ping statistics --- 00:15:05.094 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:05.094 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:15:05.094 18:01:53 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:05.094 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:05.094 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:15:05.094 00:15:05.094 --- 10.0.0.1 ping statistics --- 00:15:05.094 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:05.094 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:15:05.094 18:01:53 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:05.094 18:01:53 -- nvmf/common.sh@411 -- # return 0 00:15:05.094 18:01:53 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:05.094 18:01:53 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:05.094 18:01:53 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:05.094 18:01:53 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:05.094 18:01:53 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:05.094 18:01:53 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:05.094 18:01:53 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:05.094 18:01:53 -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:15:05.094 18:01:53 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:05.094 18:01:53 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:05.094 18:01:53 -- common/autotest_common.sh@10 -- # set +x 00:15:05.094 18:01:53 -- nvmf/common.sh@470 -- # nvmfpid=3287132 00:15:05.094 18:01:53 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:05.094 18:01:53 -- nvmf/common.sh@471 -- # waitforlisten 3287132 00:15:05.094 18:01:53 -- common/autotest_common.sh@817 -- # '[' -z 3287132 ']' 00:15:05.094 18:01:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:05.094 18:01:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:05.094 18:01:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:05.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:05.094 18:01:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:05.094 18:01:53 -- common/autotest_common.sh@10 -- # set +x 00:15:05.094 [2024-04-15 18:01:53.745687] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:15:05.094 [2024-04-15 18:01:53.745866] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:05.094 EAL: No free 2048 kB hugepages reported on node 1 00:15:05.094 [2024-04-15 18:01:53.871300] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:05.094 [2024-04-15 18:01:53.966613] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
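What nvmfappstart is doing above: it launches build/bin/nvmf_tgt inside the cvl_0_0_ns_spdk namespace, records the pid, and then waitforlisten blocks until the app answers on its JSON-RPC socket. A minimal sketch of that wait loop, assuming scripts/rpc.py and the default /var/tmp/spdk.sock address (rpc_get_methods is just a cheap RPC that succeeds once the listener is up):

    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do              # max_retries=100, as in the trace above
            kill -0 "$pid" 2>/dev/null || return 1   # target died before it started listening
            scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
            sleep 0.5
        done
        return 1
    }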
00:15:05.094 [2024-04-15 18:01:53.966686] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:05.094 [2024-04-15 18:01:53.966703] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:05.094 [2024-04-15 18:01:53.966719] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:05.094 [2024-04-15 18:01:53.966731] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:05.094 [2024-04-15 18:01:53.966829] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:05.094 [2024-04-15 18:01:53.966882] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:05.094 [2024-04-15 18:01:53.966934] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:05.094 [2024-04-15 18:01:53.966937] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:05.352 18:01:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:05.352 18:01:54 -- common/autotest_common.sh@850 -- # return 0 00:15:05.352 18:01:54 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:05.352 18:01:54 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:05.352 18:01:54 -- common/autotest_common.sh@10 -- # set +x 00:15:05.352 18:01:54 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:05.352 18:01:54 -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:05.610 [2024-04-15 18:01:54.436014] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:05.610 18:01:54 -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:15:05.610 18:01:54 -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:15:05.610 18:01:54 -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:05.868 Malloc1 00:15:05.868 18:01:54 -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:06.434 Malloc2 00:15:06.434 18:01:55 -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:07.018 18:01:55 -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:15:07.587 18:01:56 -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:08.187 [2024-04-15 18:01:56.877837] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:08.187 18:01:56 -- target/ns_masking.sh@61 -- # connect 00:15:08.187 18:01:56 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I b9e5c0a4-6fba-42d5-9c58-dd1de9eb1801 -a 10.0.0.2 -s 4420 -i 4 00:15:08.187 18:01:57 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:15:08.187 18:01:57 -- common/autotest_common.sh@1184 -- # local i=0 00:15:08.187 18:01:57 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:15:08.187 18:01:57 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 
00:15:08.187 18:01:57 -- common/autotest_common.sh@1191 -- # sleep 2 00:15:10.719 18:01:59 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:15:10.719 18:01:59 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:15:10.719 18:01:59 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:15:10.719 18:01:59 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:15:10.719 18:01:59 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:15:10.719 18:01:59 -- common/autotest_common.sh@1194 -- # return 0 00:15:10.719 18:01:59 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:15:10.719 18:01:59 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:10.719 18:01:59 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:15:10.719 18:01:59 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:15:10.719 18:01:59 -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:15:10.719 18:01:59 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:10.719 18:01:59 -- target/ns_masking.sh@39 -- # grep 0x1 00:15:10.719 [ 0]:0x1 00:15:10.719 18:01:59 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:10.719 18:01:59 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:10.719 18:01:59 -- target/ns_masking.sh@40 -- # nguid=6fc89c457f074507b11c9ea666ffd159 00:15:10.719 18:01:59 -- target/ns_masking.sh@41 -- # [[ 6fc89c457f074507b11c9ea666ffd159 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:10.719 18:01:59 -- target/ns_masking.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:15:10.719 18:01:59 -- target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:15:10.719 18:01:59 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:10.719 18:01:59 -- target/ns_masking.sh@39 -- # grep 0x1 00:15:10.719 [ 0]:0x1 00:15:10.719 18:01:59 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:10.719 18:01:59 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:10.720 18:01:59 -- target/ns_masking.sh@40 -- # nguid=6fc89c457f074507b11c9ea666ffd159 00:15:10.720 18:01:59 -- target/ns_masking.sh@41 -- # [[ 6fc89c457f074507b11c9ea666ffd159 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:10.720 18:01:59 -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:15:10.720 18:01:59 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:10.720 18:01:59 -- target/ns_masking.sh@39 -- # grep 0x2 00:15:10.720 [ 1]:0x2 00:15:10.720 18:01:59 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:10.720 18:01:59 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:10.720 18:01:59 -- target/ns_masking.sh@40 -- # nguid=b41fc141e4074735b810b785a94c65ed 00:15:10.720 18:01:59 -- target/ns_masking.sh@41 -- # [[ b41fc141e4074735b810b785a94c65ed != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:10.720 18:01:59 -- target/ns_masking.sh@69 -- # disconnect 00:15:10.720 18:01:59 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:10.977 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:10.977 18:01:59 -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:11.235 18:02:00 -- target/ns_masking.sh@74 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:15:11.493 18:02:00 -- target/ns_masking.sh@77 -- # connect 1 00:15:11.493 18:02:00 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I b9e5c0a4-6fba-42d5-9c58-dd1de9eb1801 -a 10.0.0.2 -s 4420 -i 4 00:15:11.752 18:02:00 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:15:11.752 18:02:00 -- common/autotest_common.sh@1184 -- # local i=0 00:15:11.752 18:02:00 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:15:11.752 18:02:00 -- common/autotest_common.sh@1186 -- # [[ -n 1 ]] 00:15:11.752 18:02:00 -- common/autotest_common.sh@1187 -- # nvme_device_counter=1 00:15:11.752 18:02:00 -- common/autotest_common.sh@1191 -- # sleep 2 00:15:13.657 18:02:02 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:15:13.657 18:02:02 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:15:13.657 18:02:02 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:15:13.657 18:02:02 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:15:13.657 18:02:02 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:15:13.657 18:02:02 -- common/autotest_common.sh@1194 -- # return 0 00:15:13.657 18:02:02 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:15:13.657 18:02:02 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:13.915 18:02:02 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:15:13.915 18:02:02 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:15:13.915 18:02:02 -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:15:13.915 18:02:02 -- common/autotest_common.sh@638 -- # local es=0 00:15:13.915 18:02:02 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:15:13.915 18:02:02 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:15:13.915 18:02:02 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:13.915 18:02:02 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:15:13.915 18:02:02 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:13.915 18:02:02 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:15:13.915 18:02:02 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:13.915 18:02:02 -- target/ns_masking.sh@39 -- # grep 0x1 00:15:13.915 18:02:02 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:13.915 18:02:02 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:13.915 18:02:02 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:15:13.915 18:02:02 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:13.915 18:02:02 -- common/autotest_common.sh@641 -- # es=1 00:15:13.915 18:02:02 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:13.915 18:02:02 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:15:13.915 18:02:02 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:15:13.915 18:02:02 -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:15:13.915 18:02:02 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:13.915 18:02:02 -- target/ns_masking.sh@39 -- # grep 0x2 00:15:13.915 [ 0]:0x2 00:15:13.915 18:02:02 -- target/ns_masking.sh@40 -- # nvme id-ns 
/dev/nvme0 -n 0x2 -o json 00:15:13.915 18:02:02 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:13.915 18:02:02 -- target/ns_masking.sh@40 -- # nguid=b41fc141e4074735b810b785a94c65ed 00:15:13.915 18:02:02 -- target/ns_masking.sh@41 -- # [[ b41fc141e4074735b810b785a94c65ed != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:13.915 18:02:02 -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:14.173 18:02:03 -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:15:14.173 18:02:03 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:14.173 18:02:03 -- target/ns_masking.sh@39 -- # grep 0x1 00:15:14.173 [ 0]:0x1 00:15:14.173 18:02:03 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:14.173 18:02:03 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:14.431 18:02:03 -- target/ns_masking.sh@40 -- # nguid=6fc89c457f074507b11c9ea666ffd159 00:15:14.431 18:02:03 -- target/ns_masking.sh@41 -- # [[ 6fc89c457f074507b11c9ea666ffd159 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:14.431 18:02:03 -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:15:14.431 18:02:03 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:14.431 18:02:03 -- target/ns_masking.sh@39 -- # grep 0x2 00:15:14.431 [ 1]:0x2 00:15:14.431 18:02:03 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:14.431 18:02:03 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:14.431 18:02:03 -- target/ns_masking.sh@40 -- # nguid=b41fc141e4074735b810b785a94c65ed 00:15:14.431 18:02:03 -- target/ns_masking.sh@41 -- # [[ b41fc141e4074735b810b785a94c65ed != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:14.431 18:02:03 -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:14.689 18:02:03 -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:15:14.689 18:02:03 -- common/autotest_common.sh@638 -- # local es=0 00:15:14.689 18:02:03 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:15:14.689 18:02:03 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:15:14.689 18:02:03 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:14.689 18:02:03 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:15:14.689 18:02:03 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:14.689 18:02:03 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:15:14.689 18:02:03 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:14.689 18:02:03 -- target/ns_masking.sh@39 -- # grep 0x1 00:15:14.689 18:02:03 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:14.689 18:02:03 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:14.689 18:02:03 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:15:14.689 18:02:03 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:14.689 18:02:03 -- common/autotest_common.sh@641 -- # es=1 00:15:14.689 18:02:03 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:14.689 18:02:03 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:15:14.689 18:02:03 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:15:14.689 18:02:03 -- 
target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:15:14.689 18:02:03 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:14.689 18:02:03 -- target/ns_masking.sh@39 -- # grep 0x2 00:15:14.689 [ 0]:0x2 00:15:14.689 18:02:03 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:14.689 18:02:03 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:14.689 18:02:03 -- target/ns_masking.sh@40 -- # nguid=b41fc141e4074735b810b785a94c65ed 00:15:14.689 18:02:03 -- target/ns_masking.sh@41 -- # [[ b41fc141e4074735b810b785a94c65ed != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:14.689 18:02:03 -- target/ns_masking.sh@91 -- # disconnect 00:15:14.689 18:02:03 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:14.689 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:14.689 18:02:03 -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:14.948 18:02:03 -- target/ns_masking.sh@95 -- # connect 2 00:15:14.948 18:02:03 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I b9e5c0a4-6fba-42d5-9c58-dd1de9eb1801 -a 10.0.0.2 -s 4420 -i 4 00:15:15.206 18:02:03 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:15.206 18:02:03 -- common/autotest_common.sh@1184 -- # local i=0 00:15:15.206 18:02:03 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:15:15.206 18:02:03 -- common/autotest_common.sh@1186 -- # [[ -n 2 ]] 00:15:15.206 18:02:03 -- common/autotest_common.sh@1187 -- # nvme_device_counter=2 00:15:15.206 18:02:03 -- common/autotest_common.sh@1191 -- # sleep 2 00:15:17.113 18:02:05 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:15:17.113 18:02:05 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:15:17.113 18:02:05 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:15:17.113 18:02:06 -- common/autotest_common.sh@1193 -- # nvme_devices=2 00:15:17.113 18:02:06 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:15:17.113 18:02:06 -- common/autotest_common.sh@1194 -- # return 0 00:15:17.113 18:02:06 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:15:17.113 18:02:06 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:17.113 18:02:06 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:15:17.113 18:02:06 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:15:17.113 18:02:06 -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:15:17.113 18:02:06 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:17.113 18:02:06 -- target/ns_masking.sh@39 -- # grep 0x1 00:15:17.113 [ 0]:0x1 00:15:17.113 18:02:06 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:17.113 18:02:06 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:17.371 18:02:06 -- target/ns_masking.sh@40 -- # nguid=6fc89c457f074507b11c9ea666ffd159 00:15:17.371 18:02:06 -- target/ns_masking.sh@41 -- # [[ 6fc89c457f074507b11c9ea666ffd159 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:17.371 18:02:06 -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:15:17.371 18:02:06 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:17.371 18:02:06 -- target/ns_masking.sh@39 -- # grep 0x2 00:15:17.371 [ 1]:0x2 
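ns_is_visible, traced above, is the heart of the masking checks: it greps the controller's active namespace list for the nsid and then reads that namespace's NGUID, because a namespace hidden from this host either vanishes from nvme list-ns or identifies with an all-zero NGUID. A minimal re-creation, assuming nvme-cli JSON output and jq as used in this run:

    ns_is_visible_sketch() {
        local ctrl=$1 nsid=$2 nguid
        nvme list-ns "/dev/$ctrl" | grep -q "$nsid" || return 1
        nguid=$(nvme id-ns "/dev/$ctrl" -n "$nsid" -o json | jq -r .nguid)
        # 16-byte NGUID printed as 32 hex chars; all zeros means the ns is masked
        [[ $nguid != "00000000000000000000000000000000" ]]
    }

The NOT wrapper seen around some calls simply inverts the expectation, so NOT ns_is_visible 0x1 asserts that the namespace is masked from this host.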
00:15:17.371 18:02:06 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:17.371 18:02:06 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:17.371 18:02:06 -- target/ns_masking.sh@40 -- # nguid=b41fc141e4074735b810b785a94c65ed 00:15:17.371 18:02:06 -- target/ns_masking.sh@41 -- # [[ b41fc141e4074735b810b785a94c65ed != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:17.371 18:02:06 -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:17.630 18:02:06 -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:15:17.630 18:02:06 -- common/autotest_common.sh@638 -- # local es=0 00:15:17.630 18:02:06 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:15:17.630 18:02:06 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:15:17.630 18:02:06 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:17.630 18:02:06 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:15:17.630 18:02:06 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:17.630 18:02:06 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:15:17.630 18:02:06 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:17.630 18:02:06 -- target/ns_masking.sh@39 -- # grep 0x1 00:15:17.630 18:02:06 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:17.630 18:02:06 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:17.630 18:02:06 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:15:17.630 18:02:06 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:17.630 18:02:06 -- common/autotest_common.sh@641 -- # es=1 00:15:17.630 18:02:06 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:17.630 18:02:06 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:15:17.630 18:02:06 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:15:17.630 18:02:06 -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:15:17.630 18:02:06 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:17.630 18:02:06 -- target/ns_masking.sh@39 -- # grep 0x2 00:15:17.630 [ 0]:0x2 00:15:17.630 18:02:06 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:17.630 18:02:06 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:17.890 18:02:06 -- target/ns_masking.sh@40 -- # nguid=b41fc141e4074735b810b785a94c65ed 00:15:17.890 18:02:06 -- target/ns_masking.sh@41 -- # [[ b41fc141e4074735b810b785a94c65ed != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:17.890 18:02:06 -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:17.890 18:02:06 -- common/autotest_common.sh@638 -- # local es=0 00:15:17.890 18:02:06 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:17.890 18:02:06 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:17.890 18:02:06 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:17.890 18:02:06 -- common/autotest_common.sh@630 -- # type -t 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:17.890 18:02:06 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:17.890 18:02:06 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:17.890 18:02:06 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:17.890 18:02:06 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:17.890 18:02:06 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:17.890 18:02:06 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:18.458 [2024-04-15 18:02:07.164131] nvmf_rpc.c:1770:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:15:18.458 request: 00:15:18.458 { 00:15:18.458 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:18.458 "nsid": 2, 00:15:18.458 "host": "nqn.2016-06.io.spdk:host1", 00:15:18.458 "method": "nvmf_ns_remove_host", 00:15:18.458 "req_id": 1 00:15:18.458 } 00:15:18.458 Got JSON-RPC error response 00:15:18.458 response: 00:15:18.458 { 00:15:18.458 "code": -32602, 00:15:18.458 "message": "Invalid parameters" 00:15:18.458 } 00:15:18.458 18:02:07 -- common/autotest_common.sh@641 -- # es=1 00:15:18.458 18:02:07 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:18.458 18:02:07 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:15:18.458 18:02:07 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:15:18.458 18:02:07 -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:15:18.458 18:02:07 -- common/autotest_common.sh@638 -- # local es=0 00:15:18.458 18:02:07 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:15:18.458 18:02:07 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:15:18.459 18:02:07 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:18.459 18:02:07 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:15:18.459 18:02:07 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:18.459 18:02:07 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:15:18.459 18:02:07 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:18.459 18:02:07 -- target/ns_masking.sh@39 -- # grep 0x1 00:15:18.459 18:02:07 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:18.459 18:02:07 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:18.459 18:02:07 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:15:18.459 18:02:07 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:18.459 18:02:07 -- common/autotest_common.sh@641 -- # es=1 00:15:18.459 18:02:07 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:18.459 18:02:07 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:15:18.459 18:02:07 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:15:18.459 18:02:07 -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:15:18.459 18:02:07 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:18.459 18:02:07 -- target/ns_masking.sh@39 -- # grep 0x2 00:15:18.459 [ 0]:0x2 00:15:18.459 18:02:07 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:18.459 18:02:07 -- 
target/ns_masking.sh@40 -- # jq -r .nguid 00:15:18.459 18:02:07 -- target/ns_masking.sh@40 -- # nguid=b41fc141e4074735b810b785a94c65ed 00:15:18.459 18:02:07 -- target/ns_masking.sh@41 -- # [[ b41fc141e4074735b810b785a94c65ed != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:18.459 18:02:07 -- target/ns_masking.sh@108 -- # disconnect 00:15:18.459 18:02:07 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:18.459 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:18.459 18:02:07 -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:19.025 18:02:07 -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:15:19.025 18:02:07 -- target/ns_masking.sh@114 -- # nvmftestfini 00:15:19.025 18:02:07 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:19.025 18:02:07 -- nvmf/common.sh@117 -- # sync 00:15:19.025 18:02:07 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:19.025 18:02:07 -- nvmf/common.sh@120 -- # set +e 00:15:19.025 18:02:07 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:19.025 18:02:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:19.025 rmmod nvme_tcp 00:15:19.025 rmmod nvme_fabrics 00:15:19.025 rmmod nvme_keyring 00:15:19.025 18:02:07 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:19.025 18:02:07 -- nvmf/common.sh@124 -- # set -e 00:15:19.025 18:02:07 -- nvmf/common.sh@125 -- # return 0 00:15:19.025 18:02:07 -- nvmf/common.sh@478 -- # '[' -n 3287132 ']' 00:15:19.025 18:02:07 -- nvmf/common.sh@479 -- # killprocess 3287132 00:15:19.025 18:02:07 -- common/autotest_common.sh@936 -- # '[' -z 3287132 ']' 00:15:19.025 18:02:07 -- common/autotest_common.sh@940 -- # kill -0 3287132 00:15:19.025 18:02:07 -- common/autotest_common.sh@941 -- # uname 00:15:19.025 18:02:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:19.025 18:02:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3287132 00:15:19.025 18:02:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:19.025 18:02:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:19.025 18:02:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3287132' 00:15:19.025 killing process with pid 3287132 00:15:19.025 18:02:07 -- common/autotest_common.sh@955 -- # kill 3287132 00:15:19.025 18:02:07 -- common/autotest_common.sh@960 -- # wait 3287132 00:15:19.284 18:02:08 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:19.284 18:02:08 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:19.284 18:02:08 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:19.284 18:02:08 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:19.284 18:02:08 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:19.285 18:02:08 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:19.285 18:02:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:19.285 18:02:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:21.197 18:02:10 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:21.197 00:15:21.197 real 0m18.768s 00:15:21.197 user 1m1.293s 00:15:21.197 sys 0m4.340s 00:15:21.197 18:02:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:21.197 18:02:10 -- common/autotest_common.sh@10 -- # set +x 00:15:21.197 ************************************ 00:15:21.197 END TEST nvmf_ns_masking 00:15:21.197 
************************************ 00:15:21.197 18:02:10 -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:15:21.197 18:02:10 -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:21.197 18:02:10 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:21.197 18:02:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:21.197 18:02:10 -- common/autotest_common.sh@10 -- # set +x 00:15:21.456 ************************************ 00:15:21.456 START TEST nvmf_nvme_cli 00:15:21.456 ************************************ 00:15:21.456 18:02:10 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:21.456 * Looking for test storage... 00:15:21.456 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:21.456 18:02:10 -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:21.456 18:02:10 -- nvmf/common.sh@7 -- # uname -s 00:15:21.456 18:02:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:21.456 18:02:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:21.456 18:02:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:21.456 18:02:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:21.456 18:02:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:21.456 18:02:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:21.456 18:02:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:21.456 18:02:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:21.456 18:02:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:21.456 18:02:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:21.456 18:02:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:21.456 18:02:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:15:21.456 18:02:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:21.456 18:02:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:21.456 18:02:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:21.456 18:02:10 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:21.456 18:02:10 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:21.456 18:02:10 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:21.456 18:02:10 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:21.456 18:02:10 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:21.456 18:02:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.456 18:02:10 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.456 18:02:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.456 18:02:10 -- paths/export.sh@5 -- # export PATH 00:15:21.456 18:02:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.456 18:02:10 -- nvmf/common.sh@47 -- # : 0 00:15:21.456 18:02:10 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:21.456 18:02:10 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:21.456 18:02:10 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:21.456 18:02:10 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:21.456 18:02:10 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:21.456 18:02:10 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:21.456 18:02:10 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:21.456 18:02:10 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:21.456 18:02:10 -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:21.456 18:02:10 -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:21.456 18:02:10 -- target/nvme_cli.sh@14 -- # devs=() 00:15:21.456 18:02:10 -- target/nvme_cli.sh@16 -- # nvmftestinit 00:15:21.457 18:02:10 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:21.457 18:02:10 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:21.457 18:02:10 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:21.457 18:02:10 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:21.457 18:02:10 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:21.457 18:02:10 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:21.457 18:02:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:21.457 18:02:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:21.457 18:02:10 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:15:21.457 18:02:10 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:15:21.457 18:02:10 -- nvmf/common.sh@285 -- # xtrace_disable 00:15:21.457 18:02:10 -- common/autotest_common.sh@10 -- # set +x 00:15:24.037 18:02:12 -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:24.037 18:02:12 -- nvmf/common.sh@291 -- # pci_devs=() 00:15:24.037 18:02:12 -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:24.037 18:02:12 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:24.037 18:02:12 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:24.037 18:02:12 -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:24.037 18:02:12 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:24.037 18:02:12 -- nvmf/common.sh@295 -- # net_devs=() 00:15:24.037 18:02:12 -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:24.037 18:02:12 -- nvmf/common.sh@296 -- # e810=() 00:15:24.037 18:02:12 -- nvmf/common.sh@296 -- # local -ga e810 00:15:24.037 18:02:12 -- nvmf/common.sh@297 -- # x722=() 00:15:24.037 18:02:12 -- nvmf/common.sh@297 -- # local -ga x722 00:15:24.037 18:02:12 -- nvmf/common.sh@298 -- # mlx=() 00:15:24.037 18:02:12 -- nvmf/common.sh@298 -- # local -ga mlx 00:15:24.037 18:02:12 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:24.037 18:02:12 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:24.037 18:02:12 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:24.037 18:02:12 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:24.037 18:02:12 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:24.037 18:02:12 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:24.037 18:02:12 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:24.037 18:02:12 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:24.037 18:02:12 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:24.037 18:02:12 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:24.037 18:02:12 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:24.037 18:02:12 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:24.037 18:02:12 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:24.037 18:02:12 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:24.037 18:02:12 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:24.037 18:02:12 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:24.037 18:02:12 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:24.037 18:02:12 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:24.037 18:02:12 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:15:24.037 Found 0000:84:00.0 (0x8086 - 0x159b) 00:15:24.037 18:02:12 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:24.037 18:02:12 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:24.037 18:02:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:24.037 18:02:12 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:24.037 18:02:12 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:24.037 18:02:12 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:24.037 18:02:12 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:15:24.037 Found 0000:84:00.1 (0x8086 - 0x159b) 00:15:24.037 18:02:12 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:24.037 18:02:12 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:24.037 18:02:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:24.037 18:02:12 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:24.037 18:02:12 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 
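gather_supported_nvmf_pci_devs, replayed here for the nvme_cli test, buckets NICs into e810/x722/mlx arrays by PCI vendor:device ID (0x8086 Intel, 0x15b3 Mellanox) before deciding which ports the TCP transport can use. A rough stand-alone equivalent with pciutils, assuming lspci -Dn emits lines like '0000:84:00.0 0200: 8086:159b' and listing only a few of the IDs from the trace:

    e810=() x722=() mlx=()
    while read -r addr _ id _; do
        case "$id" in
            8086:1592|8086:159b) e810+=("$addr") ;;   # Intel E810 family
            8086:37d2)           x722+=("$addr") ;;   # Intel X722
            15b3:1017|15b3:101d) mlx+=("$addr") ;;    # Mellanox ConnectX IDs
        esac
    done < <(lspci -Dn)
    (( ${#e810[@]} )) && printf 'Found %s (e810)\n' "${e810[@]}"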
00:15:24.037 18:02:12 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:24.037 18:02:12 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:24.037 18:02:12 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:24.037 18:02:12 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:24.037 18:02:12 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:24.037 18:02:12 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:24.037 18:02:12 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:24.037 18:02:12 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:15:24.037 Found net devices under 0000:84:00.0: cvl_0_0 00:15:24.037 18:02:12 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:24.037 18:02:12 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:24.037 18:02:12 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:24.037 18:02:12 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:24.037 18:02:12 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:24.037 18:02:12 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:15:24.037 Found net devices under 0000:84:00.1: cvl_0_1 00:15:24.037 18:02:12 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:24.037 18:02:12 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:15:24.037 18:02:12 -- nvmf/common.sh@403 -- # is_hw=yes 00:15:24.037 18:02:12 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:15:24.037 18:02:12 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:15:24.037 18:02:12 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:15:24.037 18:02:12 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:24.037 18:02:12 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:24.037 18:02:12 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:24.037 18:02:12 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:24.037 18:02:12 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:24.037 18:02:12 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:24.037 18:02:12 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:24.037 18:02:12 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:24.037 18:02:12 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:24.037 18:02:12 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:24.037 18:02:12 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:24.037 18:02:12 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:24.037 18:02:12 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:24.037 18:02:12 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:24.037 18:02:12 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:24.037 18:02:12 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:24.037 18:02:12 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:24.037 18:02:12 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:24.037 18:02:12 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:24.037 18:02:12 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:24.037 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:24.037 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.250 ms 00:15:24.037 00:15:24.037 --- 10.0.0.2 ping statistics --- 00:15:24.037 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:24.037 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:15:24.037 18:02:12 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:24.037 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:24.037 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms 00:15:24.037 00:15:24.037 --- 10.0.0.1 ping statistics --- 00:15:24.037 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:24.037 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:15:24.037 18:02:12 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:24.037 18:02:12 -- nvmf/common.sh@411 -- # return 0 00:15:24.037 18:02:12 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:24.037 18:02:12 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:24.037 18:02:12 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:24.037 18:02:12 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:24.037 18:02:12 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:24.037 18:02:12 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:24.037 18:02:12 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:24.037 18:02:12 -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:15:24.037 18:02:12 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:24.037 18:02:12 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:24.037 18:02:12 -- common/autotest_common.sh@10 -- # set +x 00:15:24.037 18:02:12 -- nvmf/common.sh@470 -- # nvmfpid=3290970 00:15:24.037 18:02:12 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:24.037 18:02:12 -- nvmf/common.sh@471 -- # waitforlisten 3290970 00:15:24.037 18:02:12 -- common/autotest_common.sh@817 -- # '[' -z 3290970 ']' 00:15:24.037 18:02:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:24.037 18:02:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:24.037 18:02:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:24.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:24.037 18:02:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:24.037 18:02:12 -- common/autotest_common.sh@10 -- # set +x 00:15:24.037 [2024-04-15 18:02:12.804324] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:15:24.037 [2024-04-15 18:02:12.804413] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:24.038 EAL: No free 2048 kB hugepages reported on node 1 00:15:24.038 [2024-04-15 18:02:12.882551] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:24.296 [2024-04-15 18:02:12.980709] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:24.296 [2024-04-15 18:02:12.980773] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
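The -m 0xF mask handed to nvmf_tgt selects CPU cores 0 through 3, which is why four reactor threads report in just below once EAL initialization finishes. Decoding such a mask is plain bit arithmetic; a tiny sketch (assumption: the mask is an ordinary hex bitmask, one bit per core):

    mask=0xF
    for core in {0..31}; do
        if (( (mask >> core) & 1 )); then
            echo "expect a reactor on core $core"
        fi
    done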
00:15:24.296 [2024-04-15 18:02:12.980789] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:24.296 [2024-04-15 18:02:12.980804] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:24.296 [2024-04-15 18:02:12.980816] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:24.296 [2024-04-15 18:02:12.980912] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:24.296 [2024-04-15 18:02:12.980970] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:24.296 [2024-04-15 18:02:12.981020] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:24.296 [2024-04-15 18:02:12.981023] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:24.296 18:02:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:24.296 18:02:13 -- common/autotest_common.sh@850 -- # return 0 00:15:24.296 18:02:13 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:24.296 18:02:13 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:24.296 18:02:13 -- common/autotest_common.sh@10 -- # set +x 00:15:24.296 18:02:13 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:24.296 18:02:13 -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:24.296 18:02:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:24.296 18:02:13 -- common/autotest_common.sh@10 -- # set +x 00:15:24.296 [2024-04-15 18:02:13.147051] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:24.296 18:02:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:24.296 18:02:13 -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:24.296 18:02:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:24.296 18:02:13 -- common/autotest_common.sh@10 -- # set +x 00:15:24.296 Malloc0 00:15:24.296 18:02:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:24.296 18:02:13 -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:24.296 18:02:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:24.296 18:02:13 -- common/autotest_common.sh@10 -- # set +x 00:15:24.296 Malloc1 00:15:24.296 18:02:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:24.296 18:02:13 -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:15:24.296 18:02:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:24.296 18:02:13 -- common/autotest_common.sh@10 -- # set +x 00:15:24.296 18:02:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:24.296 18:02:13 -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:24.297 18:02:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:24.297 18:02:13 -- common/autotest_common.sh@10 -- # set +x 00:15:24.297 18:02:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:24.297 18:02:13 -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:24.297 18:02:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:24.297 18:02:13 -- common/autotest_common.sh@10 -- # set +x 00:15:24.297 18:02:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:24.297 18:02:13 -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.2 -s 4420 00:15:24.297 18:02:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:24.297 18:02:13 -- common/autotest_common.sh@10 -- # set +x 00:15:24.297 [2024-04-15 18:02:13.234645] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:24.297 18:02:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:24.297 18:02:13 -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:24.297 18:02:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:24.297 18:02:13 -- common/autotest_common.sh@10 -- # set +x 00:15:24.297 18:02:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:24.297 18:02:13 -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 4420 00:15:24.556 00:15:24.556 Discovery Log Number of Records 2, Generation counter 2 00:15:24.556 =====Discovery Log Entry 0====== 00:15:24.556 trtype: tcp 00:15:24.556 adrfam: ipv4 00:15:24.556 subtype: current discovery subsystem 00:15:24.556 treq: not required 00:15:24.556 portid: 0 00:15:24.556 trsvcid: 4420 00:15:24.556 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:15:24.556 traddr: 10.0.0.2 00:15:24.556 eflags: explicit discovery connections, duplicate discovery information 00:15:24.556 sectype: none 00:15:24.556 =====Discovery Log Entry 1====== 00:15:24.556 trtype: tcp 00:15:24.556 adrfam: ipv4 00:15:24.556 subtype: nvme subsystem 00:15:24.556 treq: not required 00:15:24.556 portid: 0 00:15:24.556 trsvcid: 4420 00:15:24.556 subnqn: nqn.2016-06.io.spdk:cnode1 00:15:24.556 traddr: 10.0.0.2 00:15:24.556 eflags: none 00:15:24.556 sectype: none 00:15:24.556 18:02:13 -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:15:24.556 18:02:13 -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:15:24.556 18:02:13 -- nvmf/common.sh@511 -- # local dev _ 00:15:24.556 18:02:13 -- nvmf/common.sh@513 -- # read -r dev _ 00:15:24.556 18:02:13 -- nvmf/common.sh@510 -- # nvme list 00:15:24.556 18:02:13 -- nvmf/common.sh@514 -- # [[ Node == /dev/nvme* ]] 00:15:24.556 18:02:13 -- nvmf/common.sh@513 -- # read -r dev _ 00:15:24.556 18:02:13 -- nvmf/common.sh@514 -- # [[ --------------------- == /dev/nvme* ]] 00:15:24.556 18:02:13 -- nvmf/common.sh@513 -- # read -r dev _ 00:15:24.556 18:02:13 -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:15:24.556 18:02:13 -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:25.121 18:02:13 -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:25.121 18:02:13 -- common/autotest_common.sh@1184 -- # local i=0 00:15:25.121 18:02:13 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:15:25.121 18:02:13 -- common/autotest_common.sh@1186 -- # [[ -n 2 ]] 00:15:25.121 18:02:13 -- common/autotest_common.sh@1187 -- # nvme_device_counter=2 00:15:25.121 18:02:13 -- common/autotest_common.sh@1191 -- # sleep 2 00:15:27.024 18:02:15 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:15:27.024 18:02:15 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:15:27.024 18:02:15 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:15:27.024 18:02:15 -- common/autotest_common.sh@1193 -- # nvme_devices=2 
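For reference, the target-side setup that the xtrace above walks through reduces to the following rpc.py sequence (a condensed sketch reconstructed from this trace; rpc.py and nvme paths are abbreviated, the 10.0.0.2 address, NQN, and serial are specific to this run, and the --hostnqn/--hostid arguments the run passes to nvme are omitted here):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # initiator side, as exercised by the discover/connect steps above
    nvme discover -t tcp -a 10.0.0.2 -s 4420
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420

With two namespaces exported, the connect should surface two block devices carrying the serial SPDKISFASTANDAWESOME, which is exactly what the waitforserial polling around this point is counting via lsblk.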
00:15:27.024 18:02:15 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:15:27.024 18:02:15 -- common/autotest_common.sh@1194 -- # return 0 00:15:27.284 18:02:15 -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:15:27.284 18:02:15 -- nvmf/common.sh@511 -- # local dev _ 00:15:27.284 18:02:15 -- nvmf/common.sh@513 -- # read -r dev _ 00:15:27.284 18:02:15 -- nvmf/common.sh@510 -- # nvme list 00:15:27.284 18:02:16 -- nvmf/common.sh@514 -- # [[ Node == /dev/nvme* ]] 00:15:27.284 18:02:16 -- nvmf/common.sh@513 -- # read -r dev _ 00:15:27.284 18:02:16 -- nvmf/common.sh@514 -- # [[ --------------------- == /dev/nvme* ]] 00:15:27.284 18:02:16 -- nvmf/common.sh@513 -- # read -r dev _ 00:15:27.284 18:02:16 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:27.284 18:02:16 -- nvmf/common.sh@515 -- # echo /dev/nvme0n2 00:15:27.284 18:02:16 -- nvmf/common.sh@513 -- # read -r dev _ 00:15:27.284 18:02:16 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:27.284 18:02:16 -- nvmf/common.sh@515 -- # echo /dev/nvme0n1 00:15:27.284 18:02:16 -- nvmf/common.sh@513 -- # read -r dev _ 00:15:27.284 18:02:16 -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:15:27.284 /dev/nvme0n1 ]] 00:15:27.284 18:02:16 -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:15:27.284 18:02:16 -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:15:27.284 18:02:16 -- nvmf/common.sh@511 -- # local dev _ 00:15:27.284 18:02:16 -- nvmf/common.sh@513 -- # read -r dev _ 00:15:27.284 18:02:16 -- nvmf/common.sh@510 -- # nvme list 00:15:27.543 18:02:16 -- nvmf/common.sh@514 -- # [[ Node == /dev/nvme* ]] 00:15:27.543 18:02:16 -- nvmf/common.sh@513 -- # read -r dev _ 00:15:27.543 18:02:16 -- nvmf/common.sh@514 -- # [[ --------------------- == /dev/nvme* ]] 00:15:27.543 18:02:16 -- nvmf/common.sh@513 -- # read -r dev _ 00:15:27.543 18:02:16 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:27.543 18:02:16 -- nvmf/common.sh@515 -- # echo /dev/nvme0n2 00:15:27.543 18:02:16 -- nvmf/common.sh@513 -- # read -r dev _ 00:15:27.543 18:02:16 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:27.543 18:02:16 -- nvmf/common.sh@515 -- # echo /dev/nvme0n1 00:15:27.543 18:02:16 -- nvmf/common.sh@513 -- # read -r dev _ 00:15:27.543 18:02:16 -- target/nvme_cli.sh@59 -- # nvme_num=2 00:15:27.543 18:02:16 -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:27.801 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:27.801 18:02:16 -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:27.801 18:02:16 -- common/autotest_common.sh@1205 -- # local i=0 00:15:27.801 18:02:16 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:15:27.801 18:02:16 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:27.801 18:02:16 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:15:27.801 18:02:16 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:27.801 18:02:16 -- common/autotest_common.sh@1217 -- # return 0 00:15:27.801 18:02:16 -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:15:27.801 18:02:16 -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:27.801 18:02:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:27.801 18:02:16 -- common/autotest_common.sh@10 -- # set +x 00:15:27.801 18:02:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:27.801 18:02:16 -- 
target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:15:27.801 18:02:16 -- target/nvme_cli.sh@70 -- # nvmftestfini 00:15:27.801 18:02:16 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:27.801 18:02:16 -- nvmf/common.sh@117 -- # sync 00:15:27.801 18:02:16 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:27.801 18:02:16 -- nvmf/common.sh@120 -- # set +e 00:15:27.801 18:02:16 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:27.801 18:02:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:27.801 rmmod nvme_tcp 00:15:27.801 rmmod nvme_fabrics 00:15:27.801 rmmod nvme_keyring 00:15:27.801 18:02:16 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:27.801 18:02:16 -- nvmf/common.sh@124 -- # set -e 00:15:27.801 18:02:16 -- nvmf/common.sh@125 -- # return 0 00:15:27.801 18:02:16 -- nvmf/common.sh@478 -- # '[' -n 3290970 ']' 00:15:27.801 18:02:16 -- nvmf/common.sh@479 -- # killprocess 3290970 00:15:27.801 18:02:16 -- common/autotest_common.sh@936 -- # '[' -z 3290970 ']' 00:15:27.801 18:02:16 -- common/autotest_common.sh@940 -- # kill -0 3290970 00:15:27.801 18:02:16 -- common/autotest_common.sh@941 -- # uname 00:15:27.801 18:02:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:27.801 18:02:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3290970 00:15:27.801 18:02:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:27.801 18:02:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:27.801 18:02:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3290970' 00:15:27.801 killing process with pid 3290970 00:15:27.801 18:02:16 -- common/autotest_common.sh@955 -- # kill 3290970 00:15:27.801 18:02:16 -- common/autotest_common.sh@960 -- # wait 3290970 00:15:28.059 18:02:16 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:28.059 18:02:16 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:28.059 18:02:16 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:28.059 18:02:16 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:28.059 18:02:16 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:28.059 18:02:16 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:28.059 18:02:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:28.059 18:02:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:30.598 18:02:18 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:30.598 00:15:30.598 real 0m8.697s 00:15:30.598 user 0m16.037s 00:15:30.598 sys 0m2.451s 00:15:30.598 18:02:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:30.598 18:02:18 -- common/autotest_common.sh@10 -- # set +x 00:15:30.598 ************************************ 00:15:30.598 END TEST nvmf_nvme_cli 00:15:30.598 ************************************ 00:15:30.598 18:02:18 -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:15:30.598 18:02:18 -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:30.598 18:02:18 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:30.598 18:02:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:30.598 18:02:18 -- common/autotest_common.sh@10 -- # set +x 00:15:30.598 ************************************ 00:15:30.598 START TEST nvmf_vfio_user 00:15:30.598 ************************************ 00:15:30.598 18:02:19 -- common/autotest_common.sh@1111 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:30.598 * Looking for test storage... 00:15:30.598 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:30.598 18:02:19 -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:30.598 18:02:19 -- nvmf/common.sh@7 -- # uname -s 00:15:30.598 18:02:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:30.598 18:02:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:30.598 18:02:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:30.598 18:02:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:30.598 18:02:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:30.598 18:02:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:30.598 18:02:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:30.598 18:02:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:30.598 18:02:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:30.598 18:02:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:30.598 18:02:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:30.598 18:02:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:15:30.598 18:02:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:30.598 18:02:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:30.598 18:02:19 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:30.598 18:02:19 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:30.598 18:02:19 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:30.598 18:02:19 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:30.598 18:02:19 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:30.598 18:02:19 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:30.598 18:02:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.598 18:02:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.598 18:02:19 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.598 18:02:19 -- paths/export.sh@5 -- # export PATH 00:15:30.598 18:02:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.598 18:02:19 -- nvmf/common.sh@47 -- # : 0 00:15:30.598 18:02:19 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:30.598 18:02:19 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:30.598 18:02:19 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:30.598 18:02:19 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:30.598 18:02:19 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:30.598 18:02:19 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:30.598 18:02:19 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:30.598 18:02:19 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:30.598 18:02:19 -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:30.598 18:02:19 -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:30.598 18:02:19 -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:15:30.598 18:02:19 -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:30.598 18:02:19 -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:30.598 18:02:19 -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:30.598 18:02:19 -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:15:30.598 18:02:19 -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:15:30.598 18:02:19 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:15:30.598 18:02:19 -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:15:30.598 18:02:19 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3291809 00:15:30.598 18:02:19 -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:15:30.599 18:02:19 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3291809' 00:15:30.599 Process pid: 3291809 00:15:30.599 18:02:19 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:30.599 18:02:19 -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3291809 00:15:30.599 18:02:19 -- common/autotest_common.sh@817 -- # '[' -z 3291809 ']' 00:15:30.599 18:02:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:30.599 18:02:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:30.599 18:02:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:30.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:30.599 18:02:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:30.599 18:02:19 -- common/autotest_common.sh@10 -- # set +x 00:15:30.599 [2024-04-15 18:02:19.187773] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:15:30.599 [2024-04-15 18:02:19.187863] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:30.599 EAL: No free 2048 kB hugepages reported on node 1 00:15:30.599 [2024-04-15 18:02:19.263756] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:30.599 [2024-04-15 18:02:19.357751] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:30.599 [2024-04-15 18:02:19.357815] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:30.599 [2024-04-15 18:02:19.357832] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:30.599 [2024-04-15 18:02:19.357848] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:30.599 [2024-04-15 18:02:19.357861] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:30.599 [2024-04-15 18:02:19.357928] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:30.599 [2024-04-15 18:02:19.357980] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:30.599 [2024-04-15 18:02:19.358037] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:30.599 [2024-04-15 18:02:19.358033] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:30.599 18:02:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:30.599 18:02:19 -- common/autotest_common.sh@850 -- # return 0 00:15:30.599 18:02:19 -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:31.538 18:02:20 -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:15:32.104 18:02:20 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:32.104 18:02:20 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:32.104 18:02:20 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:32.104 18:02:20 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:32.104 18:02:20 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:32.362 Malloc1 00:15:32.362 18:02:21 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:32.620 18:02:21 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:33.186 18:02:21 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:33.450 18:02:22 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:33.450 18:02:22 -- 
target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:33.450 18:02:22 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:33.707 Malloc2 00:15:33.707 18:02:22 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:33.964 18:02:22 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:34.223 18:02:23 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:34.792 18:02:23 -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:15:34.792 18:02:23 -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:15:34.792 18:02:23 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:34.792 18:02:23 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:34.792 18:02:23 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:15:34.792 18:02:23 -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:34.792 [2024-04-15 18:02:23.498405] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:15:34.792 [2024-04-15 18:02:23.498459] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3292332 ] 00:15:34.792 EAL: No free 2048 kB hugepages reported on node 1 00:15:34.792 [2024-04-15 18:02:23.535318] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:15:34.792 [2024-04-15 18:02:23.544436] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:34.792 [2024-04-15 18:02:23.544465] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fc667f34000 00:15:34.792 [2024-04-15 18:02:23.545427] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:34.792 [2024-04-15 18:02:23.546442] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:34.792 [2024-04-15 18:02:23.547431] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:34.792 [2024-04-15 18:02:23.548441] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:34.792 [2024-04-15 18:02:23.549443] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:34.792 [2024-04-15 18:02:23.550465] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 
00:15:34.792 [2024-04-15 18:02:23.551471] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:34.792 [2024-04-15 18:02:23.552475] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:34.792 [2024-04-15 18:02:23.553480] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:34.792 [2024-04-15 18:02:23.553500] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fc666cea000 00:15:34.792 [2024-04-15 18:02:23.554625] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:34.792 [2024-04-15 18:02:23.570282] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:15:34.792 [2024-04-15 18:02:23.570325] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:15:34.792 [2024-04-15 18:02:23.572590] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:34.792 [2024-04-15 18:02:23.572647] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:34.792 [2024-04-15 18:02:23.572748] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:15:34.792 [2024-04-15 18:02:23.572789] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:15:34.792 [2024-04-15 18:02:23.572801] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:15:34.792 [2024-04-15 18:02:23.573593] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:15:34.792 [2024-04-15 18:02:23.573613] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:15:34.792 [2024-04-15 18:02:23.573625] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:15:34.792 [2024-04-15 18:02:23.574600] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:34.792 [2024-04-15 18:02:23.574620] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:15:34.792 [2024-04-15 18:02:23.574634] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:15:34.792 [2024-04-15 18:02:23.575601] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:15:34.792 [2024-04-15 18:02:23.575621] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:34.792 [2024-04-15 18:02:23.576608] 
nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:15:34.792 [2024-04-15 18:02:23.576627] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:15:34.792 [2024-04-15 18:02:23.576636] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:15:34.792 [2024-04-15 18:02:23.576648] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:34.792 [2024-04-15 18:02:23.576759] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:15:34.792 [2024-04-15 18:02:23.576767] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:34.792 [2024-04-15 18:02:23.576777] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:15:34.793 [2024-04-15 18:02:23.577617] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:15:34.793 [2024-04-15 18:02:23.578619] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:15:34.793 [2024-04-15 18:02:23.579625] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:34.793 [2024-04-15 18:02:23.580617] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:34.793 [2024-04-15 18:02:23.580711] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:34.793 [2024-04-15 18:02:23.581637] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:15:34.793 [2024-04-15 18:02:23.581654] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:34.793 [2024-04-15 18:02:23.581664] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:15:34.793 [2024-04-15 18:02:23.581692] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:15:34.793 [2024-04-15 18:02:23.581707] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:15:34.793 [2024-04-15 18:02:23.581738] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:34.793 [2024-04-15 18:02:23.581749] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:34.793 [2024-04-15 18:02:23.581773] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:34.793 [2024-04-15 
18:02:23.581826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:34.793 [2024-04-15 18:02:23.581846] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:15:34.793 [2024-04-15 18:02:23.581855] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:15:34.793 [2024-04-15 18:02:23.581863] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:15:34.793 [2024-04-15 18:02:23.581872] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:34.793 [2024-04-15 18:02:23.581880] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:15:34.793 [2024-04-15 18:02:23.581889] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:15:34.793 [2024-04-15 18:02:23.581897] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:15:34.793 [2024-04-15 18:02:23.581910] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:15:34.793 [2024-04-15 18:02:23.581926] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:34.793 [2024-04-15 18:02:23.581943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:34.793 [2024-04-15 18:02:23.581965] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:34.793 [2024-04-15 18:02:23.581978] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:34.793 [2024-04-15 18:02:23.581990] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:34.793 [2024-04-15 18:02:23.582003] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:34.793 [2024-04-15 18:02:23.582011] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:15:34.793 [2024-04-15 18:02:23.582027] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:34.793 [2024-04-15 18:02:23.582056] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:34.793 [2024-04-15 18:02:23.582079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:34.793 [2024-04-15 18:02:23.582092] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:15:34.793 [2024-04-15 18:02:23.582116] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:34.793 [2024-04-15 18:02:23.582133] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:15:34.793 [2024-04-15 18:02:23.582145] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:15:34.793 [2024-04-15 18:02:23.582160] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:34.793 [2024-04-15 18:02:23.582172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:34.793 [2024-04-15 18:02:23.582228] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:15:34.793 [2024-04-15 18:02:23.582245] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:15:34.793 [2024-04-15 18:02:23.582259] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:34.793 [2024-04-15 18:02:23.582268] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:34.793 [2024-04-15 18:02:23.582279] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:34.793 [2024-04-15 18:02:23.582296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:34.793 [2024-04-15 18:02:23.582315] nvme_ctrlr.c:4557:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:15:34.793 [2024-04-15 18:02:23.582332] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:15:34.793 [2024-04-15 18:02:23.582348] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:15:34.793 [2024-04-15 18:02:23.582375] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:34.793 [2024-04-15 18:02:23.582385] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:34.793 [2024-04-15 18:02:23.582395] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:34.793 [2024-04-15 18:02:23.582418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:34.793 [2024-04-15 18:02:23.582459] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:34.793 [2024-04-15 18:02:23.582474] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:34.793 [2024-04-15 18:02:23.582486] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 
virt_addr:0x2000002fb000 len:4096 00:15:34.793 [2024-04-15 18:02:23.582495] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:34.793 [2024-04-15 18:02:23.582505] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:34.793 [2024-04-15 18:02:23.582518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:34.793 [2024-04-15 18:02:23.582534] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:34.793 [2024-04-15 18:02:23.582545] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:15:34.793 [2024-04-15 18:02:23.582564] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:15:34.793 [2024-04-15 18:02:23.582576] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:34.793 [2024-04-15 18:02:23.582585] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:15:34.793 [2024-04-15 18:02:23.582595] nvme_ctrlr.c:2990:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:15:34.793 [2024-04-15 18:02:23.582603] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:15:34.793 [2024-04-15 18:02:23.582612] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:15:34.793 [2024-04-15 18:02:23.582639] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:34.793 [2024-04-15 18:02:23.582658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:34.793 [2024-04-15 18:02:23.582678] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:34.793 [2024-04-15 18:02:23.582693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:34.793 [2024-04-15 18:02:23.582709] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:34.793 [2024-04-15 18:02:23.582720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:34.793 [2024-04-15 18:02:23.582736] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:34.793 [2024-04-15 18:02:23.582748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:34.793 [2024-04-15 18:02:23.582765] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:34.793 [2024-04-15 18:02:23.582774] 
nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:34.793 [2024-04-15 18:02:23.582781] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:34.793 [2024-04-15 18:02:23.582788] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:34.793 [2024-04-15 18:02:23.582797] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:34.793 [2024-04-15 18:02:23.582809] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:34.793 [2024-04-15 18:02:23.582817] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:34.794 [2024-04-15 18:02:23.582826] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:34.794 [2024-04-15 18:02:23.582837] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:34.794 [2024-04-15 18:02:23.582846] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:34.794 [2024-04-15 18:02:23.582855] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:34.794 [2024-04-15 18:02:23.582867] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:34.794 [2024-04-15 18:02:23.582875] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:34.794 [2024-04-15 18:02:23.582888] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:34.794 [2024-04-15 18:02:23.582900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:34.794 [2024-04-15 18:02:23.582921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:34.794 [2024-04-15 18:02:23.582936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:34.794 [2024-04-15 18:02:23.582948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:34.794 ===================================================== 00:15:34.794 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:34.794 ===================================================== 00:15:34.794 Controller Capabilities/Features 00:15:34.794 ================================ 00:15:34.794 Vendor ID: 4e58 00:15:34.794 Subsystem Vendor ID: 4e58 00:15:34.794 Serial Number: SPDK1 00:15:34.794 Model Number: SPDK bdev Controller 00:15:34.794 Firmware Version: 24.05 00:15:34.794 Recommended Arb Burst: 6 00:15:34.794 IEEE OUI Identifier: 8d 6b 50 00:15:34.794 Multi-path I/O 00:15:34.794 May have multiple subsystem ports: Yes 00:15:34.794 May have multiple controllers: Yes 00:15:34.794 Associated with SR-IOV VF: No 00:15:34.794 Max Data Transfer Size: 131072 00:15:34.794 Max Number of Namespaces: 32 00:15:34.794 Max Number of I/O Queues: 127 00:15:34.794 NVMe 
Specification Version (VS): 1.3 00:15:34.794 NVMe Specification Version (Identify): 1.3 00:15:34.794 Maximum Queue Entries: 256 00:15:34.794 Contiguous Queues Required: Yes 00:15:34.794 Arbitration Mechanisms Supported 00:15:34.794 Weighted Round Robin: Not Supported 00:15:34.794 Vendor Specific: Not Supported 00:15:34.794 Reset Timeout: 15000 ms 00:15:34.794 Doorbell Stride: 4 bytes 00:15:34.794 NVM Subsystem Reset: Not Supported 00:15:34.794 Command Sets Supported 00:15:34.794 NVM Command Set: Supported 00:15:34.794 Boot Partition: Not Supported 00:15:34.794 Memory Page Size Minimum: 4096 bytes 00:15:34.794 Memory Page Size Maximum: 4096 bytes 00:15:34.794 Persistent Memory Region: Not Supported 00:15:34.794 Optional Asynchronous Events Supported 00:15:34.794 Namespace Attribute Notices: Supported 00:15:34.794 Firmware Activation Notices: Not Supported 00:15:34.794 ANA Change Notices: Not Supported 00:15:34.794 PLE Aggregate Log Change Notices: Not Supported 00:15:34.794 LBA Status Info Alert Notices: Not Supported 00:15:34.794 EGE Aggregate Log Change Notices: Not Supported 00:15:34.794 Normal NVM Subsystem Shutdown event: Not Supported 00:15:34.794 Zone Descriptor Change Notices: Not Supported 00:15:34.794 Discovery Log Change Notices: Not Supported 00:15:34.794 Controller Attributes 00:15:34.794 128-bit Host Identifier: Supported 00:15:34.794 Non-Operational Permissive Mode: Not Supported 00:15:34.794 NVM Sets: Not Supported 00:15:34.794 Read Recovery Levels: Not Supported 00:15:34.794 Endurance Groups: Not Supported 00:15:34.794 Predictable Latency Mode: Not Supported 00:15:34.794 Traffic Based Keep ALive: Not Supported 00:15:34.794 Namespace Granularity: Not Supported 00:15:34.794 SQ Associations: Not Supported 00:15:34.794 UUID List: Not Supported 00:15:34.794 Multi-Domain Subsystem: Not Supported 00:15:34.794 Fixed Capacity Management: Not Supported 00:15:34.794 Variable Capacity Management: Not Supported 00:15:34.794 Delete Endurance Group: Not Supported 00:15:34.794 Delete NVM Set: Not Supported 00:15:34.794 Extended LBA Formats Supported: Not Supported 00:15:34.794 Flexible Data Placement Supported: Not Supported 00:15:34.794 00:15:34.794 Controller Memory Buffer Support 00:15:34.794 ================================ 00:15:34.794 Supported: No 00:15:34.794 00:15:34.794 Persistent Memory Region Support 00:15:34.794 ================================ 00:15:34.794 Supported: No 00:15:34.794 00:15:34.794 Admin Command Set Attributes 00:15:34.794 ============================ 00:15:34.794 Security Send/Receive: Not Supported 00:15:34.794 Format NVM: Not Supported 00:15:34.794 Firmware Activate/Download: Not Supported 00:15:34.794 Namespace Management: Not Supported 00:15:34.794 Device Self-Test: Not Supported 00:15:34.794 Directives: Not Supported 00:15:34.794 NVMe-MI: Not Supported 00:15:34.794 Virtualization Management: Not Supported 00:15:34.794 Doorbell Buffer Config: Not Supported 00:15:34.794 Get LBA Status Capability: Not Supported 00:15:34.794 Command & Feature Lockdown Capability: Not Supported 00:15:34.794 Abort Command Limit: 4 00:15:34.794 Async Event Request Limit: 4 00:15:34.794 Number of Firmware Slots: N/A 00:15:34.794 Firmware Slot 1 Read-Only: N/A 00:15:34.794 Firmware Activation Without Reset: N/A 00:15:34.794 Multiple Update Detection Support: N/A 00:15:34.794 Firmware Update Granularity: No Information Provided 00:15:34.794 Per-Namespace SMART Log: No 00:15:34.794 Asymmetric Namespace Access Log Page: Not Supported 00:15:34.794 Subsystem NQN: 
nqn.2019-07.io.spdk:cnode1 00:15:34.794 Command Effects Log Page: Supported 00:15:34.794 Get Log Page Extended Data: Supported 00:15:34.794 Telemetry Log Pages: Not Supported 00:15:34.794 Persistent Event Log Pages: Not Supported 00:15:34.794 Supported Log Pages Log Page: May Support 00:15:34.794 Commands Supported & Effects Log Page: Not Supported 00:15:34.794 Feature Identifiers & Effects Log Page:May Support 00:15:34.794 NVMe-MI Commands & Effects Log Page: May Support 00:15:34.794 Data Area 4 for Telemetry Log: Not Supported 00:15:34.794 Error Log Page Entries Supported: 128 00:15:34.794 Keep Alive: Supported 00:15:34.794 Keep Alive Granularity: 10000 ms 00:15:34.794 00:15:34.794 NVM Command Set Attributes 00:15:34.794 ========================== 00:15:34.794 Submission Queue Entry Size 00:15:34.794 Max: 64 00:15:34.794 Min: 64 00:15:34.794 Completion Queue Entry Size 00:15:34.794 Max: 16 00:15:34.794 Min: 16 00:15:34.794 Number of Namespaces: 32 00:15:34.794 Compare Command: Supported 00:15:34.794 Write Uncorrectable Command: Not Supported 00:15:34.794 Dataset Management Command: Supported 00:15:34.794 Write Zeroes Command: Supported 00:15:34.794 Set Features Save Field: Not Supported 00:15:34.794 Reservations: Not Supported 00:15:34.794 Timestamp: Not Supported 00:15:34.794 Copy: Supported 00:15:34.794 Volatile Write Cache: Present 00:15:34.794 Atomic Write Unit (Normal): 1 00:15:34.794 Atomic Write Unit (PFail): 1 00:15:34.794 Atomic Compare & Write Unit: 1 00:15:34.794 Fused Compare & Write: Supported 00:15:34.794 Scatter-Gather List 00:15:34.794 SGL Command Set: Supported (Dword aligned) 00:15:34.794 SGL Keyed: Not Supported 00:15:34.794 SGL Bit Bucket Descriptor: Not Supported 00:15:34.794 SGL Metadata Pointer: Not Supported 00:15:34.794 Oversized SGL: Not Supported 00:15:34.794 SGL Metadata Address: Not Supported 00:15:34.794 SGL Offset: Not Supported 00:15:34.794 Transport SGL Data Block: Not Supported 00:15:34.794 Replay Protected Memory Block: Not Supported 00:15:34.794 00:15:34.794 Firmware Slot Information 00:15:34.794 ========================= 00:15:34.794 Active slot: 1 00:15:34.794 Slot 1 Firmware Revision: 24.05 00:15:34.794 00:15:34.794 00:15:34.794 Commands Supported and Effects 00:15:34.794 ============================== 00:15:34.794 Admin Commands 00:15:34.794 -------------- 00:15:34.794 Get Log Page (02h): Supported 00:15:34.794 Identify (06h): Supported 00:15:34.795 Abort (08h): Supported 00:15:34.795 Set Features (09h): Supported 00:15:34.795 Get Features (0Ah): Supported 00:15:34.795 Asynchronous Event Request (0Ch): Supported 00:15:34.795 Keep Alive (18h): Supported 00:15:34.795 I/O Commands 00:15:34.795 ------------ 00:15:34.795 Flush (00h): Supported LBA-Change 00:15:34.795 Write (01h): Supported LBA-Change 00:15:34.795 Read (02h): Supported 00:15:34.795 Compare (05h): Supported 00:15:34.795 Write Zeroes (08h): Supported LBA-Change 00:15:34.795 Dataset Management (09h): Supported LBA-Change 00:15:34.795 Copy (19h): Supported LBA-Change 00:15:34.795 Unknown (79h): Supported LBA-Change 00:15:34.795 Unknown (7Ah): Supported 00:15:34.795 00:15:34.795 Error Log 00:15:34.795 ========= 00:15:34.795 00:15:34.795 Arbitration 00:15:34.795 =========== 00:15:34.795 Arbitration Burst: 1 00:15:34.795 00:15:34.795 Power Management 00:15:34.795 ================ 00:15:34.795 Number of Power States: 1 00:15:34.795 Current Power State: Power State #0 00:15:34.795 Power State #0: 00:15:34.795 Max Power: 0.00 W 00:15:34.795 Non-Operational State: Operational 00:15:34.795 Entry 
Latency: Not Reported 00:15:34.795 Exit Latency: Not Reported 00:15:34.795 Relative Read Throughput: 0 00:15:34.795 Relative Read Latency: 0 00:15:34.795 Relative Write Throughput: 0 00:15:34.795 Relative Write Latency: 0 00:15:34.795 Idle Power: Not Reported 00:15:34.795 Active Power: Not Reported 00:15:34.795 Non-Operational Permissive Mode: Not Supported 00:15:34.795 00:15:34.795 Health Information 00:15:34.795 ================== 00:15:34.795 Critical Warnings: 00:15:34.795 Available Spare Space: OK 00:15:34.795 Temperature: OK 00:15:34.795 Device Reliability: OK 00:15:34.795 Read Only: No 00:15:34.795 Volatile Memory Backup: OK 00:15:34.795 Current Temperature: 0 Kelvin (-273 Celsius) [2024-04-15 18:02:23.583116] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:34.795 [2024-04-15 18:02:23.583134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:34.795 [2024-04-15 18:02:23.583185] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:15:34.795 [2024-04-15 18:02:23.583205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:34.795 [2024-04-15 18:02:23.583217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:34.795 [2024-04-15 18:02:23.583227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:34.795 [2024-04-15 18:02:23.583237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:34.795 [2024-04-15 18:02:23.586070] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:34.795 [2024-04-15 18:02:23.586094] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:15:34.795 [2024-04-15 18:02:23.586659] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:34.795 [2024-04-15 18:02:23.586730] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:15:34.795 [2024-04-15 18:02:23.586744] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:15:34.795 [2024-04-15 18:02:23.587671] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:15:34.795 [2024-04-15 18:02:23.587694] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:15:34.795 [2024-04-15 18:02:23.587751] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:15:34.795 [2024-04-15 18:02:23.591069] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:34.795 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:34.795 Available Spare: 0% 00:15:34.795 Available Spare Threshold: 0% 00:15:34.795 Life Percentage Used: 0% 
00:15:34.795 Data Units Read: 0 00:15:34.795 Data Units Written: 0 00:15:34.795 Host Read Commands: 0 00:15:34.795 Host Write Commands: 0 00:15:34.795 Controller Busy Time: 0 minutes 00:15:34.795 Power Cycles: 0 00:15:34.795 Power On Hours: 0 hours 00:15:34.795 Unsafe Shutdowns: 0 00:15:34.795 Unrecoverable Media Errors: 0 00:15:34.795 Lifetime Error Log Entries: 0 00:15:34.795 Warning Temperature Time: 0 minutes 00:15:34.795 Critical Temperature Time: 0 minutes 00:15:34.795 00:15:34.795 Number of Queues 00:15:34.795 ================ 00:15:34.795 Number of I/O Submission Queues: 127 00:15:34.795 Number of I/O Completion Queues: 127 00:15:34.795 00:15:34.795 Active Namespaces 00:15:34.795 ================= 00:15:34.795 Namespace ID:1 00:15:34.795 Error Recovery Timeout: Unlimited 00:15:34.795 Command Set Identifier: NVM (00h) 00:15:34.795 Deallocate: Supported 00:15:34.795 Deallocated/Unwritten Error: Not Supported 00:15:34.795 Deallocated Read Value: Unknown 00:15:34.795 Deallocate in Write Zeroes: Not Supported 00:15:34.795 Deallocated Guard Field: 0xFFFF 00:15:34.795 Flush: Supported 00:15:34.795 Reservation: Supported 00:15:34.795 Namespace Sharing Capabilities: Multiple Controllers 00:15:34.795 Size (in LBAs): 131072 (0GiB) 00:15:34.795 Capacity (in LBAs): 131072 (0GiB) 00:15:34.795 Utilization (in LBAs): 131072 (0GiB) 00:15:34.795 NGUID: B1E8891739D34F4EAD19DBCE9FC1B417 00:15:34.795 UUID: b1e88917-39d3-4f4e-ad19-dbce9fc1b417 00:15:34.795 Thin Provisioning: Not Supported 00:15:34.795 Per-NS Atomic Units: Yes 00:15:34.795 Atomic Boundary Size (Normal): 0 00:15:34.795 Atomic Boundary Size (PFail): 0 00:15:34.795 Atomic Boundary Offset: 0 00:15:34.795 Maximum Single Source Range Length: 65535 00:15:34.795 Maximum Copy Length: 65535 00:15:34.795 Maximum Source Range Count: 1 00:15:34.795 NGUID/EUI64 Never Reused: No 00:15:34.795 Namespace Write Protected: No 00:15:34.795 Number of LBA Formats: 1 00:15:34.795 Current LBA Format: LBA Format #00 00:15:34.795 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:34.795 00:15:34.795 18:02:23 -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:34.795 EAL: No free 2048 kB hugepages reported on node 1 00:15:35.055 [2024-04-15 18:02:23.835977] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:40.325 [2024-04-15 18:02:28.858838] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:40.325 Initializing NVMe Controllers 00:15:40.325 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:40.325 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:40.325 Initialization complete. Launching workers. 
00:15:40.325 ======================================================== 00:15:40.325 Latency(us) 00:15:40.325 Device Information : IOPS MiB/s Average min max 00:15:40.325 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 33254.59 129.90 3848.39 1186.78 8289.75 00:15:40.325 ======================================================== 00:15:40.325 Total : 33254.59 129.90 3848.39 1186.78 8289.75 00:15:40.325 00:15:40.325 18:02:28 -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:40.325 EAL: No free 2048 kB hugepages reported on node 1 00:15:40.325 [2024-04-15 18:02:29.110021] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:45.616 [2024-04-15 18:02:34.150151] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:45.616 Initializing NVMe Controllers 00:15:45.616 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:45.616 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:45.617 Initialization complete. Launching workers. 00:15:45.617 ======================================================== 00:15:45.617 Latency(us) 00:15:45.617 Device Information : IOPS MiB/s Average min max 00:15:45.617 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 15954.20 62.32 8029.83 6025.31 15973.67 00:15:45.617 ======================================================== 00:15:45.617 Total : 15954.20 62.32 8029.83 6025.31 15973.67 00:15:45.617 00:15:45.617 18:02:34 -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:45.617 EAL: No free 2048 kB hugepages reported on node 1 00:15:45.617 [2024-04-15 18:02:34.366172] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:50.887 [2024-04-15 18:02:39.440358] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:50.887 Initializing NVMe Controllers 00:15:50.887 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:50.887 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:50.887 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:15:50.887 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:15:50.887 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:15:50.887 Initialization complete. Launching workers. 
00:15:50.887 Starting thread on core 2 00:15:50.887 Starting thread on core 3 00:15:50.887 Starting thread on core 1 00:15:50.887 18:02:39 -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:15:50.887 EAL: No free 2048 kB hugepages reported on node 1 00:15:50.887 [2024-04-15 18:02:39.759547] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:54.169 [2024-04-15 18:02:42.827098] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:54.169 Initializing NVMe Controllers 00:15:54.169 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:54.169 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:54.169 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:15:54.169 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:15:54.169 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:15:54.169 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:15:54.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:54.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:54.169 Initialization complete. Launching workers. 00:15:54.169 Starting thread on core 1 with urgent priority queue 00:15:54.169 Starting thread on core 2 with urgent priority queue 00:15:54.169 Starting thread on core 3 with urgent priority queue 00:15:54.169 Starting thread on core 0 with urgent priority queue 00:15:54.169 SPDK bdev Controller (SPDK1 ) core 0: 3138.33 IO/s 31.86 secs/100000 ios 00:15:54.169 SPDK bdev Controller (SPDK1 ) core 1: 3384.00 IO/s 29.55 secs/100000 ios 00:15:54.169 SPDK bdev Controller (SPDK1 ) core 2: 3383.33 IO/s 29.56 secs/100000 ios 00:15:54.169 SPDK bdev Controller (SPDK1 ) core 3: 3492.67 IO/s 28.63 secs/100000 ios 00:15:54.169 ======================================================== 00:15:54.169 00:15:54.169 18:02:42 -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:54.169 EAL: No free 2048 kB hugepages reported on node 1 00:15:54.427 [2024-04-15 18:02:43.151573] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:54.427 [2024-04-15 18:02:43.185152] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:54.427 Initializing NVMe Controllers 00:15:54.427 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:54.427 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:54.427 Namespace ID: 1 size: 0GB 00:15:54.427 Initialization complete. 00:15:54.427 INFO: using host memory buffer for IO 00:15:54.427 Hello world! 
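The five runs above exercise the same vfio-user controller end to end: spdk_nvme_perf drives 4 KiB reads and then writes at queue depth 128 (the write pass lands at roughly half the IOPS and about twice the latency of the read pass), while the reconnect, arbitration and hello_world examples cover multi-core mixed I/O, priority queueing and a basic submit/complete round trip. Below is a minimal sketch of replaying the two perf sweeps by hand; the $SPDK shorthand is illustrative, the flags are exactly those logged above, and a target must already be serving nqn.2019-07.io.spdk:cnode1 on the logged socket path.

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # tree as checked out by this job
  TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'
  # queue depth 128 (-q), 4096-byte I/O (-o), 5 seconds (-t), one worker on core 1 (-c 0x2):
  "$SPDK"/build/bin/spdk_nvme_perf -r "$TRID" -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
  # identical sweep with a write workload, for the read/write comparison shown above:
  "$SPDK"/build/bin/spdk_nvme_perf -r "$TRID" -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2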
00:15:54.427 18:02:43 -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:54.427 EAL: No free 2048 kB hugepages reported on node 1 00:15:54.686 [2024-04-15 18:02:43.484545] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:55.621 Initializing NVMe Controllers 00:15:55.621 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:55.621 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:55.621 Initialization complete. Launching workers. 00:15:55.621 submit (in ns) avg, min, max = 7946.3, 3473.3, 4017975.6 00:15:55.621 complete (in ns) avg, min, max = 27293.6, 2043.3, 4015462.2 00:15:55.621 00:15:55.621 Submit histogram 00:15:55.621 ================ 00:15:55.621 Range in us Cumulative Count 00:15:55.621 3.461 - 3.484: 0.0153% ( 2) 00:15:55.621 3.484 - 3.508: 0.0764% ( 8) 00:15:55.621 3.508 - 3.532: 0.5731% ( 65) 00:15:55.621 3.532 - 3.556: 2.2925% ( 225) 00:15:55.621 3.556 - 3.579: 7.0686% ( 625) 00:15:55.621 3.579 - 3.603: 13.0521% ( 783) 00:15:55.621 3.603 - 3.627: 21.7331% ( 1136) 00:15:55.621 3.627 - 3.650: 32.0648% ( 1352) 00:15:55.621 3.650 - 3.674: 42.1825% ( 1324) 00:15:55.621 3.674 - 3.698: 49.6714% ( 980) 00:15:55.621 3.698 - 3.721: 54.7608% ( 666) 00:15:55.621 3.721 - 3.745: 58.6428% ( 508) 00:15:55.621 3.745 - 3.769: 61.6842% ( 398) 00:15:55.621 3.769 - 3.793: 65.3676% ( 482) 00:15:55.621 3.793 - 3.816: 68.3708% ( 393) 00:15:55.621 3.816 - 3.840: 71.7790% ( 446) 00:15:55.621 3.840 - 3.864: 75.9208% ( 542) 00:15:55.621 3.864 - 3.887: 79.9098% ( 522) 00:15:55.621 3.887 - 3.911: 83.7536% ( 503) 00:15:55.621 3.911 - 3.935: 86.4282% ( 350) 00:15:55.621 3.935 - 3.959: 88.0177% ( 208) 00:15:55.621 3.959 - 3.982: 89.5766% ( 204) 00:15:55.621 3.982 - 4.006: 90.8299% ( 164) 00:15:55.621 4.006 - 4.030: 91.8310% ( 131) 00:15:55.621 4.030 - 4.053: 92.5722% ( 97) 00:15:55.621 4.053 - 4.077: 93.3670% ( 104) 00:15:55.621 4.077 - 4.101: 94.2381% ( 114) 00:15:55.621 4.101 - 4.124: 94.9412% ( 92) 00:15:55.621 4.124 - 4.148: 95.3767% ( 57) 00:15:55.621 4.148 - 4.172: 95.7359% ( 47) 00:15:55.621 4.172 - 4.196: 95.9499% ( 28) 00:15:55.621 4.196 - 4.219: 96.0874% ( 18) 00:15:55.621 4.219 - 4.243: 96.2097% ( 16) 00:15:55.621 4.243 - 4.267: 96.3625% ( 20) 00:15:55.621 4.267 - 4.290: 96.4542% ( 12) 00:15:55.621 4.290 - 4.314: 96.5612% ( 14) 00:15:55.621 4.314 - 4.338: 96.6376% ( 10) 00:15:55.621 4.338 - 4.361: 96.7446% ( 14) 00:15:55.621 4.361 - 4.385: 96.8210% ( 10) 00:15:55.621 4.385 - 4.409: 96.8974% ( 10) 00:15:55.621 4.409 - 4.433: 96.9280% ( 4) 00:15:55.621 4.433 - 4.456: 96.9433% ( 2) 00:15:55.621 4.480 - 4.504: 96.9662% ( 3) 00:15:55.621 4.504 - 4.527: 96.9891% ( 3) 00:15:55.621 4.527 - 4.551: 97.0044% ( 2) 00:15:55.621 4.575 - 4.599: 97.0121% ( 1) 00:15:55.621 4.622 - 4.646: 97.0197% ( 1) 00:15:55.621 4.646 - 4.670: 97.0426% ( 3) 00:15:55.621 4.670 - 4.693: 97.1038% ( 8) 00:15:55.621 4.693 - 4.717: 97.1955% ( 12) 00:15:55.621 4.717 - 4.741: 97.2260% ( 4) 00:15:55.621 4.741 - 4.764: 97.2719% ( 6) 00:15:55.621 4.764 - 4.788: 97.3177% ( 6) 00:15:55.621 4.788 - 4.812: 97.4018% ( 11) 00:15:55.621 4.812 - 4.836: 97.4247% ( 3) 00:15:55.621 4.836 - 4.859: 97.5394% ( 15) 00:15:55.621 4.859 - 4.883: 97.5776% ( 5) 00:15:55.621 4.883 - 4.907: 97.6387% ( 8) 00:15:55.621 4.907 - 4.930: 97.6845% ( 6) 00:15:55.621 4.930 - 4.954: 97.6922% ( 1) 00:15:55.621 
4.954 - 4.978: 97.7533% ( 8) 00:15:55.621 4.978 - 5.001: 97.7839% ( 4) 00:15:55.621 5.001 - 5.025: 97.8068% ( 3) 00:15:55.621 5.025 - 5.049: 97.8221% ( 2) 00:15:55.621 5.049 - 5.073: 97.8374% ( 2) 00:15:55.621 5.073 - 5.096: 97.8603% ( 3) 00:15:55.621 5.096 - 5.120: 97.8680% ( 1) 00:15:55.621 5.144 - 5.167: 97.8909% ( 3) 00:15:55.621 5.167 - 5.191: 97.8985% ( 1) 00:15:55.621 5.191 - 5.215: 97.9138% ( 2) 00:15:55.621 5.215 - 5.239: 97.9214% ( 1) 00:15:55.621 5.239 - 5.262: 97.9291% ( 1) 00:15:55.621 5.262 - 5.286: 97.9367% ( 1) 00:15:55.621 5.286 - 5.310: 97.9444% ( 1) 00:15:55.621 5.310 - 5.333: 97.9597% ( 2) 00:15:55.621 5.357 - 5.381: 97.9749% ( 2) 00:15:55.621 5.381 - 5.404: 97.9902% ( 2) 00:15:55.621 5.404 - 5.428: 97.9979% ( 1) 00:15:55.621 5.902 - 5.926: 98.0055% ( 1) 00:15:55.621 6.021 - 6.044: 98.0131% ( 1) 00:15:55.621 6.400 - 6.447: 98.0208% ( 1) 00:15:55.621 6.447 - 6.495: 98.0284% ( 1) 00:15:55.621 6.921 - 6.969: 98.0437% ( 2) 00:15:55.621 7.206 - 7.253: 98.0514% ( 1) 00:15:55.621 7.253 - 7.301: 98.0666% ( 2) 00:15:55.621 7.301 - 7.348: 98.0819% ( 2) 00:15:55.622 7.348 - 7.396: 98.0972% ( 2) 00:15:55.622 7.490 - 7.538: 98.1048% ( 1) 00:15:55.622 7.538 - 7.585: 98.1125% ( 1) 00:15:55.622 7.680 - 7.727: 98.1278% ( 2) 00:15:55.622 7.822 - 7.870: 98.1431% ( 2) 00:15:55.622 7.870 - 7.917: 98.1507% ( 1) 00:15:55.622 7.964 - 8.012: 98.1660% ( 2) 00:15:55.622 8.012 - 8.059: 98.1736% ( 1) 00:15:55.622 8.107 - 8.154: 98.1813% ( 1) 00:15:55.622 8.154 - 8.201: 98.1889% ( 1) 00:15:55.622 8.201 - 8.249: 98.2118% ( 3) 00:15:55.622 8.249 - 8.296: 98.2195% ( 1) 00:15:55.622 8.296 - 8.344: 98.2348% ( 2) 00:15:55.622 8.439 - 8.486: 98.2577% ( 3) 00:15:55.622 8.486 - 8.533: 98.2730% ( 2) 00:15:55.622 8.533 - 8.581: 98.2806% ( 1) 00:15:55.622 8.581 - 8.628: 98.2882% ( 1) 00:15:55.622 8.676 - 8.723: 98.2959% ( 1) 00:15:55.622 8.723 - 8.770: 98.3112% ( 2) 00:15:55.622 8.770 - 8.818: 98.3188% ( 1) 00:15:55.622 8.865 - 8.913: 98.3341% ( 2) 00:15:55.622 8.913 - 8.960: 98.3417% ( 1) 00:15:55.622 8.960 - 9.007: 98.3494% ( 1) 00:15:55.622 9.007 - 9.055: 98.3570% ( 1) 00:15:55.622 9.102 - 9.150: 98.3647% ( 1) 00:15:55.622 9.244 - 9.292: 98.3723% ( 1) 00:15:55.622 9.292 - 9.339: 98.3799% ( 1) 00:15:55.622 9.529 - 9.576: 98.3876% ( 1) 00:15:55.622 9.861 - 9.908: 98.3952% ( 1) 00:15:55.622 9.956 - 10.003: 98.4029% ( 1) 00:15:55.622 10.003 - 10.050: 98.4105% ( 1) 00:15:55.622 10.050 - 10.098: 98.4182% ( 1) 00:15:55.622 10.098 - 10.145: 98.4258% ( 1) 00:15:55.622 10.193 - 10.240: 98.4487% ( 3) 00:15:55.622 10.430 - 10.477: 98.4564% ( 1) 00:15:55.622 10.477 - 10.524: 98.4640% ( 1) 00:15:55.622 10.667 - 10.714: 98.4716% ( 1) 00:15:55.622 10.809 - 10.856: 98.4869% ( 2) 00:15:55.622 10.904 - 10.951: 98.4946% ( 1) 00:15:55.622 10.951 - 10.999: 98.5022% ( 1) 00:15:55.622 10.999 - 11.046: 98.5099% ( 1) 00:15:55.622 11.093 - 11.141: 98.5175% ( 1) 00:15:55.622 11.188 - 11.236: 98.5251% ( 1) 00:15:55.622 11.283 - 11.330: 98.5328% ( 1) 00:15:55.622 11.425 - 11.473: 98.5404% ( 1) 00:15:55.622 11.473 - 11.520: 98.5481% ( 1) 00:15:55.622 11.757 - 11.804: 98.5557% ( 1) 00:15:55.622 12.041 - 12.089: 98.5634% ( 1) 00:15:55.622 12.231 - 12.326: 98.5786% ( 2) 00:15:55.622 12.326 - 12.421: 98.5863% ( 1) 00:15:55.622 12.610 - 12.705: 98.5939% ( 1) 00:15:55.622 12.705 - 12.800: 98.6016% ( 1) 00:15:55.622 12.895 - 12.990: 98.6092% ( 1) 00:15:55.622 12.990 - 13.084: 98.6168% ( 1) 00:15:55.622 13.084 - 13.179: 98.6245% ( 1) 00:15:55.622 13.274 - 13.369: 98.6321% ( 1) 00:15:55.622 13.369 - 13.464: 98.6398% ( 1) 00:15:55.622 13.464 - 
13.559: 98.6474% ( 1) 00:15:55.622 13.559 - 13.653: 98.6551% ( 1) 00:15:55.622 13.653 - 13.748: 98.6627% ( 1) 00:15:55.622 13.748 - 13.843: 98.6703% ( 1) 00:15:55.622 14.033 - 14.127: 98.6780% ( 1) 00:15:55.622 14.412 - 14.507: 98.6933% ( 2) 00:15:55.622 14.601 - 14.696: 98.7085% ( 2) 00:15:55.622 14.696 - 14.791: 98.7238% ( 2) 00:15:55.622 14.791 - 14.886: 98.7315% ( 1) 00:15:55.622 15.170 - 15.265: 98.7391% ( 1) 00:15:55.622 15.455 - 15.550: 98.7468% ( 1) 00:15:55.622 15.550 - 15.644: 98.7544% ( 1) 00:15:55.622 17.161 - 17.256: 98.7620% ( 1) 00:15:55.622 17.256 - 17.351: 98.7773% ( 2) 00:15:55.622 17.351 - 17.446: 98.8002% ( 3) 00:15:55.622 17.446 - 17.541: 98.8232% ( 3) 00:15:55.622 17.541 - 17.636: 98.8537% ( 4) 00:15:55.622 17.636 - 17.730: 98.9378% ( 11) 00:15:55.622 17.730 - 17.825: 98.9989% ( 8) 00:15:55.622 17.825 - 17.920: 99.0601% ( 8) 00:15:55.622 17.920 - 18.015: 99.1288% ( 9) 00:15:55.622 18.015 - 18.110: 99.1747% ( 6) 00:15:55.622 18.110 - 18.204: 99.2587% ( 11) 00:15:55.622 18.204 - 18.299: 99.3657% ( 14) 00:15:55.622 18.299 - 18.394: 99.4345% ( 9) 00:15:55.622 18.394 - 18.489: 99.4956% ( 8) 00:15:55.622 18.489 - 18.584: 99.5797% ( 11) 00:15:55.622 18.584 - 18.679: 99.6485% ( 9) 00:15:55.622 18.679 - 18.773: 99.6790% ( 4) 00:15:55.622 18.773 - 18.868: 99.7249% ( 6) 00:15:55.622 18.868 - 18.963: 99.7478% ( 3) 00:15:55.622 18.963 - 19.058: 99.7707% ( 3) 00:15:55.622 19.058 - 19.153: 99.8013% ( 4) 00:15:55.622 19.342 - 19.437: 99.8166% ( 2) 00:15:55.622 19.437 - 19.532: 99.8242% ( 1) 00:15:55.622 19.816 - 19.911: 99.8319% ( 1) 00:15:55.622 20.006 - 20.101: 99.8395% ( 1) 00:15:55.622 21.333 - 21.428: 99.8472% ( 1) 00:15:55.622 21.807 - 21.902: 99.8548% ( 1) 00:15:55.622 22.187 - 22.281: 99.8624% ( 1) 00:15:55.622 23.324 - 23.419: 99.8701% ( 1) 00:15:55.622 23.514 - 23.609: 99.8777% ( 1) 00:15:55.622 24.273 - 24.462: 99.8930% ( 2) 00:15:55.622 26.169 - 26.359: 99.9007% ( 1) 00:15:55.622 3980.705 - 4004.978: 99.9924% ( 12) 00:15:55.622 4004.978 - 4029.250: 100.0000% ( 1) 00:15:55.622 00:15:55.622 Complete histogram 00:15:55.622 ================== 00:15:55.622 Range in us Cumulative Count 00:15:55.622 2.039 - 2.050: 0.7336% ( 96) 00:15:55.622 2.050 - 2.062: 7.4813% ( 883) 00:15:55.622 2.062 - 2.074: 9.8120% ( 305) 00:15:55.622 2.074 - 2.086: 26.6850% ( 2208) 00:15:55.622 2.086 - 2.098: 52.7205% ( 3407) 00:15:55.622 2.098 - 2.110: 58.1155% ( 706) 00:15:55.622 2.110 - 2.121: 62.1275% ( 525) 00:15:55.622 2.121 - 2.133: 64.7409% ( 342) 00:15:55.622 2.133 - 2.145: 65.8490% ( 145) 00:15:55.622 2.145 - 2.157: 73.6359% ( 1019) 00:15:55.622 2.157 - 2.169: 80.1773% ( 856) 00:15:55.622 2.169 - 2.181: 81.5528% ( 180) 00:15:55.622 2.181 - 2.193: 82.7602% ( 158) 00:15:55.622 2.193 - 2.204: 84.1969% ( 188) 00:15:55.622 2.204 - 2.216: 85.1215% ( 121) 00:15:55.622 2.216 - 2.228: 88.3158% ( 418) 00:15:55.622 2.228 - 2.240: 91.8921% ( 468) 00:15:55.622 2.240 - 2.252: 93.3287% ( 188) 00:15:55.622 2.252 - 2.264: 94.0012% ( 88) 00:15:55.622 2.264 - 2.276: 94.4674% ( 61) 00:15:55.622 2.276 - 2.287: 94.7578% ( 38) 00:15:55.622 2.287 - 2.299: 94.9106% ( 20) 00:15:55.622 2.299 - 2.311: 95.3080% ( 52) 00:15:55.622 2.311 - 2.323: 95.7359% ( 56) 00:15:55.622 2.323 - 2.335: 95.9040% ( 22) 00:15:55.622 2.335 - 2.347: 95.9652% ( 8) 00:15:55.622 2.347 - 2.359: 96.0339% ( 9) 00:15:55.622 2.359 - 2.370: 96.1409% ( 14) 00:15:55.622 2.370 - 2.382: 96.2861% ( 19) 00:15:55.622 2.382 - 2.394: 96.4924% ( 27) 00:15:55.622 2.394 - 2.406: 96.7523% ( 34) 00:15:55.622 2.406 - 2.418: 96.9051% ( 20) 00:15:55.622 2.418 - 2.430: 
97.0579% ( 20) 00:15:55.622 2.430 - 2.441: 97.2260% ( 22) 00:15:55.622 2.441 - 2.453: 97.4094% ( 24) 00:15:55.622 2.453 - 2.465: 97.5317% ( 16) 00:15:55.622 2.465 - 2.477: 97.6463% ( 15) 00:15:55.622 2.477 - 2.489: 97.8297% ( 24) 00:15:55.622 2.489 - 2.501: 97.9597% ( 17) 00:15:55.622 2.501 - 2.513: 98.0284% ( 9) 00:15:55.622 2.513 - 2.524: 98.1048% ( 10) 00:15:55.622 2.524 - 2.536: 98.1583% ( 7) 00:15:55.622 2.536 - 2.548: 98.2042% ( 6) 00:15:55.622 2.548 - 2.560: 98.2348% ( 4) 00:15:55.622 2.560 - 2.572: 98.2577% ( 3) 00:15:55.622 2.572 - 2.584: 98.2730% ( 2) 00:15:55.622 2.584 - 2.596: 98.2959% ( 3) 00:15:55.622 2.631 - 2.643: 98.3035% ( 1) 00:15:55.622 2.643 - 2.655: 98.3112% ( 1) 00:15:55.622 2.679 - 2.690: 98.3188% ( 1) 00:15:55.622 2.690 - 2.702: 98.3265% ( 1) 00:15:55.622 2.761 - 2.773: 98.3341% ( 1) 00:15:55.622 2.773 - 2.785: 98.3417% ( 1) 00:15:55.622 2.880 - 2.892: 98.3494% ( 1) 00:15:55.622 2.939 - 2.951: 98.3570% ( 1) 00:15:55.622 3.034 - 3.058: 98.3647% ( 1) 00:15:55.622 3.129 - 3.153: 98.3723% ( 1) 00:15:55.622 3.271 - 3.295: 98.3799% ( 1) 00:15:55.622 3.366 - 3.390: 98.4029% ( 3) 00:15:55.622 3.390 - 3.413: 98.4105% ( 1) 00:15:55.622 3.413 - 3.437: 98.4411% ( 4) 00:15:55.622 3.437 - 3.461: 98.4487% ( 1) 00:15:55.622 3.461 - 3.484: 98.4640% ( 2) 00:15:55.622 [2024-04-15 18:02:44.506941] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:55.622 3.484 - 3.508: 98.4716% ( 1) 00:15:55.622 3.532 - 3.556: 98.4793% ( 1) 00:15:55.622 3.556 - 3.579: 98.4946% ( 2) 00:15:55.622 3.579 - 3.603: 98.5175% ( 3) 00:15:55.622 3.627 - 3.650: 98.5404% ( 3) 00:15:55.622 3.650 - 3.674: 98.5481% ( 1) 00:15:55.622 3.793 - 3.816: 98.5557% ( 1) 00:15:55.622 3.840 - 3.864: 98.5634% ( 1) 00:15:55.622 3.887 - 3.911: 98.5710% ( 1) 00:15:55.622 4.172 - 4.196: 98.5786% ( 1) 00:15:55.622 5.452 - 5.476: 98.5863% ( 1) 00:15:55.622 5.641 - 5.665: 98.5939% ( 1) 00:15:55.622 5.831 - 5.855: 98.6016% ( 1) 00:15:55.622 6.021 - 6.044: 98.6092% ( 1) 00:15:55.622 6.068 - 6.116: 98.6168% ( 1) 00:15:55.622 6.116 - 6.163: 98.6245% ( 1) 00:15:55.622 6.210 - 6.258: 98.6321% ( 1) 00:15:55.622 6.305 - 6.353: 98.6551% ( 3) 00:15:55.622 6.400 - 6.447: 98.6627% ( 1) 00:15:55.622 6.447 - 6.495: 98.6703% ( 1) 00:15:55.622 6.542 - 6.590: 98.6780% ( 1) 00:15:55.622 6.827 - 6.874: 98.6856% ( 1) 00:15:55.622 6.874 - 6.921: 98.6933% ( 1) 00:15:55.622 6.969 - 7.016: 98.7009% ( 1) 00:15:55.622 7.064 - 7.111: 98.7085% ( 1) 00:15:55.622 7.111 - 7.159: 98.7162% ( 1) 00:15:55.622 7.206 - 7.253: 98.7238% ( 1) 00:15:55.622 7.727 - 7.775: 98.7391% ( 2) 00:15:55.622 7.870 - 7.917: 98.7468% ( 1) 00:15:55.622 8.154 - 8.201: 98.7544% ( 1) 00:15:55.622 8.391 - 8.439: 98.7620% ( 1) 00:15:55.622 8.723 - 8.770: 98.7697% ( 1) 00:15:55.622 9.150 - 9.197: 98.7773% ( 1) 00:15:55.622 9.387 - 9.434: 98.7850% ( 1) 00:15:55.622 13.179 - 13.274: 98.7926% ( 1) 00:15:55.622 15.455 - 15.550: 98.8079% ( 2) 00:15:55.622 15.644 - 15.739: 98.8232% ( 2) 00:15:55.622 15.739 - 15.834: 98.8308% ( 1) 00:15:55.622 15.834 - 15.929: 98.8385% ( 1) 00:15:55.622 15.929 - 16.024: 98.8614% ( 3) 00:15:55.622 16.024 - 16.119: 98.8919% ( 4) 00:15:55.622 16.119 - 16.213: 98.9302% ( 5) 00:15:55.622 16.213 - 16.308: 98.9684% ( 5) 00:15:55.622 16.308 - 16.403: 99.0066% ( 5) 00:15:55.622 16.403 - 16.498: 99.0524% ( 6) 00:15:55.622 16.498 - 16.593: 99.1136% ( 8) 00:15:55.622 16.593 - 16.687: 99.1594% ( 6) 00:15:55.622 16.687 - 16.782: 99.2053% ( 6) 00:15:55.622 16.782 - 16.877: 99.2358% ( 4) 00:15:55.622 16.877 - 16.972: 99.2817%
( 6) 00:15:55.622 16.972 - 17.067: 99.2893% ( 1) 00:15:55.622 17.161 - 17.256: 99.2970% ( 1) 00:15:55.622 17.351 - 17.446: 99.3122% ( 2) 00:15:55.622 17.636 - 17.730: 99.3199% ( 1) 00:15:55.622 17.730 - 17.825: 99.3275% ( 1) 00:15:55.622 17.825 - 17.920: 99.3352% ( 1) 00:15:55.622 17.920 - 18.015: 99.3428% ( 1) 00:15:55.622 18.394 - 18.489: 99.3505% ( 1) 00:15:55.622 19.532 - 19.627: 99.3581% ( 1) 00:15:55.622 20.101 - 20.196: 99.3657% ( 1) 00:15:55.622 122.121 - 122.880: 99.3734% ( 1) 00:15:55.622 3980.705 - 4004.978: 99.9465% ( 75) 00:15:55.622 4004.978 - 4029.250: 100.0000% ( 7) 00:15:55.622 00:15:55.622 18:02:44 -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:15:55.622 18:02:44 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:55.622 18:02:44 -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:15:55.622 18:02:44 -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:15:55.622 18:02:44 -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:56.188 [2024-04-15 18:02:45.047491] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:15:56.188 [ 00:15:56.188 { 00:15:56.188 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:56.188 "subtype": "Discovery", 00:15:56.188 "listen_addresses": [], 00:15:56.188 "allow_any_host": true, 00:15:56.188 "hosts": [] 00:15:56.188 }, 00:15:56.188 { 00:15:56.188 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:56.188 "subtype": "NVMe", 00:15:56.188 "listen_addresses": [ 00:15:56.188 { 00:15:56.188 "transport": "VFIOUSER", 00:15:56.188 "trtype": "VFIOUSER", 00:15:56.188 "adrfam": "IPv4", 00:15:56.188 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:56.188 "trsvcid": "0" 00:15:56.188 } 00:15:56.188 ], 00:15:56.188 "allow_any_host": true, 00:15:56.188 "hosts": [], 00:15:56.188 "serial_number": "SPDK1", 00:15:56.188 "model_number": "SPDK bdev Controller", 00:15:56.188 "max_namespaces": 32, 00:15:56.188 "min_cntlid": 1, 00:15:56.188 "max_cntlid": 65519, 00:15:56.188 "namespaces": [ 00:15:56.188 { 00:15:56.188 "nsid": 1, 00:15:56.188 "bdev_name": "Malloc1", 00:15:56.188 "name": "Malloc1", 00:15:56.188 "nguid": "B1E8891739D34F4EAD19DBCE9FC1B417", 00:15:56.188 "uuid": "b1e88917-39d3-4f4e-ad19-dbce9fc1b417" 00:15:56.188 } 00:15:56.188 ] 00:15:56.188 }, 00:15:56.188 { 00:15:56.188 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:56.188 "subtype": "NVMe", 00:15:56.188 "listen_addresses": [ 00:15:56.188 { 00:15:56.188 "transport": "VFIOUSER", 00:15:56.188 "trtype": "VFIOUSER", 00:15:56.188 "adrfam": "IPv4", 00:15:56.188 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:56.188 "trsvcid": "0" 00:15:56.188 } 00:15:56.188 ], 00:15:56.188 "allow_any_host": true, 00:15:56.188 "hosts": [], 00:15:56.188 "serial_number": "SPDK2", 00:15:56.188 "model_number": "SPDK bdev Controller", 00:15:56.188 "max_namespaces": 32, 00:15:56.188 "min_cntlid": 1, 00:15:56.188 "max_cntlid": 65519, 00:15:56.188 "namespaces": [ 00:15:56.188 { 00:15:56.188 "nsid": 1, 00:15:56.188 "bdev_name": "Malloc2", 00:15:56.188 "name": "Malloc2", 00:15:56.188 "nguid": "DB8831BB3C944E26A3836AFE4988445B", 00:15:56.188 "uuid": "db8831bb-3c94-4e26-a383-6afe4988445b" 00:15:56.188 } 00:15:56.188 ] 00:15:56.188 } 00:15:56.188 ] 00:15:56.188 18:02:45 -- 
target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:56.188 18:02:45 -- target/nvmf_vfio_user.sh@34 -- # aerpid=3294844 00:15:56.188 18:02:45 -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:15:56.188 18:02:45 -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:56.188 18:02:45 -- common/autotest_common.sh@1251 -- # local i=0 00:15:56.188 18:02:45 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:56.188 18:02:45 -- common/autotest_common.sh@1258 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:56.188 18:02:45 -- common/autotest_common.sh@1262 -- # return 0 00:15:56.188 18:02:45 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:56.188 18:02:45 -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:15:56.188 EAL: No free 2048 kB hugepages reported on node 1 00:15:56.446 [2024-04-15 18:02:45.225396] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:56.446 Malloc3 00:15:56.705 18:02:45 -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:15:56.964 [2024-04-15 18:02:45.674728] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:56.964 18:02:45 -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:56.964 Asynchronous Event Request test 00:15:56.964 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:56.964 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:56.964 Registering asynchronous event callbacks... 00:15:56.964 Starting namespace attribute notice tests for all controllers... 00:15:56.964 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:56.964 aer_cb - Changed Namespace 00:15:56.964 Cleaning up... 
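What the Asynchronous Event Request test above does: the aer tool connects, arms an AER and touches /tmp/aer_touch_file once it is ready; the script then attaches a second namespace over RPC, and the target completes the armed AER as a Namespace Attribute Notice (the "aer_cb for log page 4, aen_event_type: 0x02" line). A sketch of issuing the same trigger by hand, with the rpc.py calls exactly as logged (the $RPC shorthand is illustrative):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # as invoked by this job
  # back the new namespace with a 64 MiB malloc bdev of 512-byte blocks:
  "$RPC" bdev_malloc_create 64 512 --name Malloc3
  # attach it to cnode1 as NSID 2; this is what fires the namespace-change AEN:
  "$RPC" nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2
  # the refreshed listing (printed below) should now show nsid 2 backed by Malloc3:
  "$RPC" nvmf_get_subsystems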
00:15:57.534 [ 00:15:57.534 { 00:15:57.534 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:57.534 "subtype": "Discovery", 00:15:57.534 "listen_addresses": [], 00:15:57.534 "allow_any_host": true, 00:15:57.534 "hosts": [] 00:15:57.534 }, 00:15:57.534 { 00:15:57.534 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:57.534 "subtype": "NVMe", 00:15:57.534 "listen_addresses": [ 00:15:57.534 { 00:15:57.534 "transport": "VFIOUSER", 00:15:57.534 "trtype": "VFIOUSER", 00:15:57.534 "adrfam": "IPv4", 00:15:57.534 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:57.534 "trsvcid": "0" 00:15:57.534 } 00:15:57.534 ], 00:15:57.534 "allow_any_host": true, 00:15:57.534 "hosts": [], 00:15:57.534 "serial_number": "SPDK1", 00:15:57.534 "model_number": "SPDK bdev Controller", 00:15:57.534 "max_namespaces": 32, 00:15:57.534 "min_cntlid": 1, 00:15:57.534 "max_cntlid": 65519, 00:15:57.534 "namespaces": [ 00:15:57.534 { 00:15:57.534 "nsid": 1, 00:15:57.534 "bdev_name": "Malloc1", 00:15:57.534 "name": "Malloc1", 00:15:57.534 "nguid": "B1E8891739D34F4EAD19DBCE9FC1B417", 00:15:57.534 "uuid": "b1e88917-39d3-4f4e-ad19-dbce9fc1b417" 00:15:57.534 }, 00:15:57.534 { 00:15:57.534 "nsid": 2, 00:15:57.534 "bdev_name": "Malloc3", 00:15:57.534 "name": "Malloc3", 00:15:57.534 "nguid": "AA8D346381E5432690061E7F029E9345", 00:15:57.534 "uuid": "aa8d3463-81e5-4326-9006-1e7f029e9345" 00:15:57.534 } 00:15:57.534 ] 00:15:57.534 }, 00:15:57.534 { 00:15:57.534 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:57.534 "subtype": "NVMe", 00:15:57.534 "listen_addresses": [ 00:15:57.534 { 00:15:57.534 "transport": "VFIOUSER", 00:15:57.534 "trtype": "VFIOUSER", 00:15:57.534 "adrfam": "IPv4", 00:15:57.534 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:57.534 "trsvcid": "0" 00:15:57.534 } 00:15:57.534 ], 00:15:57.534 "allow_any_host": true, 00:15:57.534 "hosts": [], 00:15:57.534 "serial_number": "SPDK2", 00:15:57.534 "model_number": "SPDK bdev Controller", 00:15:57.534 "max_namespaces": 32, 00:15:57.534 "min_cntlid": 1, 00:15:57.534 "max_cntlid": 65519, 00:15:57.534 "namespaces": [ 00:15:57.534 { 00:15:57.534 "nsid": 1, 00:15:57.534 "bdev_name": "Malloc2", 00:15:57.534 "name": "Malloc2", 00:15:57.534 "nguid": "DB8831BB3C944E26A3836AFE4988445B", 00:15:57.534 "uuid": "db8831bb-3c94-4e26-a383-6afe4988445b" 00:15:57.534 } 00:15:57.534 ] 00:15:57.534 } 00:15:57.534 ] 00:15:57.534 18:02:46 -- target/nvmf_vfio_user.sh@44 -- # wait 3294844 00:15:57.534 18:02:46 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:57.534 18:02:46 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:57.534 18:02:46 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:15:57.534 18:02:46 -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:57.534 [2024-04-15 18:02:46.290453] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
00:15:57.534 [2024-04-15 18:02:46.290496] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3294988 ] 00:15:57.534 EAL: No free 2048 kB hugepages reported on node 1 00:15:57.534 [2024-04-15 18:02:46.326182] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:15:57.534 [2024-04-15 18:02:46.334397] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:57.534 [2024-04-15 18:02:46.334426] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f19b1514000 00:15:57.534 [2024-04-15 18:02:46.335381] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:57.534 [2024-04-15 18:02:46.336394] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:57.534 [2024-04-15 18:02:46.337401] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:57.534 [2024-04-15 18:02:46.338412] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:57.534 [2024-04-15 18:02:46.339421] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:57.534 [2024-04-15 18:02:46.340429] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:57.534 [2024-04-15 18:02:46.341436] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:57.534 [2024-04-15 18:02:46.342438] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:57.534 [2024-04-15 18:02:46.343446] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:57.534 [2024-04-15 18:02:46.343468] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f19b02ca000 00:15:57.534 [2024-04-15 18:02:46.344579] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:57.534 [2024-04-15 18:02:46.360700] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:15:57.534 [2024-04-15 18:02:46.360736] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:15:57.534 [2024-04-15 18:02:46.365853] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:57.534 [2024-04-15 18:02:46.365909] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:57.534 [2024-04-15 18:02:46.366001] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq 
(no timeout) 00:15:57.534 [2024-04-15 18:02:46.366029] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:15:57.535 [2024-04-15 18:02:46.366054] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:15:57.535 [2024-04-15 18:02:46.366846] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:15:57.535 [2024-04-15 18:02:46.366867] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:15:57.535 [2024-04-15 18:02:46.366879] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:15:57.535 [2024-04-15 18:02:46.367849] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:57.535 [2024-04-15 18:02:46.367870] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:15:57.535 [2024-04-15 18:02:46.367884] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:15:57.535 [2024-04-15 18:02:46.368858] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:15:57.535 [2024-04-15 18:02:46.368879] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:57.535 [2024-04-15 18:02:46.369867] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:15:57.535 [2024-04-15 18:02:46.369888] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:15:57.535 [2024-04-15 18:02:46.369897] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:15:57.535 [2024-04-15 18:02:46.369909] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:57.535 [2024-04-15 18:02:46.370020] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:15:57.535 [2024-04-15 18:02:46.370028] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:57.535 [2024-04-15 18:02:46.370037] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:15:57.535 [2024-04-15 18:02:46.370870] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:15:57.535 [2024-04-15 18:02:46.371881] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:15:57.535 [2024-04-15 18:02:46.372896] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr 
/var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:57.535 [2024-04-15 18:02:46.373886] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:57.535 [2024-04-15 18:02:46.373953] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:57.535 [2024-04-15 18:02:46.374908] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:15:57.535 [2024-04-15 18:02:46.374928] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:57.535 [2024-04-15 18:02:46.374938] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:15:57.535 [2024-04-15 18:02:46.374962] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:15:57.535 [2024-04-15 18:02:46.374975] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:15:57.535 [2024-04-15 18:02:46.375000] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:57.535 [2024-04-15 18:02:46.375010] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:57.535 [2024-04-15 18:02:46.375030] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:57.535 [2024-04-15 18:02:46.379081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:57.535 [2024-04-15 18:02:46.379106] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:15:57.535 [2024-04-15 18:02:46.379116] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:15:57.535 [2024-04-15 18:02:46.379125] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:15:57.535 [2024-04-15 18:02:46.379133] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:57.535 [2024-04-15 18:02:46.379141] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:15:57.535 [2024-04-15 18:02:46.379150] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:15:57.535 [2024-04-15 18:02:46.379158] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:15:57.535 [2024-04-15 18:02:46.379172] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:15:57.535 [2024-04-15 18:02:46.379189] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:57.535 [2024-04-15 18:02:46.387086] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:57.535 [2024-04-15 18:02:46.387116] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:57.535 [2024-04-15 18:02:46.387131] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:57.535 [2024-04-15 18:02:46.387147] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:57.535 [2024-04-15 18:02:46.387160] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:57.535 [2024-04-15 18:02:46.387169] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:15:57.535 [2024-04-15 18:02:46.387185] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:57.535 [2024-04-15 18:02:46.387200] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:57.535 [2024-04-15 18:02:46.395071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:57.535 [2024-04-15 18:02:46.395091] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:15:57.535 [2024-04-15 18:02:46.395101] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:57.535 [2024-04-15 18:02:46.395118] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:15:57.535 [2024-04-15 18:02:46.395130] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:15:57.535 [2024-04-15 18:02:46.395144] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:57.535 [2024-04-15 18:02:46.403070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:57.535 [2024-04-15 18:02:46.403130] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:15:57.535 [2024-04-15 18:02:46.403146] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:15:57.535 [2024-04-15 18:02:46.403160] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:57.535 [2024-04-15 18:02:46.403169] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:57.535 [2024-04-15 18:02:46.403179] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:57.535 
[2024-04-15 18:02:46.411068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:57.535 [2024-04-15 18:02:46.411093] nvme_ctrlr.c:4557:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:15:57.535 [2024-04-15 18:02:46.411110] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:15:57.535 [2024-04-15 18:02:46.411126] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:15:57.535 [2024-04-15 18:02:46.411139] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:57.535 [2024-04-15 18:02:46.411147] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:57.535 [2024-04-15 18:02:46.411158] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:57.535 [2024-04-15 18:02:46.419070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:57.535 [2024-04-15 18:02:46.419104] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:57.535 [2024-04-15 18:02:46.419122] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:57.535 [2024-04-15 18:02:46.419135] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:57.535 [2024-04-15 18:02:46.419144] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:57.535 [2024-04-15 18:02:46.419154] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:57.535 [2024-04-15 18:02:46.427071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:57.535 [2024-04-15 18:02:46.427092] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:57.535 [2024-04-15 18:02:46.427105] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:15:57.535 [2024-04-15 18:02:46.427121] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:15:57.535 [2024-04-15 18:02:46.427132] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:57.535 [2024-04-15 18:02:46.427142] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:15:57.536 [2024-04-15 18:02:46.427151] nvme_ctrlr.c:2990:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:15:57.536 [2024-04-15 18:02:46.427159] 
nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:15:57.536 [2024-04-15 18:02:46.427168] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:15:57.536 [2024-04-15 18:02:46.427195] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:57.536 [2024-04-15 18:02:46.435068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:57.536 [2024-04-15 18:02:46.435096] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:57.536 [2024-04-15 18:02:46.443069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:57.536 [2024-04-15 18:02:46.443095] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:57.536 [2024-04-15 18:02:46.451070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:57.536 [2024-04-15 18:02:46.451095] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:57.536 [2024-04-15 18:02:46.459084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:57.536 [2024-04-15 18:02:46.459111] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:57.536 [2024-04-15 18:02:46.459121] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:57.536 [2024-04-15 18:02:46.459128] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:57.536 [2024-04-15 18:02:46.459135] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:57.536 [2024-04-15 18:02:46.459149] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:57.536 [2024-04-15 18:02:46.459162] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:57.536 [2024-04-15 18:02:46.459171] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:57.536 [2024-04-15 18:02:46.459181] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:57.536 [2024-04-15 18:02:46.459192] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:57.536 [2024-04-15 18:02:46.459200] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:57.536 [2024-04-15 18:02:46.459210] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:57.536 [2024-04-15 18:02:46.459223] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:57.536 [2024-04-15 18:02:46.459231] 
nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:57.536 [2024-04-15 18:02:46.459241] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:57.536 [2024-04-15 18:02:46.467071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:57.536 [2024-04-15 18:02:46.467099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:57.536 [2024-04-15 18:02:46.467116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:57.536 [2024-04-15 18:02:46.467128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:57.536 ===================================================== 00:15:57.536 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:57.536 ===================================================== 00:15:57.536 Controller Capabilities/Features 00:15:57.536 ================================ 00:15:57.536 Vendor ID: 4e58 00:15:57.536 Subsystem Vendor ID: 4e58 00:15:57.536 Serial Number: SPDK2 00:15:57.536 Model Number: SPDK bdev Controller 00:15:57.536 Firmware Version: 24.05 00:15:57.536 Recommended Arb Burst: 6 00:15:57.536 IEEE OUI Identifier: 8d 6b 50 00:15:57.536 Multi-path I/O 00:15:57.536 May have multiple subsystem ports: Yes 00:15:57.536 May have multiple controllers: Yes 00:15:57.536 Associated with SR-IOV VF: No 00:15:57.536 Max Data Transfer Size: 131072 00:15:57.536 Max Number of Namespaces: 32 00:15:57.536 Max Number of I/O Queues: 127 00:15:57.536 NVMe Specification Version (VS): 1.3 00:15:57.536 NVMe Specification Version (Identify): 1.3 00:15:57.536 Maximum Queue Entries: 256 00:15:57.536 Contiguous Queues Required: Yes 00:15:57.536 Arbitration Mechanisms Supported 00:15:57.536 Weighted Round Robin: Not Supported 00:15:57.536 Vendor Specific: Not Supported 00:15:57.536 Reset Timeout: 15000 ms 00:15:57.536 Doorbell Stride: 4 bytes 00:15:57.536 NVM Subsystem Reset: Not Supported 00:15:57.536 Command Sets Supported 00:15:57.536 NVM Command Set: Supported 00:15:57.536 Boot Partition: Not Supported 00:15:57.536 Memory Page Size Minimum: 4096 bytes 00:15:57.536 Memory Page Size Maximum: 4096 bytes 00:15:57.536 Persistent Memory Region: Not Supported 00:15:57.536 Optional Asynchronous Events Supported 00:15:57.536 Namespace Attribute Notices: Supported 00:15:57.536 Firmware Activation Notices: Not Supported 00:15:57.536 ANA Change Notices: Not Supported 00:15:57.536 PLE Aggregate Log Change Notices: Not Supported 00:15:57.536 LBA Status Info Alert Notices: Not Supported 00:15:57.536 EGE Aggregate Log Change Notices: Not Supported 00:15:57.536 Normal NVM Subsystem Shutdown event: Not Supported 00:15:57.536 Zone Descriptor Change Notices: Not Supported 00:15:57.536 Discovery Log Change Notices: Not Supported 00:15:57.536 Controller Attributes 00:15:57.536 128-bit Host Identifier: Supported 00:15:57.536 Non-Operational Permissive Mode: Not Supported 00:15:57.536 NVM Sets: Not Supported 00:15:57.536 Read Recovery Levels: Not Supported 00:15:57.536 Endurance Groups: Not Supported 00:15:57.536 Predictable Latency Mode: Not Supported 00:15:57.536 Traffic Based Keep ALive: Not Supported 00:15:57.536 Namespace Granularity: Not Supported 
00:15:57.536 SQ Associations: Not Supported 00:15:57.536 UUID List: Not Supported 00:15:57.536 Multi-Domain Subsystem: Not Supported 00:15:57.536 Fixed Capacity Management: Not Supported 00:15:57.536 Variable Capacity Management: Not Supported 00:15:57.536 Delete Endurance Group: Not Supported 00:15:57.536 Delete NVM Set: Not Supported 00:15:57.536 Extended LBA Formats Supported: Not Supported 00:15:57.536 Flexible Data Placement Supported: Not Supported 00:15:57.536 00:15:57.536 Controller Memory Buffer Support 00:15:57.536 ================================ 00:15:57.536 Supported: No 00:15:57.536 00:15:57.536 Persistent Memory Region Support 00:15:57.536 ================================ 00:15:57.536 Supported: No 00:15:57.536 00:15:57.536 Admin Command Set Attributes 00:15:57.536 ============================ 00:15:57.536 Security Send/Receive: Not Supported 00:15:57.536 Format NVM: Not Supported 00:15:57.536 Firmware Activate/Download: Not Supported 00:15:57.536 Namespace Management: Not Supported 00:15:57.536 Device Self-Test: Not Supported 00:15:57.536 Directives: Not Supported 00:15:57.536 NVMe-MI: Not Supported 00:15:57.536 Virtualization Management: Not Supported 00:15:57.536 Doorbell Buffer Config: Not Supported 00:15:57.536 Get LBA Status Capability: Not Supported 00:15:57.536 Command & Feature Lockdown Capability: Not Supported 00:15:57.536 Abort Command Limit: 4 00:15:57.536 Async Event Request Limit: 4 00:15:57.536 Number of Firmware Slots: N/A 00:15:57.536 Firmware Slot 1 Read-Only: N/A 00:15:57.536 Firmware Activation Without Reset: N/A 00:15:57.536 Multiple Update Detection Support: N/A 00:15:57.536 Firmware Update Granularity: No Information Provided 00:15:57.536 Per-Namespace SMART Log: No 00:15:57.536 Asymmetric Namespace Access Log Page: Not Supported 00:15:57.536 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:15:57.536 Command Effects Log Page: Supported 00:15:57.536 Get Log Page Extended Data: Supported 00:15:57.536 Telemetry Log Pages: Not Supported 00:15:57.536 Persistent Event Log Pages: Not Supported 00:15:57.536 Supported Log Pages Log Page: May Support 00:15:57.536 Commands Supported & Effects Log Page: Not Supported 00:15:57.536 Feature Identifiers & Effects Log Page:May Support 00:15:57.536 NVMe-MI Commands & Effects Log Page: May Support 00:15:57.536 Data Area 4 for Telemetry Log: Not Supported 00:15:57.536 Error Log Page Entries Supported: 128 00:15:57.536 Keep Alive: Supported 00:15:57.536 Keep Alive Granularity: 10000 ms 00:15:57.536 00:15:57.536 NVM Command Set Attributes 00:15:57.536 ========================== 00:15:57.536 Submission Queue Entry Size 00:15:57.536 Max: 64 00:15:57.536 Min: 64 00:15:57.536 Completion Queue Entry Size 00:15:57.536 Max: 16 00:15:57.536 Min: 16 00:15:57.536 Number of Namespaces: 32 00:15:57.536 Compare Command: Supported 00:15:57.536 Write Uncorrectable Command: Not Supported 00:15:57.536 Dataset Management Command: Supported 00:15:57.537 Write Zeroes Command: Supported 00:15:57.537 Set Features Save Field: Not Supported 00:15:57.537 Reservations: Not Supported 00:15:57.537 Timestamp: Not Supported 00:15:57.537 Copy: Supported 00:15:57.537 Volatile Write Cache: Present 00:15:57.537 Atomic Write Unit (Normal): 1 00:15:57.537 Atomic Write Unit (PFail): 1 00:15:57.537 Atomic Compare & Write Unit: 1 00:15:57.537 Fused Compare & Write: Supported 00:15:57.537 Scatter-Gather List 00:15:57.537 SGL Command Set: Supported (Dword aligned) 00:15:57.537 SGL Keyed: Not Supported 00:15:57.537 SGL Bit Bucket Descriptor: Not Supported 00:15:57.537 
SGL Metadata Pointer: Not Supported 00:15:57.537 Oversized SGL: Not Supported 00:15:57.537 SGL Metadata Address: Not Supported 00:15:57.537 SGL Offset: Not Supported 00:15:57.537 Transport SGL Data Block: Not Supported 00:15:57.537 Replay Protected Memory Block: Not Supported 00:15:57.537 00:15:57.537 Firmware Slot Information 00:15:57.537 ========================= 00:15:57.537 Active slot: 1 00:15:57.537 Slot 1 Firmware Revision: 24.05 00:15:57.537 00:15:57.537 00:15:57.537 Commands Supported and Effects 00:15:57.537 ============================== 00:15:57.537 Admin Commands 00:15:57.537 -------------- 00:15:57.537 Get Log Page (02h): Supported 00:15:57.537 Identify (06h): Supported 00:15:57.537 Abort (08h): Supported 00:15:57.537 Set Features (09h): Supported 00:15:57.537 Get Features (0Ah): Supported 00:15:57.537 Asynchronous Event Request (0Ch): Supported 00:15:57.537 Keep Alive (18h): Supported 00:15:57.537 I/O Commands 00:15:57.537 ------------ 00:15:57.537 Flush (00h): Supported LBA-Change 00:15:57.537 Write (01h): Supported LBA-Change 00:15:57.537 Read (02h): Supported 00:15:57.537 Compare (05h): Supported 00:15:57.537 Write Zeroes (08h): Supported LBA-Change 00:15:57.537 Dataset Management (09h): Supported LBA-Change 00:15:57.537 Copy (19h): Supported LBA-Change 00:15:57.537 Unknown (79h): Supported LBA-Change 00:15:57.537 Unknown (7Ah): Supported 00:15:57.537 00:15:57.537 Error Log 00:15:57.537 ========= 00:15:57.537 00:15:57.537 Arbitration 00:15:57.537 =========== 00:15:57.537 Arbitration Burst: 1 00:15:57.537 00:15:57.537 Power Management 00:15:57.537 ================ 00:15:57.537 Number of Power States: 1 00:15:57.537 Current Power State: Power State #0 00:15:57.537 Power State #0: 00:15:57.537 Max Power: 0.00 W 00:15:57.537 Non-Operational State: Operational 00:15:57.537 Entry Latency: Not Reported 00:15:57.537 Exit Latency: Not Reported 00:15:57.537 Relative Read Throughput: 0 00:15:57.537 Relative Read Latency: 0 00:15:57.537 Relative Write Throughput: 0 00:15:57.537 Relative Write Latency: 0 00:15:57.537 Idle Power: Not Reported 00:15:57.537 Active Power: Not Reported 00:15:57.537 Non-Operational Permissive Mode: Not Supported 00:15:57.537 00:15:57.537 Health Information 00:15:57.537 ================== 00:15:57.537 Critical Warnings: 00:15:57.537 Available Spare Space: OK 00:15:57.537 Temperature: OK 00:15:57.537 Device Reliability: OK 00:15:57.537 Read Only: No 00:15:57.537 Volatile Memory Backup: OK 00:15:57.537 Current Temperature: 0 Kelvin (-273 Celsius)
[2024-04-15 18:02:46.467257] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:57.537 [2024-04-15 18:02:46.475071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:57.537 [2024-04-15 18:02:46.475120] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:15:57.537 [2024-04-15 18:02:46.475139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:57.537 [2024-04-15 18:02:46.475150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:57.537 [2024-04-15 18:02:46.475160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:57.537 [2024-04-15 18:02:46.475171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:57.537 [2024-04-15 18:02:46.475237] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:57.537 [2024-04-15 18:02:46.475258] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:15:57.537 [2024-04-15 18:02:46.476241] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:57.537 [2024-04-15 18:02:46.476313] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:15:57.537 [2024-04-15 18:02:46.476337] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:15:57.537 [2024-04-15 18:02:46.477250] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:15:57.537 [2024-04-15 18:02:46.477279] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:15:57.537 [2024-04-15 18:02:46.477333] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:15:57.537 [2024-04-15 18:02:46.480085] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000
00:15:57.796 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:57.796 Available Spare: 0% 00:15:57.796 Available Spare Threshold: 0% 00:15:57.796 Life Percentage Used: 0% 00:15:57.796 Data Units Read: 0 00:15:57.796 Data Units Written: 0 00:15:57.796 Host Read Commands: 0 00:15:57.796 Host Write Commands: 0 00:15:57.796 Controller Busy Time: 0 minutes 00:15:57.796 Power Cycles: 0 00:15:57.796 Power On Hours: 0 hours 00:15:57.796 Unsafe Shutdowns: 0 00:15:57.796 Unrecoverable Media Errors: 0 00:15:57.796 Lifetime Error Log Entries: 0 00:15:57.796 Warning Temperature Time: 0 minutes 00:15:57.796 Critical Temperature Time: 0 minutes 00:15:57.796 00:15:57.796 Number of Queues 00:15:57.796 ================ 00:15:57.796 Number of I/O Submission Queues: 127 00:15:57.796 Number of I/O Completion Queues: 127 00:15:57.796 00:15:57.796 Active Namespaces 00:15:57.796 ================= 00:15:57.796 Namespace ID:1 00:15:57.796 Error Recovery Timeout: Unlimited 00:15:57.796 Command Set Identifier: NVM (00h) 00:15:57.796 Deallocate: Supported 00:15:57.796 Deallocated/Unwritten Error: Not Supported 00:15:57.796 Deallocated Read Value: Unknown 00:15:57.796 Deallocate in Write Zeroes: Not Supported 00:15:57.796 Deallocated Guard Field: 0xFFFF 00:15:57.796 Flush: Supported 00:15:57.796 Reservation: Supported 00:15:57.796 Namespace Sharing Capabilities: Multiple Controllers 00:15:57.796 Size (in LBAs): 131072 (0GiB) 00:15:57.796 Capacity (in LBAs): 131072 (0GiB) 00:15:57.796 Utilization (in LBAs): 131072 (0GiB) 00:15:57.796 NGUID: DB8831BB3C944E26A3836AFE4988445B 00:15:57.796 UUID: db8831bb-3c94-4e26-a383-6afe4988445b 00:15:57.796 Thin Provisioning: Not Supported 00:15:57.796 Per-NS Atomic Units: Yes 00:15:57.796 Atomic Boundary Size (Normal): 0 00:15:57.796 Atomic Boundary Size (PFail): 0 00:15:57.796 Atomic Boundary Offset: 0 00:15:57.796 Maximum Single Source Range Length: 65535
00:15:57.796 Maximum Copy Length: 65535 00:15:57.796 Maximum Source Range Count: 1 00:15:57.796 NGUID/EUI64 Never Reused: No 00:15:57.796 Namespace Write Protected: No 00:15:57.796 Number of LBA Formats: 1 00:15:57.796 Current LBA Format: LBA Format #00 00:15:57.796 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:57.796 00:15:57.796 18:02:46 -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:57.796 EAL: No free 2048 kB hugepages reported on node 1 00:15:57.796 [2024-04-15 18:02:46.716849] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:03.070 [2024-04-15 18:02:51.824436] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:03.070 Initializing NVMe Controllers 00:16:03.070 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:03.070 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:16:03.070 Initialization complete. Launching workers. 00:16:03.070 ======================================================== 00:16:03.070 Latency(us) 00:16:03.070 Device Information : IOPS MiB/s Average min max 00:16:03.070 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 33564.60 131.11 3814.90 1220.34 9595.01 00:16:03.070 ======================================================== 00:16:03.070 Total : 33564.60 131.11 3814.90 1220.34 9595.01 00:16:03.070 00:16:03.070 18:02:51 -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:16:03.070 EAL: No free 2048 kB hugepages reported on node 1 00:16:03.329 [2024-04-15 18:02:52.116250] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:08.628 [2024-04-15 18:02:57.138370] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:08.628 Initializing NVMe Controllers 00:16:08.628 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:08.628 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:16:08.628 Initialization complete. Launching workers. 
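Note: the spdk_nvme_perf runs at @84 (read, results above) and @85 (write, whose latency table follows below) drive the vfio-user controller from a second process. In both invocations -q is the queue depth, -o the I/O size in bytes, -w the access pattern, -t the run time in seconds, and -c the core mask (0x2 = core 1). A minimal sketch of repeating such a measurement by hand, assuming the target from this job were still listening; the randread pattern and the $PERF variable are illustrative, not taken from the script:

    PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
    # 4 KiB random reads at queue depth 128 for 5 seconds, pinned to core 1
    $PERF -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' \
          -q 128 -o 4096 -w randread -t 5 -c 0x2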
00:16:08.628 ======================================================== 00:16:08.628 Latency(us) 00:16:08.628 Device Information : IOPS MiB/s Average min max 00:16:08.628 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 32075.57 125.30 3989.82 1235.01 7741.77 00:16:08.628 ======================================================== 00:16:08.628 Total : 32075.57 125.30 3989.82 1235.01 7741.77 00:16:08.628 00:16:08.628 18:02:57 -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:16:08.628 EAL: No free 2048 kB hugepages reported on node 1 00:16:08.628 [2024-04-15 18:02:57.363492] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:13.907 [2024-04-15 18:03:02.501211] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:13.907 Initializing NVMe Controllers 00:16:13.907 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:13.907 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:13.907 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:16:13.907 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:16:13.907 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:16:13.907 Initialization complete. Launching workers. 00:16:13.907 Starting thread on core 2 00:16:13.907 Starting thread on core 3 00:16:13.907 Starting thread on core 1 00:16:13.907 18:03:02 -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:16:13.907 EAL: No free 2048 kB hugepages reported on node 1 00:16:13.907 [2024-04-15 18:03:02.849086] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:17.192 [2024-04-15 18:03:05.929669] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:17.192 Initializing NVMe Controllers 00:16:17.192 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:17.192 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:17.192 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:16:17.193 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:16:17.193 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:16:17.193 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:16:17.193 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:16:17.193 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:16:17.193 Initialization complete. Launching workers. 
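Note: before the arbitration results below, a quick check on the two perf tables above. The MiB/s column is simply IOPS multiplied by the 4096-byte I/O size; this one-liner (plain awk, nothing SPDK-specific) reproduces both figures:

    awk 'BEGIN { printf "read: %.2f MiB/s, write: %.2f MiB/s\n",
                 33564.60 * 4096 / 1048576, 32075.57 * 4096 / 1048576 }'
    # prints "read: 131.11 MiB/s, write: 125.30 MiB/s", matching the tables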
00:16:17.193 Starting thread on core 1 with urgent priority queue 00:16:17.193 Starting thread on core 2 with urgent priority queue 00:16:17.193 Starting thread on core 3 with urgent priority queue 00:16:17.193 Starting thread on core 0 with urgent priority queue 00:16:17.193 SPDK bdev Controller (SPDK2 ) core 0: 5893.33 IO/s 16.97 secs/100000 ios 00:16:17.193 SPDK bdev Controller (SPDK2 ) core 1: 5633.33 IO/s 17.75 secs/100000 ios 00:16:17.193 SPDK bdev Controller (SPDK2 ) core 2: 5991.33 IO/s 16.69 secs/100000 ios 00:16:17.193 SPDK bdev Controller (SPDK2 ) core 3: 3928.00 IO/s 25.46 secs/100000 ios 00:16:17.193 ======================================================== 00:16:17.193 00:16:17.193 18:03:05 -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:17.193 EAL: No free 2048 kB hugepages reported on node 1 00:16:17.451 [2024-04-15 18:03:06.245550] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:17.451 [2024-04-15 18:03:06.254602] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:17.451 Initializing NVMe Controllers 00:16:17.451 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:17.451 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:17.451 Namespace ID: 1 size: 0GB 00:16:17.451 Initialization complete. 00:16:17.451 INFO: using host memory buffer for IO 00:16:17.451 Hello world! 00:16:17.451 18:03:06 -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:17.451 EAL: No free 2048 kB hugepages reported on node 1 00:16:17.709 [2024-04-15 18:03:06.542925] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:19.085 Initializing NVMe Controllers 00:16:19.085 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:19.085 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:19.085 Initialization complete. Launching workers. 
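Note: in the arbitration table above, the secs/100000 ios column is the reciprocal of the IO/s column, since each worker times its fixed quota of 100000 I/Os (the -n 100000 in the echoed configuration). For the core 0 row, using nothing beyond the printed numbers:

    awk 'BEGIN { printf "%.2f s\n", 100000 / 5893.33 }'   # -> 16.97 s, as reported for core 0

The overhead tool launched just above reports per-I/O submit and complete latency histograms, which follow.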
00:16:19.085 submit (in ns) avg, min, max = 7466.1, 3447.8, 4016637.8 00:16:19.085 complete (in ns) avg, min, max = 24995.8, 2022.2, 4998888.9 00:16:19.085 00:16:19.085 Submit histogram 00:16:19.085 ================ 00:16:19.085 Range in us Cumulative Count 00:16:19.085 3.437 - 3.461: 0.0150% ( 2) 00:16:19.085 3.461 - 3.484: 0.6664% ( 87) 00:16:19.085 3.484 - 3.508: 3.3396% ( 357) 00:16:19.085 3.508 - 3.532: 6.7091% ( 450) 00:16:19.085 3.532 - 3.556: 13.3808% ( 891) 00:16:19.085 3.556 - 3.579: 24.5376% ( 1490) 00:16:19.085 3.579 - 3.603: 35.2303% ( 1428) 00:16:19.085 3.603 - 3.627: 43.8338% ( 1149) 00:16:19.085 3.627 - 3.650: 50.3332% ( 868) 00:16:19.085 3.650 - 3.674: 55.5298% ( 694) 00:16:19.085 3.674 - 3.698: 59.2512% ( 497) 00:16:19.085 3.698 - 3.721: 62.9876% ( 499) 00:16:19.085 3.721 - 3.745: 65.9453% ( 395) 00:16:19.085 3.745 - 3.769: 68.7982% ( 381) 00:16:19.085 3.769 - 3.793: 72.0329% ( 432) 00:16:19.085 3.793 - 3.816: 76.1438% ( 549) 00:16:19.086 3.816 - 3.840: 80.7413% ( 614) 00:16:19.086 3.840 - 3.864: 84.3130% ( 477) 00:16:19.086 3.864 - 3.887: 86.7091% ( 320) 00:16:19.086 3.887 - 3.911: 88.2890% ( 211) 00:16:19.086 3.911 - 3.935: 89.8690% ( 211) 00:16:19.086 3.935 - 3.959: 91.1868% ( 176) 00:16:19.086 3.959 - 3.982: 92.2201% ( 138) 00:16:19.086 3.982 - 4.006: 92.9390% ( 96) 00:16:19.086 4.006 - 4.030: 93.7477% ( 108) 00:16:19.086 4.030 - 4.053: 94.5638% ( 109) 00:16:19.086 4.053 - 4.077: 95.3051% ( 99) 00:16:19.086 4.077 - 4.101: 95.8817% ( 77) 00:16:19.086 4.101 - 4.124: 96.2112% ( 44) 00:16:19.086 4.124 - 4.148: 96.4283% ( 29) 00:16:19.086 4.148 - 4.172: 96.6455% ( 29) 00:16:19.086 4.172 - 4.196: 96.7877% ( 19) 00:16:19.086 4.196 - 4.219: 96.9300% ( 19) 00:16:19.086 4.219 - 4.243: 97.0049% ( 10) 00:16:19.086 4.243 - 4.267: 97.0947% ( 12) 00:16:19.086 4.267 - 4.290: 97.2070% ( 15) 00:16:19.086 4.290 - 4.314: 97.2969% ( 12) 00:16:19.086 4.314 - 4.338: 97.4092% ( 15) 00:16:19.086 4.338 - 4.361: 97.4392% ( 4) 00:16:19.086 4.361 - 4.385: 97.4916% ( 7) 00:16:19.086 4.385 - 4.409: 97.5365% ( 6) 00:16:19.086 4.409 - 4.433: 97.5889% ( 7) 00:16:19.086 4.480 - 4.504: 97.5964% ( 1) 00:16:19.086 4.504 - 4.527: 97.6039% ( 1) 00:16:19.086 4.599 - 4.622: 97.6114% ( 1) 00:16:19.086 4.622 - 4.646: 97.6189% ( 1) 00:16:19.086 4.646 - 4.670: 97.6488% ( 4) 00:16:19.086 4.670 - 4.693: 97.6638% ( 2) 00:16:19.086 4.693 - 4.717: 97.6937% ( 4) 00:16:19.086 4.717 - 4.741: 97.7537% ( 8) 00:16:19.086 4.741 - 4.764: 97.8136% ( 8) 00:16:19.086 4.764 - 4.788: 97.8735% ( 8) 00:16:19.086 4.788 - 4.812: 97.9633% ( 12) 00:16:19.086 4.812 - 4.836: 98.0007% ( 5) 00:16:19.086 4.836 - 4.859: 98.0307% ( 4) 00:16:19.086 4.859 - 4.883: 98.0681% ( 5) 00:16:19.086 4.883 - 4.907: 98.1206% ( 7) 00:16:19.086 4.907 - 4.930: 98.1430% ( 3) 00:16:19.086 4.954 - 4.978: 98.1655% ( 3) 00:16:19.086 4.978 - 5.001: 98.2179% ( 7) 00:16:19.086 5.001 - 5.025: 98.2628% ( 6) 00:16:19.086 5.025 - 5.049: 98.2928% ( 4) 00:16:19.086 5.049 - 5.073: 98.3302% ( 5) 00:16:19.086 5.073 - 5.096: 98.3602% ( 4) 00:16:19.086 5.096 - 5.120: 98.3826% ( 3) 00:16:19.086 5.120 - 5.144: 98.3976% ( 2) 00:16:19.086 5.167 - 5.191: 98.4126% ( 2) 00:16:19.086 5.191 - 5.215: 98.4201% ( 1) 00:16:19.086 5.239 - 5.262: 98.4350% ( 2) 00:16:19.086 5.262 - 5.286: 98.4500% ( 2) 00:16:19.086 5.310 - 5.333: 98.4575% ( 1) 00:16:19.086 5.381 - 5.404: 98.4650% ( 1) 00:16:19.086 5.476 - 5.499: 98.4725% ( 1) 00:16:19.086 5.570 - 5.594: 98.4800% ( 1) 00:16:19.086 5.641 - 5.665: 98.4949% ( 2) 00:16:19.086 5.807 - 5.831: 98.5024% ( 1) 00:16:19.086 5.926 - 5.950: 98.5099% ( 1) 
00:16:19.086 6.068 - 6.116: 98.5174% ( 1) 00:16:19.086 6.732 - 6.779: 98.5324% ( 2) 00:16:19.086 6.921 - 6.969: 98.5399% ( 1) 00:16:19.086 7.111 - 7.159: 98.5474% ( 1) 00:16:19.086 7.253 - 7.301: 98.5548% ( 1) 00:16:19.086 7.633 - 7.680: 98.5623% ( 1) 00:16:19.086 7.727 - 7.775: 98.5698% ( 1) 00:16:19.086 7.917 - 7.964: 98.5848% ( 2) 00:16:19.086 8.059 - 8.107: 98.5998% ( 2) 00:16:19.086 8.201 - 8.249: 98.6073% ( 1) 00:16:19.086 8.296 - 8.344: 98.6148% ( 1) 00:16:19.086 8.391 - 8.439: 98.6222% ( 1) 00:16:19.086 8.486 - 8.533: 98.6297% ( 1) 00:16:19.086 8.581 - 8.628: 98.6447% ( 2) 00:16:19.086 8.770 - 8.818: 98.6522% ( 1) 00:16:19.086 8.818 - 8.865: 98.6597% ( 1) 00:16:19.086 8.913 - 8.960: 98.6672% ( 1) 00:16:19.086 9.197 - 9.244: 98.6821% ( 2) 00:16:19.086 9.244 - 9.292: 98.6896% ( 1) 00:16:19.086 9.529 - 9.576: 98.6971% ( 1) 00:16:19.086 9.576 - 9.624: 98.7121% ( 2) 00:16:19.086 9.624 - 9.671: 98.7196% ( 1) 00:16:19.086 9.766 - 9.813: 98.7271% ( 1) 00:16:19.086 9.908 - 9.956: 98.7570% ( 4) 00:16:19.086 10.003 - 10.050: 98.7645% ( 1) 00:16:19.086 10.050 - 10.098: 98.7795% ( 2) 00:16:19.086 10.145 - 10.193: 98.7870% ( 1) 00:16:19.086 10.193 - 10.240: 98.7945% ( 1) 00:16:19.086 10.382 - 10.430: 98.8019% ( 1) 00:16:19.086 10.714 - 10.761: 98.8094% ( 1) 00:16:19.086 10.809 - 10.856: 98.8169% ( 1) 00:16:19.086 10.999 - 11.046: 98.8244% ( 1) 00:16:19.086 11.046 - 11.093: 98.8319% ( 1) 00:16:19.086 11.141 - 11.188: 98.8394% ( 1) 00:16:19.086 11.188 - 11.236: 98.8618% ( 3) 00:16:19.086 11.425 - 11.473: 98.8768% ( 2) 00:16:19.086 11.804 - 11.852: 98.8843% ( 1) 00:16:19.086 12.089 - 12.136: 98.8918% ( 1) 00:16:19.086 12.421 - 12.516: 98.9068% ( 2) 00:16:19.086 12.516 - 12.610: 98.9143% ( 1) 00:16:19.086 12.610 - 12.705: 98.9292% ( 2) 00:16:19.086 12.990 - 13.084: 98.9367% ( 1) 00:16:19.086 13.559 - 13.653: 98.9517% ( 2) 00:16:19.086 14.127 - 14.222: 98.9592% ( 1) 00:16:19.086 14.222 - 14.317: 98.9667% ( 1) 00:16:19.086 14.412 - 14.507: 98.9817% ( 2) 00:16:19.086 14.601 - 14.696: 98.9891% ( 1) 00:16:19.086 14.696 - 14.791: 98.9966% ( 1) 00:16:19.086 14.886 - 14.981: 99.0041% ( 1) 00:16:19.086 14.981 - 15.076: 99.0116% ( 1) 00:16:19.086 17.067 - 17.161: 99.0266% ( 2) 00:16:19.086 17.351 - 17.446: 99.0490% ( 3) 00:16:19.086 17.446 - 17.541: 99.0865% ( 5) 00:16:19.086 17.541 - 17.636: 99.1015% ( 2) 00:16:19.086 17.636 - 17.730: 99.1614% ( 8) 00:16:19.086 17.730 - 17.825: 99.2288% ( 9) 00:16:19.086 17.825 - 17.920: 99.2887% ( 8) 00:16:19.086 17.920 - 18.015: 99.3560% ( 9) 00:16:19.086 18.015 - 18.110: 99.4085% ( 7) 00:16:19.086 18.110 - 18.204: 99.4609% ( 7) 00:16:19.086 18.204 - 18.299: 99.5133% ( 7) 00:16:19.086 18.299 - 18.394: 99.6031% ( 12) 00:16:19.086 18.394 - 18.489: 99.6780% ( 10) 00:16:19.086 18.489 - 18.584: 99.7230% ( 6) 00:16:19.086 18.584 - 18.679: 99.7679% ( 6) 00:16:19.086 18.679 - 18.773: 99.7903% ( 3) 00:16:19.086 18.773 - 18.868: 99.8053% ( 2) 00:16:19.086 18.868 - 18.963: 99.8203% ( 2) 00:16:19.086 18.963 - 19.058: 99.8353% ( 2) 00:16:19.086 19.058 - 19.153: 99.8502% ( 2) 00:16:19.086 19.153 - 19.247: 99.8577% ( 1) 00:16:19.086 19.342 - 19.437: 99.8652% ( 1) 00:16:19.086 19.437 - 19.532: 99.8727% ( 1) 00:16:19.086 19.816 - 19.911: 99.8802% ( 1) 00:16:19.086 20.101 - 20.196: 99.8877% ( 1) 00:16:19.086 21.049 - 21.144: 99.8952% ( 1) 00:16:19.086 21.523 - 21.618: 99.9027% ( 1) 00:16:19.086 24.462 - 24.652: 99.9101% ( 1) 00:16:19.086 3980.705 - 4004.978: 99.9850% ( 10) 00:16:19.086 4004.978 - 4029.250: 100.0000% ( 2) 00:16:19.086 00:16:19.087 Complete histogram 00:16:19.087 
================== 00:16:19.087 Range in us Cumulative Count 00:16:19.087 2.015 - 2.027: 0.1647% ( 22) 00:16:19.087 2.027 - 2.039: 8.6410% ( 1132) 00:16:19.087 2.039 - 2.050: 12.6769% ( 539) 00:16:19.087 2.050 - 2.062: 18.3602% ( 759) 00:16:19.087 2.062 - 2.074: 52.5721% ( 4569) 00:16:19.087 2.074 - 2.086: 60.4193% ( 1048) 00:16:19.087 2.086 - 2.098: 63.1149% ( 360) 00:16:19.087 2.098 - 2.110: 67.1359% ( 537) 00:16:19.087 2.110 - 2.121: 67.7873% ( 87) 00:16:19.087 2.121 - 2.133: 72.6020% ( 643) 00:16:19.087 2.133 - 2.145: 80.5766% ( 1065) 00:16:19.087 2.145 - 2.157: 82.2314% ( 221) 00:16:19.087 2.157 - 2.169: 83.1299% ( 120) 00:16:19.087 2.169 - 2.181: 84.3280% ( 160) 00:16:19.087 2.181 - 2.193: 85.1292% ( 107) 00:16:19.087 2.193 - 2.204: 87.3081% ( 291) 00:16:19.087 2.204 - 2.216: 91.8832% ( 611) 00:16:19.087 2.216 - 2.228: 93.3658% ( 198) 00:16:19.087 2.228 - 2.240: 93.8974% ( 71) 00:16:19.087 2.240 - 2.252: 94.3841% ( 65) 00:16:19.087 2.252 - 2.264: 94.6986% ( 42) 00:16:19.087 2.264 - 2.276: 94.9307% ( 31) 00:16:19.087 2.276 - 2.287: 95.2902% ( 48) 00:16:19.087 2.287 - 2.299: 95.5897% ( 40) 00:16:19.087 2.299 - 2.311: 95.7319% ( 19) 00:16:19.087 2.311 - 2.323: 95.8143% ( 11) 00:16:19.087 2.323 - 2.335: 95.9566% ( 19) 00:16:19.087 2.335 - 2.347: 96.0988% ( 19) 00:16:19.087 2.347 - 2.359: 96.3160% ( 29) 00:16:19.087 2.359 - 2.370: 96.6230% ( 41) 00:16:19.087 2.370 - 2.382: 96.8551% ( 31) 00:16:19.087 2.382 - 2.394: 96.9974% ( 19) 00:16:19.087 2.394 - 2.406: 97.1846% ( 25) 00:16:19.087 2.406 - 2.418: 97.3268% ( 19) 00:16:19.087 2.418 - 2.430: 97.5140% ( 25) 00:16:19.087 2.430 - 2.441: 97.7087% ( 26) 00:16:19.087 2.441 - 2.453: 97.8210% ( 15) 00:16:19.087 2.453 - 2.465: 97.9259% ( 14) 00:16:19.087 2.465 - 2.477: 98.0382% ( 15) 00:16:19.087 2.477 - 2.489: 98.1131% ( 10) 00:16:19.087 2.489 - 2.501: 98.1954% ( 11) 00:16:19.087 2.501 - 2.513: 98.2179% ( 3) 00:16:19.087 2.513 - 2.524: 98.2478% ( 4) 00:16:19.087 2.524 - 2.536: 98.3003% ( 7) 00:16:19.087 2.536 - 2.548: 98.3227% ( 3) 00:16:19.087 2.548 - 2.560: 98.3452% ( 3) 00:16:19.087 2.560 - 2.572: 98.3527% ( 1) 00:16:19.087 2.572 - 2.584: 98.3602% ( 1) 00:16:19.087 2.584 - 2.596: 98.3751% ( 2) 00:16:19.087 2.667 - 2.679: 98.3826% ( 1) 00:16:19.087 2.726 - 2.738: 98.3976% ( 2) 00:16:19.087 2.785 - 2.797: 98.4051% ( 1) 00:16:19.087 2.809 - 2.821: 98.4126% ( 1) 00:16:19.087 2.821 - 2.833: 98.4201% ( 1) 00:16:19.087 2.880 - 2.892: 98.4276% ( 1) 00:16:19.087 2.999 - 3.010: 98.4350% ( 1) 00:16:19.087 3.247 - 3.271: 98.4425% ( 1) 00:16:19.087 3.390 - 3.413: 98.4500% ( 1) 00:16:19.087 3.413 - 3.437: 98.4875% ( 5) 00:16:19.087 3.461 - 3.484: 98.4949% ( 1) 00:16:19.087 3.484 - 3.508: 98.5174% ( 3) 00:16:19.087 3.508 - 3.532: 98.5399% ( 3) 00:16:19.087 3.532 - 3.556: 98.5474% ( 1) 00:16:19.087 3.556 - 3.579: 98.5548% ( 1) 00:16:19.087 3.627 - 3.650: 98.5698% ( 2) 00:16:19.087 3.650 - 3.674: 98.5773% ( 1) 00:16:19.087 3.674 - 3.698: 98.5923% ( 2) 00:16:19.087 3.721 - 3.745: 98.5998% ( 1) 00:16:19.087 3.745 - 3.769: 98.6073% ( 1) 00:16:19.087 3.816 - 3.840: 98.6297% ( 3) 00:16:19.087 3.887 - 3.911: 98.6372% ( 1) 00:16:19.087 3.911 - 3.935: 98.6447% ( 1) 00:16:19.087 3.959 - 3.982: 98.6522% ( 1) 00:16:19.087 3.982 - 4.006: 98.6597% ( 1) 00:16:19.087 4.219 - 4.243: 98.6672% ( 1) 00:16:19.087 6.116 - 6.163: 98.6747% ( 1) 00:16:19.087 6.258 - 6.305: 98.6821% ( 1) 00:16:19.087 6.305 - 6.353: 98.6896% ( 1) 00:16:19.087 6.447 - 6.495: 98.6971% ( 1) 00:16:19.087 6.542 - 6.590: 98.7046% ( 1) 00:16:19.087 6.779 - 6.827: 98.7196% ( 2) 00:16:19.087 6.921 - 6.969: 
98.7271% ( 1) 00:16:19.087 7.111 - 7.159: 98.7346% ( 1) 00:16:19.087 7.348 - 7.396: 98.7420% ( 1) 00:16:19.087 7.538 - 7.585: 98.7495% ( 1) 00:16:19.087 7.775 - 7.822: 98.7570% ( 1) 00:16:19.087 7.822 - 7.870: 98.7645% ( 1) 00:16:19.087 7.917 - 7.964: 98.7795% ( 2) 00:16:19.087 8.059 - 8.107: 98.7945% ( 2) 00:16:19.087 8.249 - 8.296: 98.8019% ( 1) 00:16:19.087 8.391 - 8.439: 98.8094% ( 1) 00:16:19.087 8.439 - 8.486: 98.8169% ( 1) 00:16:19.087 8.723 - 8.770: 98.8244% ( 1) 00:16:19.087 9.292 - 9.339: 98.8319% ( 1) 00:16:19.087 9.481 - 9.529: 98.8394% ( 1) 00:16:19.087 [2024-04-15 18:03:07.636843] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:19.087 9.956 - 10.003: 98.8469% ( 1) 00:16:19.087 10.098 - 10.145: 98.8544% ( 1) 00:16:19.087 10.145 - 10.193: 98.8618% ( 1) 00:16:19.087 11.662 - 11.710: 98.8693% ( 1) 00:16:19.087 11.852 - 11.899: 98.8768% ( 1) 00:16:19.087 15.644 - 15.739: 98.8918% ( 2) 00:16:19.087 15.739 - 15.834: 98.9068% ( 2) 00:16:19.087 15.834 - 15.929: 98.9143% ( 1) 00:16:19.087 15.929 - 16.024: 98.9592% ( 6) 00:16:19.087 16.024 - 16.119: 99.0041% ( 6) 00:16:19.087 16.119 - 16.213: 99.0565% ( 7) 00:16:19.087 16.213 - 16.308: 99.1089% ( 7) 00:16:19.087 16.308 - 16.403: 99.1239% ( 2) 00:16:19.087 16.403 - 16.498: 99.1389% ( 2) 00:16:19.087 16.498 - 16.593: 99.1913% ( 7) 00:16:19.087 16.593 - 16.687: 99.2138% ( 3) 00:16:19.087 16.687 - 16.782: 99.2662% ( 7) 00:16:19.087 16.782 - 16.877: 99.3411% ( 10) 00:16:19.087 16.972 - 17.067: 99.3560% ( 2) 00:16:19.087 17.161 - 17.256: 99.3635% ( 1) 00:16:19.087 17.256 - 17.351: 99.3710% ( 1) 00:16:19.087 17.351 - 17.446: 99.3860% ( 2) 00:16:19.087 17.446 - 17.541: 99.3935% ( 1) 00:16:19.087 17.825 - 17.920: 99.4010% ( 1) 00:16:19.087 18.110 - 18.204: 99.4085% ( 1) 00:16:19.087 18.204 - 18.299: 99.4159% ( 1) 00:16:19.087 18.584 - 18.679: 99.4234% ( 1) 00:16:19.087 24.652 - 24.841: 99.4309% ( 1) 00:16:19.087 3155.437 - 3179.710: 99.4384% ( 1) 00:16:19.087 3980.705 - 4004.978: 99.8727% ( 58) 00:16:19.087 4004.978 - 4029.250: 99.9850% ( 15) 00:16:19.087 4102.068 - 4126.341: 99.9925% ( 1) 00:16:19.087 4975.881 - 5000.154: 100.0000% ( 1) 00:16:19.087 00:16:19.087 18:03:07 -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:16:19.087 18:03:07 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:16:19.087 18:03:07 -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:16:19.087 18:03:07 -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:16:19.087 18:03:07 -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:19.087 [ 00:16:19.087 { 00:16:19.087 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:19.087 "subtype": "Discovery", 00:16:19.087 "listen_addresses": [], 00:16:19.087 "allow_any_host": true, 00:16:19.087 "hosts": [] 00:16:19.087 }, 00:16:19.087 { 00:16:19.087 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:19.087 "subtype": "NVMe", 00:16:19.087 "listen_addresses": [ 00:16:19.087 { 00:16:19.087 "transport": "VFIOUSER", 00:16:19.087 "trtype": "VFIOUSER", 00:16:19.087 "adrfam": "IPv4", 00:16:19.087 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:19.087 "trsvcid": "0" 00:16:19.087 } 00:16:19.087 ], 00:16:19.087 "allow_any_host": true, 00:16:19.087 "hosts": [], 00:16:19.087 "serial_number": "SPDK1", 00:16:19.087 "model_number": "SPDK bdev Controller", 00:16:19.087
"max_namespaces": 32, 00:16:19.087 "min_cntlid": 1, 00:16:19.087 "max_cntlid": 65519, 00:16:19.087 "namespaces": [ 00:16:19.087 { 00:16:19.087 "nsid": 1, 00:16:19.087 "bdev_name": "Malloc1", 00:16:19.087 "name": "Malloc1", 00:16:19.087 "nguid": "B1E8891739D34F4EAD19DBCE9FC1B417", 00:16:19.087 "uuid": "b1e88917-39d3-4f4e-ad19-dbce9fc1b417" 00:16:19.087 }, 00:16:19.087 { 00:16:19.087 "nsid": 2, 00:16:19.087 "bdev_name": "Malloc3", 00:16:19.087 "name": "Malloc3", 00:16:19.087 "nguid": "AA8D346381E5432690061E7F029E9345", 00:16:19.087 "uuid": "aa8d3463-81e5-4326-9006-1e7f029e9345" 00:16:19.087 } 00:16:19.087 ] 00:16:19.087 }, 00:16:19.087 { 00:16:19.087 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:19.087 "subtype": "NVMe", 00:16:19.087 "listen_addresses": [ 00:16:19.087 { 00:16:19.087 "transport": "VFIOUSER", 00:16:19.087 "trtype": "VFIOUSER", 00:16:19.087 "adrfam": "IPv4", 00:16:19.087 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:19.087 "trsvcid": "0" 00:16:19.087 } 00:16:19.087 ], 00:16:19.087 "allow_any_host": true, 00:16:19.087 "hosts": [], 00:16:19.087 "serial_number": "SPDK2", 00:16:19.087 "model_number": "SPDK bdev Controller", 00:16:19.088 "max_namespaces": 32, 00:16:19.088 "min_cntlid": 1, 00:16:19.088 "max_cntlid": 65519, 00:16:19.088 "namespaces": [ 00:16:19.088 { 00:16:19.088 "nsid": 1, 00:16:19.088 "bdev_name": "Malloc2", 00:16:19.088 "name": "Malloc2", 00:16:19.088 "nguid": "DB8831BB3C944E26A3836AFE4988445B", 00:16:19.088 "uuid": "db8831bb-3c94-4e26-a383-6afe4988445b" 00:16:19.088 } 00:16:19.088 ] 00:16:19.088 } 00:16:19.088 ] 00:16:19.088 18:03:08 -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:19.088 18:03:08 -- target/nvmf_vfio_user.sh@34 -- # aerpid=3297752 00:16:19.088 18:03:08 -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:16:19.088 18:03:08 -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:16:19.088 18:03:08 -- common/autotest_common.sh@1251 -- # local i=0 00:16:19.088 18:03:08 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:19.088 18:03:08 -- common/autotest_common.sh@1258 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:16:19.088 18:03:08 -- common/autotest_common.sh@1262 -- # return 0 00:16:19.088 18:03:08 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:16:19.088 18:03:08 -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:16:19.347 EAL: No free 2048 kB hugepages reported on node 1 00:16:19.347 [2024-04-15 18:03:08.177486] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:19.605 Malloc4 00:16:19.605 18:03:08 -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:16:19.863 [2024-04-15 18:03:08.634942] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:19.863 18:03:08 -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:19.863 Asynchronous Event Request test 00:16:19.863 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:19.863 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:19.863 Registering asynchronous event callbacks... 00:16:19.863 Starting namespace attribute notice tests for all controllers... 00:16:19.863 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:19.863 aer_cb - Changed Namespace 00:16:19.863 Cleaning up... 00:16:20.122 [ 00:16:20.122 { 00:16:20.122 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:20.122 "subtype": "Discovery", 00:16:20.122 "listen_addresses": [], 00:16:20.122 "allow_any_host": true, 00:16:20.122 "hosts": [] 00:16:20.122 }, 00:16:20.122 { 00:16:20.122 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:20.122 "subtype": "NVMe", 00:16:20.122 "listen_addresses": [ 00:16:20.122 { 00:16:20.122 "transport": "VFIOUSER", 00:16:20.122 "trtype": "VFIOUSER", 00:16:20.122 "adrfam": "IPv4", 00:16:20.122 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:20.122 "trsvcid": "0" 00:16:20.122 } 00:16:20.122 ], 00:16:20.122 "allow_any_host": true, 00:16:20.122 "hosts": [], 00:16:20.122 "serial_number": "SPDK1", 00:16:20.122 "model_number": "SPDK bdev Controller", 00:16:20.122 "max_namespaces": 32, 00:16:20.122 "min_cntlid": 1, 00:16:20.122 "max_cntlid": 65519, 00:16:20.122 "namespaces": [ 00:16:20.122 { 00:16:20.122 "nsid": 1, 00:16:20.122 "bdev_name": "Malloc1", 00:16:20.122 "name": "Malloc1", 00:16:20.122 "nguid": "B1E8891739D34F4EAD19DBCE9FC1B417", 00:16:20.122 "uuid": "b1e88917-39d3-4f4e-ad19-dbce9fc1b417" 00:16:20.122 }, 00:16:20.122 { 00:16:20.122 "nsid": 2, 00:16:20.122 "bdev_name": "Malloc3", 00:16:20.122 "name": "Malloc3", 00:16:20.122 "nguid": "AA8D346381E5432690061E7F029E9345", 00:16:20.122 "uuid": "aa8d3463-81e5-4326-9006-1e7f029e9345" 00:16:20.122 } 00:16:20.122 ] 00:16:20.122 }, 00:16:20.122 { 00:16:20.122 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:20.122 "subtype": "NVMe", 00:16:20.122 "listen_addresses": [ 00:16:20.122 { 00:16:20.122 "transport": "VFIOUSER", 00:16:20.122 "trtype": "VFIOUSER", 00:16:20.122 "adrfam": "IPv4", 00:16:20.122 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:20.122 "trsvcid": "0" 00:16:20.122 } 00:16:20.122 ], 00:16:20.122 "allow_any_host": true, 00:16:20.122 "hosts": [], 00:16:20.122 "serial_number": "SPDK2", 00:16:20.122 "model_number": "SPDK bdev Controller", 00:16:20.122 "max_namespaces": 32, 00:16:20.122 "min_cntlid": 1, 
00:16:20.122 "max_cntlid": 65519, 00:16:20.122 "namespaces": [ 00:16:20.122 { 00:16:20.122 "nsid": 1, 00:16:20.122 "bdev_name": "Malloc2", 00:16:20.122 "name": "Malloc2", 00:16:20.122 "nguid": "DB8831BB3C944E26A3836AFE4988445B", 00:16:20.122 "uuid": "db8831bb-3c94-4e26-a383-6afe4988445b" 00:16:20.122 }, 00:16:20.122 { 00:16:20.122 "nsid": 2, 00:16:20.122 "bdev_name": "Malloc4", 00:16:20.122 "name": "Malloc4", 00:16:20.122 "nguid": "45345EE3D3634C66A6946574A014D649", 00:16:20.122 "uuid": "45345ee3-d363-4c66-a694-6574a014d649" 00:16:20.122 } 00:16:20.122 ] 00:16:20.122 } 00:16:20.122 ] 00:16:20.122 18:03:08 -- target/nvmf_vfio_user.sh@44 -- # wait 3297752 00:16:20.123 18:03:08 -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:16:20.123 18:03:08 -- target/nvmf_vfio_user.sh@95 -- # killprocess 3291809 00:16:20.123 18:03:08 -- common/autotest_common.sh@936 -- # '[' -z 3291809 ']' 00:16:20.123 18:03:08 -- common/autotest_common.sh@940 -- # kill -0 3291809 00:16:20.123 18:03:08 -- common/autotest_common.sh@941 -- # uname 00:16:20.123 18:03:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:20.123 18:03:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3291809 00:16:20.123 18:03:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:20.123 18:03:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:20.123 18:03:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3291809' 00:16:20.123 killing process with pid 3291809 00:16:20.123 18:03:09 -- common/autotest_common.sh@955 -- # kill 3291809 00:16:20.123 [2024-04-15 18:03:09.017678] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:16:20.123 18:03:09 -- common/autotest_common.sh@960 -- # wait 3291809 00:16:20.690 18:03:09 -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:20.690 18:03:09 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:20.690 18:03:09 -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:16:20.690 18:03:09 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:16:20.690 18:03:09 -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:16:20.690 18:03:09 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3298156 00:16:20.690 18:03:09 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3298156' 00:16:20.690 Process pid: 3298156 00:16:20.690 18:03:09 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:20.690 18:03:09 -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3298156 00:16:20.690 18:03:09 -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:16:20.690 18:03:09 -- common/autotest_common.sh@817 -- # '[' -z 3298156 ']' 00:16:20.690 18:03:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:20.690 18:03:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:20.690 18:03:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:20.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:20.690 18:03:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:20.690 18:03:09 -- common/autotest_common.sh@10 -- # set +x 00:16:20.690 [2024-04-15 18:03:09.390026] thread.c:2927:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:16:20.690 [2024-04-15 18:03:09.391125] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:16:20.690 [2024-04-15 18:03:09.391180] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:20.690 EAL: No free 2048 kB hugepages reported on node 1 00:16:20.690 [2024-04-15 18:03:09.455770] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:20.691 [2024-04-15 18:03:09.544965] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:20.691 [2024-04-15 18:03:09.545023] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:20.691 [2024-04-15 18:03:09.545038] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:20.691 [2024-04-15 18:03:09.545050] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:20.691 [2024-04-15 18:03:09.545071] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:20.691 [2024-04-15 18:03:09.545144] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:20.691 [2024-04-15 18:03:09.545170] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:20.691 [2024-04-15 18:03:09.545220] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:20.691 [2024-04-15 18:03:09.545223] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:20.949 [2024-04-15 18:03:09.652155] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_0) to intr mode from intr mode. 00:16:20.949 [2024-04-15 18:03:09.652356] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_1) to intr mode from intr mode. 00:16:20.949 [2024-04-15 18:03:09.652634] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_2) to intr mode from intr mode. 00:16:20.949 [2024-04-15 18:03:09.653327] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:16:20.949 [2024-04-15 18:03:09.653464] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_3) to intr mode from intr mode. 
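Note: this second target instance was started with --interrupt-mode, and the transport created below gets -M -I passed through setup_nvmf_vfio_user. The RPC sequence traced below (two malloc-backed subsystems, each listening on its own vfio-user socket) condenses to roughly the following sketch; the $RPC variable and the for loop are editorial shorthand for the script's seq 1 2 iteration:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t VFIOUSER -M -I
    for i in 1 2; do
        mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
        $RPC bdev_malloc_create 64 512 -b Malloc$i                # 64 MB bdev, 512-byte blocks
        $RPC nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
        $RPC nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
        $RPC nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER \
             -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
    done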
00:16:20.949 18:03:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:20.949 18:03:09 -- common/autotest_common.sh@850 -- # return 0 00:16:20.949 18:03:09 -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:16:21.883 18:03:10 -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:16:22.141 18:03:10 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:16:22.141 18:03:10 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:16:22.141 18:03:10 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:22.141 18:03:10 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:16:22.141 18:03:10 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:22.399 Malloc1 00:16:22.399 18:03:11 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:16:22.965 18:03:11 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:16:22.965 18:03:11 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:16:23.530 18:03:12 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:23.530 18:03:12 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:16:23.530 18:03:12 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:23.787 Malloc2 00:16:23.787 18:03:12 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:16:24.353 18:03:13 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:16:24.613 18:03:13 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:16:24.932 18:03:13 -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:16:24.932 18:03:13 -- target/nvmf_vfio_user.sh@95 -- # killprocess 3298156 00:16:24.932 18:03:13 -- common/autotest_common.sh@936 -- # '[' -z 3298156 ']' 00:16:24.932 18:03:13 -- common/autotest_common.sh@940 -- # kill -0 3298156 00:16:24.932 18:03:13 -- common/autotest_common.sh@941 -- # uname 00:16:24.932 18:03:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:24.932 18:03:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3298156 00:16:24.932 18:03:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:24.932 18:03:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:24.932 18:03:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3298156' 00:16:24.932 killing process with pid 3298156 00:16:24.932 18:03:13 -- common/autotest_common.sh@955 -- # kill 3298156 00:16:24.932 18:03:13 -- common/autotest_common.sh@960 -- # wait 3298156 00:16:25.501 18:03:14 -- target/nvmf_vfio_user.sh@97 -- # rm -rf 
/var/run/vfio-user 00:16:25.501 18:03:14 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:25.501 00:16:25.501 real 0m55.107s 00:16:25.501 user 3m38.944s 00:16:25.501 sys 0m5.024s 00:16:25.501 18:03:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:25.501 18:03:14 -- common/autotest_common.sh@10 -- # set +x 00:16:25.501 ************************************ 00:16:25.501 END TEST nvmf_vfio_user 00:16:25.501 ************************************ 00:16:25.501 18:03:14 -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:25.501 18:03:14 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:25.501 18:03:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:25.501 18:03:14 -- common/autotest_common.sh@10 -- # set +x 00:16:25.501 ************************************ 00:16:25.501 START TEST nvmf_vfio_user_nvme_compliance 00:16:25.501 ************************************ 00:16:25.501 18:03:14 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:25.501 * Looking for test storage... 00:16:25.501 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:16:25.501 18:03:14 -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:25.501 18:03:14 -- nvmf/common.sh@7 -- # uname -s 00:16:25.501 18:03:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:25.501 18:03:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:25.501 18:03:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:25.501 18:03:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:25.501 18:03:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:25.501 18:03:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:25.501 18:03:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:25.501 18:03:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:25.501 18:03:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:25.501 18:03:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:25.501 18:03:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:25.501 18:03:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:16:25.501 18:03:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:25.501 18:03:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:25.501 18:03:14 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:25.501 18:03:14 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:25.501 18:03:14 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:25.501 18:03:14 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:25.501 18:03:14 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:25.501 18:03:14 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:25.502 18:03:14 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.502 18:03:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.502 18:03:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.502 18:03:14 -- paths/export.sh@5 -- # export PATH 00:16:25.502 18:03:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.502 18:03:14 -- nvmf/common.sh@47 -- # : 0 00:16:25.502 18:03:14 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:25.502 18:03:14 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:25.502 18:03:14 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:25.502 18:03:14 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:25.502 18:03:14 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:25.502 18:03:14 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:25.502 18:03:14 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:25.502 18:03:14 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:25.502 18:03:14 -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:25.502 18:03:14 -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:25.502 18:03:14 -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:16:25.502 18:03:14 -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:16:25.502 18:03:14 -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:16:25.502 18:03:14 -- compliance/compliance.sh@20 -- # nvmfpid=3298863 00:16:25.502 18:03:14 -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 
-m 0x7 00:16:25.502 18:03:14 -- compliance/compliance.sh@21 -- # echo 'Process pid: 3298863' 00:16:25.502 Process pid: 3298863 00:16:25.502 18:03:14 -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:25.502 18:03:14 -- compliance/compliance.sh@24 -- # waitforlisten 3298863 00:16:25.502 18:03:14 -- common/autotest_common.sh@817 -- # '[' -z 3298863 ']' 00:16:25.502 18:03:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:25.502 18:03:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:25.502 18:03:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:25.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:25.502 18:03:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:25.502 18:03:14 -- common/autotest_common.sh@10 -- # set +x 00:16:25.502 [2024-04-15 18:03:14.449884] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:16:25.502 [2024-04-15 18:03:14.449977] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:25.762 EAL: No free 2048 kB hugepages reported on node 1 00:16:25.762 [2024-04-15 18:03:14.523648] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:25.762 [2024-04-15 18:03:14.621686] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:25.762 [2024-04-15 18:03:14.621744] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:25.762 [2024-04-15 18:03:14.621762] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:25.762 [2024-04-15 18:03:14.621776] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:25.762 [2024-04-15 18:03:14.621789] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
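Note: the compliance suite that runs next follows the same recipe as the vfio-user tests above: a VFIOUSER transport, one malloc-backed subsystem listening on /var/run/vfio-user, and the nvme_compliance binary pointed at it. Condensed from the rpc_cmd trace below (rpc_cmd is the test framework's wrapper around scripts/rpc.py; the $RPC variable is editorial):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t VFIOUSER
    $RPC bdev_malloc_create 64 512 -b malloc0
    $RPC nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32   # -m 32: cap at 32 namespaces
    $RPC nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    $RPC nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g \
        -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'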
00:16:25.762 [2024-04-15 18:03:14.621880] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:25.762 [2024-04-15 18:03:14.621947] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:25.762 [2024-04-15 18:03:14.621950] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:26.021 18:03:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:26.021 18:03:14 -- common/autotest_common.sh@850 -- # return 0 00:16:26.021 18:03:14 -- compliance/compliance.sh@26 -- # sleep 1 00:16:26.961 18:03:15 -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:26.961 18:03:15 -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:16:26.961 18:03:15 -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:26.961 18:03:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:26.961 18:03:15 -- common/autotest_common.sh@10 -- # set +x 00:16:26.961 18:03:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:26.961 18:03:15 -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:16:26.961 18:03:15 -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:26.961 18:03:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:26.961 18:03:15 -- common/autotest_common.sh@10 -- # set +x 00:16:26.961 malloc0 00:16:26.961 18:03:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:26.961 18:03:15 -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:16:26.961 18:03:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:26.961 18:03:15 -- common/autotest_common.sh@10 -- # set +x 00:16:26.961 18:03:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:26.961 18:03:15 -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:26.961 18:03:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:26.961 18:03:15 -- common/autotest_common.sh@10 -- # set +x 00:16:26.961 18:03:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:26.961 18:03:15 -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:26.961 18:03:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:26.961 18:03:15 -- common/autotest_common.sh@10 -- # set +x 00:16:26.961 18:03:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:26.961 18:03:15 -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:16:26.961 EAL: No free 2048 kB hugepages reported on node 1 00:16:27.219 00:16:27.219 00:16:27.219 CUnit - A unit testing framework for C - Version 2.1-3 00:16:27.219 http://cunit.sourceforge.net/ 00:16:27.219 00:16:27.219 00:16:27.219 Suite: nvme_compliance 00:16:27.219 Test: admin_identify_ctrlr_verify_dptr ...[2024-04-15 18:03:16.001854] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:27.219 [2024-04-15 18:03:16.003327] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:16:27.219 [2024-04-15 18:03:16.003354] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:16:27.219 [2024-04-15 18:03:16.003366] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:16:27.219 
[2024-04-15 18:03:16.004876] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:27.219 passed 00:16:27.219 Test: admin_identify_ctrlr_verify_fused ...[2024-04-15 18:03:16.090458] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:27.219 [2024-04-15 18:03:16.093472] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:27.219 passed 00:16:27.479 Test: admin_identify_ns ...[2024-04-15 18:03:16.177600] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:27.479 [2024-04-15 18:03:16.238073] ctrlr.c:2656:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:16:27.479 [2024-04-15 18:03:16.246078] ctrlr.c:2656:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:16:27.479 [2024-04-15 18:03:16.267186] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:27.479 passed 00:16:27.479 Test: admin_get_features_mandatory_features ...[2024-04-15 18:03:16.350833] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:27.479 [2024-04-15 18:03:16.353859] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:27.479 passed 00:16:27.739 Test: admin_get_features_optional_features ...[2024-04-15 18:03:16.438410] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:27.739 [2024-04-15 18:03:16.441428] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:27.739 passed 00:16:27.739 Test: admin_set_features_number_of_queues ...[2024-04-15 18:03:16.523549] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:27.739 [2024-04-15 18:03:16.628193] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:27.739 passed 00:16:27.999 Test: admin_get_log_page_mandatory_logs ...[2024-04-15 18:03:16.711777] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:27.999 [2024-04-15 18:03:16.714804] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:27.999 passed 00:16:27.999 Test: admin_get_log_page_with_lpo ...[2024-04-15 18:03:16.798635] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:27.999 [2024-04-15 18:03:16.867090] ctrlr.c:2604:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:16:27.999 [2024-04-15 18:03:16.880169] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:27.999 passed 00:16:28.259 Test: fabric_property_get ...[2024-04-15 18:03:16.963828] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:28.259 [2024-04-15 18:03:16.965133] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:16:28.259 [2024-04-15 18:03:16.966852] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:28.259 passed 00:16:28.259 Test: admin_delete_io_sq_use_admin_qid ...[2024-04-15 18:03:17.048401] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:28.259 [2024-04-15 18:03:17.049651] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:16:28.259 [2024-04-15 18:03:17.051434] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 
00:16:28.259 passed 00:16:28.259 Test: admin_delete_io_sq_delete_sq_twice ...[2024-04-15 18:03:17.136492] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:28.518 [2024-04-15 18:03:17.220067] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:28.518 [2024-04-15 18:03:17.236069] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:28.518 [2024-04-15 18:03:17.241174] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:28.518 passed 00:16:28.518 Test: admin_delete_io_cq_use_admin_qid ...[2024-04-15 18:03:17.324800] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:28.518 [2024-04-15 18:03:17.326121] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:16:28.518 [2024-04-15 18:03:17.327818] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:28.518 passed 00:16:28.518 Test: admin_delete_io_cq_delete_cq_first ...[2024-04-15 18:03:17.408911] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:28.776 [2024-04-15 18:03:17.484069] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:28.776 [2024-04-15 18:03:17.508067] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:28.776 [2024-04-15 18:03:17.513172] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:28.776 passed 00:16:28.776 Test: admin_create_io_cq_verify_iv_pc ...[2024-04-15 18:03:17.597258] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:28.776 [2024-04-15 18:03:17.598555] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:16:28.776 [2024-04-15 18:03:17.598606] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:16:28.776 [2024-04-15 18:03:17.600285] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:28.776 passed 00:16:28.776 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-04-15 18:03:17.682684] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:29.034 [2024-04-15 18:03:17.775072] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:16:29.034 [2024-04-15 18:03:17.783081] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:16:29.034 [2024-04-15 18:03:17.791097] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:16:29.034 [2024-04-15 18:03:17.799072] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:16:29.034 [2024-04-15 18:03:17.828177] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:29.034 passed 00:16:29.034 Test: admin_create_io_sq_verify_pc ...[2024-04-15 18:03:17.911835] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:29.035 [2024-04-15 18:03:17.928100] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:16:29.035 [2024-04-15 18:03:17.945190] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:29.035 passed 00:16:29.294 Test: admin_create_io_qp_max_qps ...[2024-04-15 18:03:18.027742] 
vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:30.233 [2024-04-15 18:03:19.136078] nvme_ctrlr.c:5329:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:16:30.800 [2024-04-15 18:03:19.511147] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:30.800 passed 00:16:30.800 Test: admin_create_io_sq_shared_cq ...[2024-04-15 18:03:19.595583] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:30.800 [2024-04-15 18:03:19.727067] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:31.058 [2024-04-15 18:03:19.764173] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:31.058 passed 00:16:31.058 00:16:31.058 Run Summary: Type Total Ran Passed Failed Inactive 00:16:31.058 suites 1 1 n/a 0 0 00:16:31.058 tests 18 18 18 0 0 00:16:31.058 asserts 360 360 360 0 n/a 00:16:31.058 00:16:31.058 Elapsed time = 1.560 seconds 00:16:31.058 18:03:19 -- compliance/compliance.sh@42 -- # killprocess 3298863 00:16:31.058 18:03:19 -- common/autotest_common.sh@936 -- # '[' -z 3298863 ']' 00:16:31.058 18:03:19 -- common/autotest_common.sh@940 -- # kill -0 3298863 00:16:31.058 18:03:19 -- common/autotest_common.sh@941 -- # uname 00:16:31.058 18:03:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:31.058 18:03:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3298863 00:16:31.058 18:03:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:31.058 18:03:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:31.058 18:03:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3298863' 00:16:31.058 killing process with pid 3298863 00:16:31.059 18:03:19 -- common/autotest_common.sh@955 -- # kill 3298863 00:16:31.059 18:03:19 -- common/autotest_common.sh@960 -- # wait 3298863 00:16:31.316 18:03:20 -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:16:31.316 18:03:20 -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:16:31.316 00:16:31.316 real 0m5.805s 00:16:31.316 user 0m16.316s 00:16:31.316 sys 0m0.604s 00:16:31.316 18:03:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:31.316 18:03:20 -- common/autotest_common.sh@10 -- # set +x 00:16:31.316 ************************************ 00:16:31.316 END TEST nvmf_vfio_user_nvme_compliance 00:16:31.316 ************************************ 00:16:31.316 18:03:20 -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:31.316 18:03:20 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:31.316 18:03:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:31.316 18:03:20 -- common/autotest_common.sh@10 -- # set +x 00:16:31.316 ************************************ 00:16:31.316 START TEST nvmf_vfio_user_fuzz 00:16:31.316 ************************************ 00:16:31.316 18:03:20 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:31.575 * Looking for test storage... 
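Both the compliance run above and the fuzz test starting here provision their vfio-user endpoint through the rpc_cmd wrapper. Hand-run, the same sequence is roughly the following direct rpc.py calls (the ./spdk path is an assumption; rpc_cmd adds xtrace and retry handling on top of this):

  ./spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER
  mkdir -p /var/run/vfio-user
  # 64 MB malloc bdev with 512-byte blocks, to be exposed as a namespace
  ./spdk/scripts/rpc.py bdev_malloc_create 64 512 -b malloc0
  # -a allows any host; the compliance variant also passes -m 32 to cap namespaces
  ./spdk/scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
  ./spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
  ./spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 \
      -t VFIOUSER -a /var/run/vfio-user -s 0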
00:16:31.575 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:31.575 18:03:20 -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:31.575 18:03:20 -- nvmf/common.sh@7 -- # uname -s 00:16:31.575 18:03:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:31.575 18:03:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:31.575 18:03:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:31.575 18:03:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:31.575 18:03:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:31.575 18:03:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:31.575 18:03:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:31.575 18:03:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:31.575 18:03:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:31.575 18:03:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:31.575 18:03:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:31.575 18:03:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:16:31.575 18:03:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:31.575 18:03:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:31.575 18:03:20 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:31.575 18:03:20 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:31.575 18:03:20 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:31.575 18:03:20 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:31.575 18:03:20 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:31.575 18:03:20 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:31.575 18:03:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.575 18:03:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.575 18:03:20 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.575 18:03:20 -- paths/export.sh@5 -- # export PATH 00:16:31.575 18:03:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.575 18:03:20 -- nvmf/common.sh@47 -- # : 0 00:16:31.575 18:03:20 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:31.575 18:03:20 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:31.575 18:03:20 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:31.575 18:03:20 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:31.575 18:03:20 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:31.575 18:03:20 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:31.575 18:03:20 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:31.575 18:03:20 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:31.575 18:03:20 -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:31.575 18:03:20 -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:31.575 18:03:20 -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:31.575 18:03:20 -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:16:31.575 18:03:20 -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:16:31.575 18:03:20 -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:16:31.575 18:03:20 -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:16:31.575 18:03:20 -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=3299694 00:16:31.575 18:03:20 -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:31.575 18:03:20 -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 3299694' 00:16:31.575 Process pid: 3299694 00:16:31.575 18:03:20 -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:31.575 18:03:20 -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 3299694 00:16:31.575 18:03:20 -- common/autotest_common.sh@817 -- # '[' -z 3299694 ']' 00:16:31.575 18:03:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:31.575 18:03:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:31.575 18:03:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:31.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
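One detail from the common.sh sourcing above: the host identity is not hard-coded. NVME_HOSTNQN comes from nvme-cli, and NVME_HOSTID is its trailing UUID, which can be reproduced by hand (a sketch of the observable behavior, not the script verbatim; nvme-cli must be installed):

  # emits nqn.2014-08.org.nvmexpress:uuid:<uuid derived for this host>
  NVME_HOSTNQN=$(nvme gen-hostnqn)
  # strip everything up to the last ':' to get the bare host UUID
  NVME_HOSTID=${NVME_HOSTNQN##*:}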
00:16:31.575 18:03:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:31.575 18:03:20 -- common/autotest_common.sh@10 -- # set +x 00:16:31.834 18:03:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:31.834 18:03:20 -- common/autotest_common.sh@850 -- # return 0 00:16:31.834 18:03:20 -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:16:32.770 18:03:21 -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:32.770 18:03:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:32.770 18:03:21 -- common/autotest_common.sh@10 -- # set +x 00:16:32.770 18:03:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:32.770 18:03:21 -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:16:32.770 18:03:21 -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:32.770 18:03:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:32.770 18:03:21 -- common/autotest_common.sh@10 -- # set +x 00:16:32.770 malloc0 00:16:32.770 18:03:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:32.770 18:03:21 -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:16:32.770 18:03:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:32.770 18:03:21 -- common/autotest_common.sh@10 -- # set +x 00:16:33.028 18:03:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:33.028 18:03:21 -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:33.028 18:03:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:33.028 18:03:21 -- common/autotest_common.sh@10 -- # set +x 00:16:33.028 18:03:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:33.028 18:03:21 -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:33.028 18:03:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:33.028 18:03:21 -- common/autotest_common.sh@10 -- # set +x 00:16:33.028 18:03:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:33.028 18:03:21 -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:16:33.028 18:03:21 -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/vfio_user_fuzz -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:17:05.109 Fuzzing completed. 
Shutting down the fuzz application 00:17:05.109 00:17:05.109 Dumping successful admin opcodes: 00:17:05.109 8, 9, 10, 24, 00:17:05.109 Dumping successful io opcodes: 00:17:05.109 0, 00:17:05.109 NS: 0x200003a1ef00 I/O qp, Total commands completed: 412469, total successful commands: 1621, random_seed: 2054122880 00:17:05.109 NS: 0x200003a1ef00 admin qp, Total commands completed: 52806, total successful commands: 423, random_seed: 2925680896 00:17:05.109 18:03:53 -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:17:05.109 18:03:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:05.109 18:03:53 -- common/autotest_common.sh@10 -- # set +x 00:17:05.109 18:03:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:05.109 18:03:53 -- target/vfio_user_fuzz.sh@46 -- # killprocess 3299694 00:17:05.109 18:03:53 -- common/autotest_common.sh@936 -- # '[' -z 3299694 ']' 00:17:05.109 18:03:53 -- common/autotest_common.sh@940 -- # kill -0 3299694 00:17:05.109 18:03:53 -- common/autotest_common.sh@941 -- # uname 00:17:05.109 18:03:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:05.109 18:03:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3299694 00:17:05.109 18:03:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:05.109 18:03:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:05.109 18:03:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3299694' 00:17:05.109 killing process with pid 3299694 00:17:05.109 18:03:53 -- common/autotest_common.sh@955 -- # kill 3299694 00:17:05.109 18:03:53 -- common/autotest_common.sh@960 -- # wait 3299694 00:17:05.109 18:03:53 -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:17:05.109 18:03:53 -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:17:05.109 00:17:05.109 real 0m33.261s 00:17:05.109 user 0m32.300s 00:17:05.109 sys 0m22.752s 00:17:05.109 18:03:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:05.109 18:03:53 -- common/autotest_common.sh@10 -- # set +x 00:17:05.109 ************************************ 00:17:05.109 END TEST nvmf_vfio_user_fuzz 00:17:05.109 ************************************ 00:17:05.109 18:03:53 -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:17:05.109 18:03:53 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:05.109 18:03:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:05.109 18:03:53 -- common/autotest_common.sh@10 -- # set +x 00:17:05.109 ************************************ 00:17:05.109 START TEST nvmf_host_management 00:17:05.109 ************************************ 00:17:05.109 18:03:53 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:17:05.109 * Looking for test storage... 
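For reference before moving on: the fuzz pass summarized above was a single bounded, reproducible invocation. Its shape, with flags copied from the run above (-t caps the runtime in seconds, -S fixes the random seed so the reported seeds can be replayed; -m, -r, -N and -a are taken verbatim from the harness), was:

  ./spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/vfio_user_fuzz \
      -t 30 -S 123456 -N -a \
      -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user'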
00:17:05.109 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:05.109 18:03:53 -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:05.109 18:03:53 -- nvmf/common.sh@7 -- # uname -s 00:17:05.109 18:03:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:05.109 18:03:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:05.109 18:03:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:05.109 18:03:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:05.109 18:03:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:05.109 18:03:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:05.109 18:03:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:05.109 18:03:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:05.109 18:03:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:05.109 18:03:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:05.109 18:03:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:05.109 18:03:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:17:05.109 18:03:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:05.109 18:03:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:05.109 18:03:53 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:05.109 18:03:53 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:05.109 18:03:53 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:05.109 18:03:53 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:05.109 18:03:53 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:05.109 18:03:53 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:05.110 18:03:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.110 18:03:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.110 18:03:53 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.110 18:03:53 -- paths/export.sh@5 -- # export PATH 00:17:05.110 18:03:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.110 18:03:53 -- nvmf/common.sh@47 -- # : 0 00:17:05.110 18:03:53 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:05.110 18:03:53 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:05.110 18:03:53 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:05.110 18:03:53 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:05.110 18:03:53 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:05.110 18:03:53 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:05.110 18:03:53 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:05.110 18:03:53 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:05.110 18:03:53 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:05.110 18:03:53 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:05.110 18:03:53 -- target/host_management.sh@104 -- # nvmftestinit 00:17:05.110 18:03:53 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:05.110 18:03:53 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:05.110 18:03:53 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:05.110 18:03:53 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:05.110 18:03:53 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:05.110 18:03:53 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:05.110 18:03:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:05.110 18:03:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:05.110 18:03:53 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:17:05.110 18:03:53 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:05.110 18:03:53 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:05.110 18:03:53 -- common/autotest_common.sh@10 -- # set +x 00:17:07.079 18:03:55 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:07.079 18:03:55 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:07.079 18:03:55 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:07.079 18:03:55 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:07.079 18:03:55 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:07.079 18:03:55 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:07.079 18:03:55 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:07.079 18:03:55 -- nvmf/common.sh@295 -- # net_devs=() 00:17:07.079 18:03:55 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:07.079 
18:03:55 -- nvmf/common.sh@296 -- # e810=() 00:17:07.079 18:03:55 -- nvmf/common.sh@296 -- # local -ga e810 00:17:07.079 18:03:55 -- nvmf/common.sh@297 -- # x722=() 00:17:07.079 18:03:55 -- nvmf/common.sh@297 -- # local -ga x722 00:17:07.079 18:03:55 -- nvmf/common.sh@298 -- # mlx=() 00:17:07.079 18:03:55 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:07.079 18:03:55 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:07.079 18:03:55 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:07.079 18:03:55 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:07.079 18:03:55 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:07.079 18:03:55 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:07.079 18:03:55 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:07.079 18:03:55 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:07.079 18:03:55 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:07.079 18:03:55 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:07.079 18:03:55 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:07.079 18:03:55 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:07.079 18:03:55 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:07.079 18:03:55 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:07.079 18:03:55 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:07.079 18:03:55 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:07.079 18:03:55 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:07.079 18:03:55 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:07.079 18:03:55 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:07.079 18:03:55 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:17:07.079 Found 0000:84:00.0 (0x8086 - 0x159b) 00:17:07.079 18:03:55 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:07.079 18:03:55 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:07.079 18:03:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:07.079 18:03:55 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:07.079 18:03:55 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:07.079 18:03:55 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:07.079 18:03:55 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:17:07.079 Found 0000:84:00.1 (0x8086 - 0x159b) 00:17:07.079 18:03:55 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:07.079 18:03:55 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:07.079 18:03:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:07.079 18:03:55 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:07.079 18:03:55 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:07.079 18:03:55 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:07.079 18:03:55 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:07.079 18:03:55 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:07.079 18:03:55 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:07.079 18:03:55 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:07.079 18:03:55 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:07.080 18:03:55 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:07.080 18:03:55 -- nvmf/common.sh@389 -- # echo 'Found net devices under 
0000:84:00.0: cvl_0_0' 00:17:07.080 Found net devices under 0000:84:00.0: cvl_0_0 00:17:07.080 18:03:55 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:07.080 18:03:55 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:07.080 18:03:55 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:07.080 18:03:55 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:07.080 18:03:55 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:07.080 18:03:55 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:17:07.080 Found net devices under 0000:84:00.1: cvl_0_1 00:17:07.080 18:03:55 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:07.080 18:03:55 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:07.080 18:03:55 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:07.080 18:03:55 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:07.080 18:03:55 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:17:07.080 18:03:55 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:17:07.080 18:03:55 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:07.080 18:03:55 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:07.080 18:03:55 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:07.080 18:03:55 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:07.080 18:03:55 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:07.080 18:03:55 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:07.080 18:03:55 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:07.080 18:03:55 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:07.080 18:03:55 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:07.080 18:03:55 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:07.080 18:03:55 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:07.080 18:03:55 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:07.080 18:03:55 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:07.080 18:03:55 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:07.080 18:03:55 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:07.080 18:03:55 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:07.080 18:03:55 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:07.080 18:03:56 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:07.080 18:03:56 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:07.080 18:03:56 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:07.080 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:07.080 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.250 ms 00:17:07.080 00:17:07.080 --- 10.0.0.2 ping statistics --- 00:17:07.080 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:07.080 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:17:07.080 18:03:56 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:07.080 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:07.080 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:17:07.080 00:17:07.080 --- 10.0.0.1 ping statistics --- 00:17:07.080 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:07.080 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:17:07.080 18:03:56 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:07.080 18:03:56 -- nvmf/common.sh@411 -- # return 0 00:17:07.080 18:03:56 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:07.080 18:03:56 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:07.080 18:03:56 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:07.080 18:03:56 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:07.080 18:03:56 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:07.080 18:03:56 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:07.080 18:03:56 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:07.339 18:03:56 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:17:07.339 18:03:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:07.339 18:03:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:07.339 18:03:56 -- common/autotest_common.sh@10 -- # set +x 00:17:07.339 ************************************ 00:17:07.339 START TEST nvmf_host_management 00:17:07.339 ************************************ 00:17:07.339 18:03:56 -- common/autotest_common.sh@1111 -- # nvmf_host_management 00:17:07.339 18:03:56 -- target/host_management.sh@69 -- # starttarget 00:17:07.339 18:03:56 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:17:07.339 18:03:56 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:07.339 18:03:56 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:07.339 18:03:56 -- common/autotest_common.sh@10 -- # set +x 00:17:07.339 18:03:56 -- nvmf/common.sh@470 -- # nvmfpid=3305133 00:17:07.339 18:03:56 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:17:07.339 18:03:56 -- nvmf/common.sh@471 -- # waitforlisten 3305133 00:17:07.339 18:03:56 -- common/autotest_common.sh@817 -- # '[' -z 3305133 ']' 00:17:07.339 18:03:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:07.339 18:03:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:07.339 18:03:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:07.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:07.339 18:03:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:07.339 18:03:56 -- common/autotest_common.sh@10 -- # set +x 00:17:07.339 [2024-04-15 18:03:56.203582] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:17:07.339 [2024-04-15 18:03:56.203665] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:07.339 EAL: No free 2048 kB hugepages reported on node 1 00:17:07.339 [2024-04-15 18:03:56.285660] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:07.598 [2024-04-15 18:03:56.385094] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:17:07.598 [2024-04-15 18:03:56.385159] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:07.598 [2024-04-15 18:03:56.385177] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:07.598 [2024-04-15 18:03:56.385191] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:07.598 [2024-04-15 18:03:56.385203] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:07.598 [2024-04-15 18:03:56.385297] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:07.598 [2024-04-15 18:03:56.385356] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:07.598 [2024-04-15 18:03:56.385411] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:07.598 [2024-04-15 18:03:56.385413] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:07.598 18:03:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:07.598 18:03:56 -- common/autotest_common.sh@850 -- # return 0 00:17:07.598 18:03:56 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:07.598 18:03:56 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:07.598 18:03:56 -- common/autotest_common.sh@10 -- # set +x 00:17:07.598 18:03:56 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:07.598 18:03:56 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:07.598 18:03:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:07.598 18:03:56 -- common/autotest_common.sh@10 -- # set +x 00:17:07.598 [2024-04-15 18:03:56.540828] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:07.598 18:03:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:07.598 18:03:56 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:17:07.598 18:03:56 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:07.598 18:03:56 -- common/autotest_common.sh@10 -- # set +x 00:17:07.857 18:03:56 -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:07.857 18:03:56 -- target/host_management.sh@23 -- # cat 00:17:07.857 18:03:56 -- target/host_management.sh@30 -- # rpc_cmd 00:17:07.857 18:03:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:07.857 18:03:56 -- common/autotest_common.sh@10 -- # set +x 00:17:07.857 Malloc0 00:17:07.857 [2024-04-15 18:03:56.601557] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:07.857 18:03:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:07.857 18:03:56 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:17:07.857 18:03:56 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:07.857 18:03:56 -- common/autotest_common.sh@10 -- # set +x 00:17:07.857 18:03:56 -- target/host_management.sh@73 -- # perfpid=3305246 00:17:07.857 18:03:56 -- target/host_management.sh@74 -- # waitforlisten 3305246 /var/tmp/bdevperf.sock 00:17:07.857 18:03:56 -- common/autotest_common.sh@817 -- # '[' -z 3305246 ']' 00:17:07.857 18:03:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:07.857 18:03:56 -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w 
verify -t 10 00:17:07.857 18:03:56 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:17:07.857 18:03:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:07.857 18:03:56 -- nvmf/common.sh@521 -- # config=() 00:17:07.857 18:03:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:07.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:07.857 18:03:56 -- nvmf/common.sh@521 -- # local subsystem config 00:17:07.857 18:03:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:07.857 18:03:56 -- common/autotest_common.sh@10 -- # set +x 00:17:07.857 18:03:56 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:07.857 18:03:56 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:07.857 { 00:17:07.857 "params": { 00:17:07.857 "name": "Nvme$subsystem", 00:17:07.857 "trtype": "$TEST_TRANSPORT", 00:17:07.857 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:07.857 "adrfam": "ipv4", 00:17:07.857 "trsvcid": "$NVMF_PORT", 00:17:07.857 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:07.857 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:07.857 "hdgst": ${hdgst:-false}, 00:17:07.857 "ddgst": ${ddgst:-false} 00:17:07.857 }, 00:17:07.857 "method": "bdev_nvme_attach_controller" 00:17:07.857 } 00:17:07.857 EOF 00:17:07.857 )") 00:17:07.857 18:03:56 -- nvmf/common.sh@543 -- # cat 00:17:07.857 18:03:56 -- nvmf/common.sh@545 -- # jq . 00:17:07.857 18:03:56 -- nvmf/common.sh@546 -- # IFS=, 00:17:07.857 18:03:56 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:17:07.857 "params": { 00:17:07.857 "name": "Nvme0", 00:17:07.857 "trtype": "tcp", 00:17:07.857 "traddr": "10.0.0.2", 00:17:07.857 "adrfam": "ipv4", 00:17:07.857 "trsvcid": "4420", 00:17:07.857 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:07.857 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:17:07.857 "hdgst": false, 00:17:07.857 "ddgst": false 00:17:07.857 }, 00:17:07.857 "method": "bdev_nvme_attach_controller" 00:17:07.857 }' 00:17:07.857 [2024-04-15 18:03:56.680730] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:17:07.857 [2024-04-15 18:03:56.680817] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3305246 ] 00:17:07.857 EAL: No free 2048 kB hugepages reported on node 1 00:17:07.857 [2024-04-15 18:03:56.747818] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:08.116 [2024-04-15 18:03:56.835178] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:08.116 Running I/O for 10 seconds... 
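The --json /dev/fd/63 argument above is a bash process substitution carrying the JSON that gen_nvmf_target_json assembles. An equivalent stand-alone run can write the config to a file first; the attach parameters below are copied from the generated JSON above, while the "subsystems"/"bdev"/"config" wrapper is the standard SPDK JSON-config shape and /tmp/nvme0.json is a hypothetical path:

  cat > /tmp/nvme0.json <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": {
              "name": "Nvme0",
              "trtype": "tcp",
              "traddr": "10.0.0.2",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode0",
              "hostnqn": "nqn.2016-06.io.spdk:host0",
              "hdgst": false,
              "ddgst": false
            }
          }
        ]
      }
    ]
  }
  EOF
  # 64-deep queue, 64 KiB I/O, verify workload for 10 seconds, as in the run above
  ./spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock \
      --json /tmp/nvme0.json -q 64 -o 65536 -w verify -t 10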
00:17:08.116 18:03:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:08.116 18:03:57 -- common/autotest_common.sh@850 -- # return 0 00:17:08.116 18:03:57 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:17:08.116 18:03:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:08.116 18:03:57 -- common/autotest_common.sh@10 -- # set +x 00:17:08.116 18:03:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:08.116 18:03:57 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:08.116 18:03:57 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:17:08.116 18:03:57 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:17:08.116 18:03:57 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:17:08.116 18:03:57 -- target/host_management.sh@52 -- # local ret=1 00:17:08.116 18:03:57 -- target/host_management.sh@53 -- # local i 00:17:08.116 18:03:57 -- target/host_management.sh@54 -- # (( i = 10 )) 00:17:08.116 18:03:57 -- target/host_management.sh@54 -- # (( i != 0 )) 00:17:08.116 18:03:57 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:17:08.116 18:03:57 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:17:08.116 18:03:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:08.116 18:03:57 -- common/autotest_common.sh@10 -- # set +x 00:17:08.374 18:03:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:08.374 18:03:57 -- target/host_management.sh@55 -- # read_io_count=65 00:17:08.374 18:03:57 -- target/host_management.sh@58 -- # '[' 65 -ge 100 ']' 00:17:08.374 18:03:57 -- target/host_management.sh@62 -- # sleep 0.25 00:17:08.637 18:03:57 -- target/host_management.sh@54 -- # (( i-- )) 00:17:08.637 18:03:57 -- target/host_management.sh@54 -- # (( i != 0 )) 00:17:08.637 18:03:57 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:17:08.637 18:03:57 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:17:08.637 18:03:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:08.637 18:03:57 -- common/autotest_common.sh@10 -- # set +x 00:17:08.637 18:03:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:08.637 18:03:57 -- target/host_management.sh@55 -- # read_io_count=451 00:17:08.637 18:03:57 -- target/host_management.sh@58 -- # '[' 451 -ge 100 ']' 00:17:08.637 18:03:57 -- target/host_management.sh@59 -- # ret=0 00:17:08.637 18:03:57 -- target/host_management.sh@60 -- # break 00:17:08.637 18:03:57 -- target/host_management.sh@64 -- # return 0 00:17:08.637 18:03:57 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:17:08.637 18:03:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:08.637 18:03:57 -- common/autotest_common.sh@10 -- # set +x 00:17:08.637 [2024-04-15 18:03:57.412243] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2128070 is same with the state(5) to be set 00:17:08.637 [2024-04-15 18:03:57.412323] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2128070 is same with the state(5) to be set 00:17:08.637 [2024-04-15 18:03:57.412339] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2128070 is same with the state(5) to be set 00:17:08.637 
[2024-04-15 18:03:57.412353] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2128070 is same with the state(5) to be set
00:17:08.637 (last message repeated with advancing timestamps through 18:03:57.413169)
00:17:08.637 [2024-04-15 18:03:57.413336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.637 [2024-04-15 18:03:57.413394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:08.637 [2024-04-15 18:03:57.413428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:65664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.637 [2024-04-15 18:03:57.413444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:08.638 [2024-04-15 18:03:57.413462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ
sqid:1 cid:2 nsid:1 lba:65792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.638 [2024-04-15 18:03:57.413476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:08.638 [2024-04-15 18:03:57.413492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:65920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.638 [2024-04-15 18:03:57.413512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:08.638 [2024-04-15 18:03:57.413529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:66048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.638 [2024-04-15 18:03:57.413543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:08.638 [2024-04-15 18:03:57.413559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:66176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.638 [2024-04-15 18:03:57.413573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:08.638 [2024-04-15 18:03:57.413589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:66304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.638 [2024-04-15 18:03:57.413603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:08.638 [2024-04-15 18:03:57.413618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:66432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.638 [2024-04-15 18:03:57.413633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:08.638 [2024-04-15 18:03:57.413653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:66560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.638 [2024-04-15 18:03:57.413666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:08.638 [2024-04-15 18:03:57.413682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:66688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.638 [2024-04-15 18:03:57.413696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:08.638 [2024-04-15 18:03:57.413712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:66816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.638 [2024-04-15 18:03:57.413726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:08.638 [2024-04-15 18:03:57.413743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:66944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.638 [2024-04-15 18:03:57.413757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:08.638 [2024-04-15 18:03:57.413773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:67072 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.638 [2024-04-15 18:03:57.413787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:08.638 [2024-04-15 18:03:57.413804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:67200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.638 [2024-04-15 18:03:57.413818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:08.638 [2024-04-15 18:03:57.413834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:67328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.638 [2024-04-15 18:03:57.413849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:08.638 [2024-04-15 18:03:57.413865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:67456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.638 [2024-04-15 18:03:57.413879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:08.638 [2024-04-15 18:03:57.413899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:67584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.638 [2024-04-15 18:03:57.413914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:08.638 [2024-04-15 18:03:57.413929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:67712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.638 [2024-04-15 18:03:57.413943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:08.638 [2024-04-15 18:03:57.413959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:67840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.638 [2024-04-15 18:03:57.413973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:08.638 [2024-04-15 18:03:57.413988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:67968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.638 [2024-04-15 18:03:57.414003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:08.638 [2024-04-15 18:03:57.414019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:68096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.638 [2024-04-15 18:03:57.414034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:08.638 [2024-04-15 18:03:57.414050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:68224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.638 [2024-04-15 18:03:57.414098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:08.638 [2024-04-15 18:03:57.414119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:68352 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:17:08.638 [2024-04-15 18:03:57.414134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:08.638 [2024-04-15 18:03:57.414150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:68480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.638 [2024-04-15 18:03:57.414165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:08.638 [2024-04-15 18:03:57.414182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:68608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.638 [2024-04-15 18:03:57.414196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:08.638 [2024-04-15 18:03:57.414212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:68736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.638 [2024-04-15 18:03:57.414227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:08.638 [2024-04-15 18:03:57.414243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:68864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.638 [2024-04-15 18:03:57.414257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:08.638 [2024-04-15 18:03:57.414274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:68992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.638 [2024-04-15 18:03:57.414288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:08.638 [2024-04-15 18:03:57.414305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:69120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.638 [2024-04-15 18:03:57.414323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:08.638 [2024-04-15 18:03:57.414348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:69248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.638 [2024-04-15 18:03:57.414362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:08.638 [2024-04-15 18:03:57.414395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:69376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.638 [2024-04-15 18:03:57.414409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:08.638 [2024-04-15 18:03:57.414425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:69504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.638 [2024-04-15 18:03:57.414439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:08.638 [2024-04-15 18:03:57.414454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:69632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:08.638 [2024-04-15 18:03:57.414468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:08.638 [2024-04-15 18:03:57.414484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:69760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.638 [2024-04-15 18:03:57.414498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:08.638 [2024-04-15 18:03:57.414514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:69888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.638 [2024-04-15 18:03:57.414528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:08.638 [2024-04-15 18:03:57.414544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:70016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.638 [2024-04-15 18:03:57.414558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:08.638 [2024-04-15 18:03:57.414574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:70144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.638 [2024-04-15 18:03:57.414587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:08.638 [2024-04-15 18:03:57.414603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:70272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.638 [2024-04-15 18:03:57.414623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:08.638 [2024-04-15 18:03:57.414640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:70400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.638 [2024-04-15 18:03:57.414654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:08.638 [2024-04-15 18:03:57.414670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:70528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.638 [2024-04-15 18:03:57.414684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:08.638 [2024-04-15 18:03:57.414699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:70656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.638 [2024-04-15 18:03:57.414713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:08.638 [2024-04-15 18:03:57.414732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:70784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.639 [2024-04-15 18:03:57.414748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:08.639 [2024-04-15 18:03:57.414765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:70912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.639 [2024-04-15 
18:03:57.414779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:08.639 [2024-04-15 18:03:57.414794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:71040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.639 [2024-04-15 18:03:57.414809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:08.639 [2024-04-15 18:03:57.414825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:71168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.639 [2024-04-15 18:03:57.414839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:08.639 [2024-04-15 18:03:57.414855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:71296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.639 [2024-04-15 18:03:57.414869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:08.639 [2024-04-15 18:03:57.414884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:71424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.639 [2024-04-15 18:03:57.414901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:08.639 [2024-04-15 18:03:57.414917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:71552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.639 [2024-04-15 18:03:57.414931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:08.639 [2024-04-15 18:03:57.414947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:71680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.639 [2024-04-15 18:03:57.414962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:08.639 [2024-04-15 18:03:57.414978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:71808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.639 [2024-04-15 18:03:57.414992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:08.639 [2024-04-15 18:03:57.415008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:71936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.639 [2024-04-15 18:03:57.415022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:08.639 [2024-04-15 18:03:57.415037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:72064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.639 [2024-04-15 18:03:57.415052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:08.639 [2024-04-15 18:03:57.415091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:72192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.639 [2024-04-15 18:03:57.415116] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:08.639 [2024-04-15 18:03:57.415133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:72320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.639 [2024-04-15 18:03:57.415157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:08.639 [2024-04-15 18:03:57.415175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:72448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.639 [2024-04-15 18:03:57.415190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:08.639 [2024-04-15 18:03:57.415206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:72576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.639 [2024-04-15 18:03:57.415220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:08.639 [2024-04-15 18:03:57.415236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:72704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.639 [2024-04-15 18:03:57.415250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:08.639 [2024-04-15 18:03:57.415267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:72832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.639 [2024-04-15 18:03:57.415281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:08.639 [2024-04-15 18:03:57.415297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:72960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.639 [2024-04-15 18:03:57.415312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:08.639 [2024-04-15 18:03:57.415338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:73088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.639 [2024-04-15 18:03:57.415354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:08.639 [2024-04-15 18:03:57.415385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:73216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.639 [2024-04-15 18:03:57.415400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:08.639 [2024-04-15 18:03:57.415417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:73344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.639 [2024-04-15 18:03:57.415431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:08.639 [2024-04-15 18:03:57.415447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:73472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.639 [2024-04-15 18:03:57.415461] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:08.639 [2024-04-15 18:03:57.415477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:73600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.639 [2024-04-15 18:03:57.415491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:08.639 [2024-04-15 18:03:57.415507] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9573c0 is same with the state(5) to be set 00:17:08.639 [2024-04-15 18:03:57.415596] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x9573c0 was disconnected and freed. reset controller. 00:17:08.639 18:03:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:08.639 [2024-04-15 18:03:57.416775] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:08.639 18:03:57 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:17:08.639 18:03:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:08.639 18:03:57 -- common/autotest_common.sh@10 -- # set +x 00:17:08.639 task offset: 65536 on job bdev=Nvme0n1 fails 00:17:08.639 00:17:08.639 Latency(us) 00:17:08.639 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:08.639 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:08.639 Job: Nvme0n1 ended in about 0.40 seconds with error 00:17:08.639 Verification LBA range: start 0x0 length 0x400 00:17:08.639 Nvme0n1 : 0.40 1277.51 79.84 159.69 0.00 43302.90 11359.57 35729.26 00:17:08.639 =================================================================================================================== 00:17:08.639 Total : 1277.51 79.84 159.69 0.00 43302.90 11359.57 35729.26 00:17:08.639 [2024-04-15 18:03:57.418989] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:08.639 [2024-04-15 18:03:57.419021] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95d1a0 (9): Bad file descriptor 00:17:08.639 18:03:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:08.639 18:03:57 -- target/host_management.sh@87 -- # sleep 1 00:17:08.639 [2024-04-15 18:03:57.551301] bdev_nvme.c:2050:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:17:09.575 18:03:58 -- target/host_management.sh@91 -- # kill -9 3305246
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3305246) - No such process
00:17:09.575 18:03:58 -- target/host_management.sh@91 -- # true
00:17:09.575 18:03:58 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
00:17:09.575 18:03:58 -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:17:09.575 18:03:58 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
00:17:09.575 18:03:58 -- nvmf/common.sh@521 -- # config=()
00:17:09.575 18:03:58 -- nvmf/common.sh@521 -- # local subsystem config
00:17:09.575 18:03:58 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}"
00:17:09.575 18:03:58 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF
00:17:09.575 {
00:17:09.575   "params": {
00:17:09.575     "name": "Nvme$subsystem",
00:17:09.575     "trtype": "$TEST_TRANSPORT",
00:17:09.576     "traddr": "$NVMF_FIRST_TARGET_IP",
00:17:09.576     "adrfam": "ipv4",
00:17:09.576     "trsvcid": "$NVMF_PORT",
00:17:09.576     "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:17:09.576     "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:17:09.576     "hdgst": ${hdgst:-false},
00:17:09.576     "ddgst": ${ddgst:-false}
00:17:09.576   },
00:17:09.576   "method": "bdev_nvme_attach_controller"
00:17:09.576 }
00:17:09.576 EOF
00:17:09.576 )")
00:17:09.576 18:03:58 -- nvmf/common.sh@543 -- # cat
00:17:09.576 18:03:58 -- nvmf/common.sh@545 -- # jq .
00:17:09.576 18:03:58 -- nvmf/common.sh@546 -- # IFS=,
00:17:09.576 18:03:58 -- nvmf/common.sh@547 -- # printf '%s\n' '{
00:17:09.576   "params": {
00:17:09.576     "name": "Nvme0",
00:17:09.576     "trtype": "tcp",
00:17:09.576     "traddr": "10.0.0.2",
00:17:09.576     "adrfam": "ipv4",
00:17:09.576     "trsvcid": "4420",
00:17:09.576     "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:17:09.576     "hostnqn": "nqn.2016-06.io.spdk:host0",
00:17:09.576     "hdgst": false,
00:17:09.576     "ddgst": false
00:17:09.576   },
00:17:09.576   "method": "bdev_nvme_attach_controller"
00:17:09.576 }'
00:17:09.576 [2024-04-15 18:03:58.478985] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization...
00:17:09.576 [2024-04-15 18:03:58.479109] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3305424 ]
00:17:09.835 EAL: No free 2048 kB hugepages reported on node 1
00:17:09.835 [2024-04-15 18:03:58.557581] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:09.835 [2024-04-15 18:03:58.646121] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:17:10.095 Running I/O for 1 seconds...
00:17:11.474
00:17:11.474                                                                  Latency(us)
00:17:11.474 Device Information                                             : runtime(s)     IOPS   MiB/s   Fail/s   TO/s   Average       min       max
00:17:11.474 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:11.474 Verification LBA range: start 0x0 length 0x400
00:17:11.474 Nvme0n1                                                        :       1.04  1477.25   92.33     0.00   0.00  42660.85   9951.76  36505.98
00:17:11.474 ===================================================================================================================
00:17:11.474 Total                                                          :             1477.25   92.33     0.00   0.00  42660.85   9951.76  36505.98
00:17:11.474 18:04:00 -- target/host_management.sh@101 -- # stoptarget
00:17:11.474 18:04:00 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:17:11.474 18:04:00 -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:17:11.474 18:04:00 -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:17:11.474 18:04:00 -- target/host_management.sh@40 -- # nvmftestfini
00:17:11.474 18:04:00 -- nvmf/common.sh@477 -- # nvmfcleanup
00:17:11.474 18:04:00 -- nvmf/common.sh@117 -- # sync
00:17:11.474 18:04:00 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:17:11.474 18:04:00 -- nvmf/common.sh@120 -- # set +e
00:17:11.474 18:04:00 -- nvmf/common.sh@121 -- # for i in {1..20}
00:17:11.474 18:04:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:17:11.475 18:04:00 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:17:11.475 18:04:00 -- nvmf/common.sh@124 -- # set -e
00:17:11.475 18:04:00 -- nvmf/common.sh@125 -- # return 0
00:17:11.475 18:04:00 -- nvmf/common.sh@478 -- # '[' -n 3305133 ']'
00:17:11.475 18:04:00 -- nvmf/common.sh@479 -- # killprocess 3305133
00:17:11.475 18:04:00 -- common/autotest_common.sh@936 -- # '[' -z 3305133 ']'
00:17:11.475 18:04:00 -- common/autotest_common.sh@940 -- # kill -0 3305133
00:17:11.475 18:04:00 -- common/autotest_common.sh@941 -- # uname
00:17:11.475 18:04:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:17:11.475 18:04:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3305133
00:17:11.475 18:04:00 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:17:11.475 18:04:00 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:17:11.475 18:04:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3305133'
killing process with pid 3305133
00:17:11.475 18:04:00 -- common/autotest_common.sh@955 -- # kill 3305133
00:17:11.475 18:04:00 -- common/autotest_common.sh@960 -- # wait 3305133
00:17:11.733 [2024-04-15 18:04:00.516464] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2
00:17:11.733 18:04:00 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:17:11.733 18:04:00 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
00:17:11.733 18:04:00 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
00:17:11.733 18:04:00 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:17:11.733 18:04:00 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:17:11.733 18:04:00 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:17:11.733 18:04:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:17:11.733 18:04:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:17:13.637 18:04:02 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
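The restarted bdevperf above takes its whole bdev layer from a JSON document generated on the fly (gen_nvmf_target_json 0) and fed through /dev/fd/62. A minimal standalone sketch of the same invocation follows; the "params" object is the one printed in the trace, but the surrounding "subsystems"/"bdev"/"config" envelope is the generic SPDK JSON-config layout rather than something this log shows, and /tmp/bdevperf.json is a hypothetical file name.

    #!/usr/bin/env bash
    # Sketch only: persist the generated config to a file instead of /dev/fd/62.
    # Paths are relative to an SPDK checkout; target address/NQNs match the trace.
    cat > /tmp/bdevperf.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    # Same workload flags as the harness: 64 outstanding 64 KiB I/Os, verify, 1 s.
    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        --json /tmp/bdevperf.json -q 64 -o 65536 -w verify -t 1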
00:17:13.637 00:17:13.637 real 0m6.439s 00:17:13.637 user 0m18.902s 00:17:13.637 sys 0m1.335s 00:17:13.637 18:04:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:13.637 18:04:02 -- common/autotest_common.sh@10 -- # set +x 00:17:13.637 ************************************ 00:17:13.637 END TEST nvmf_host_management 00:17:13.637 ************************************ 00:17:13.897 18:04:02 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:17:13.897 00:17:13.897 real 0m8.920s 00:17:13.897 user 0m19.715s 00:17:13.897 sys 0m3.023s 00:17:13.897 18:04:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:13.897 18:04:02 -- common/autotest_common.sh@10 -- # set +x 00:17:13.897 ************************************ 00:17:13.897 END TEST nvmf_host_management 00:17:13.897 ************************************ 00:17:13.897 18:04:02 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:17:13.897 18:04:02 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:13.897 18:04:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:13.897 18:04:02 -- common/autotest_common.sh@10 -- # set +x 00:17:13.897 ************************************ 00:17:13.897 START TEST nvmf_lvol 00:17:13.897 ************************************ 00:17:13.897 18:04:02 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:17:13.897 * Looking for test storage... 00:17:13.897 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:13.897 18:04:02 -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:13.897 18:04:02 -- nvmf/common.sh@7 -- # uname -s 00:17:13.897 18:04:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:13.897 18:04:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:13.897 18:04:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:13.897 18:04:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:13.897 18:04:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:13.897 18:04:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:13.897 18:04:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:13.897 18:04:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:13.897 18:04:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:13.897 18:04:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:13.897 18:04:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:13.897 18:04:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:17:13.897 18:04:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:13.897 18:04:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:13.897 18:04:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:13.897 18:04:02 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:13.897 18:04:02 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:13.897 18:04:02 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:13.897 18:04:02 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:13.897 18:04:02 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:13.897 18:04:02 -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.897 18:04:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.897 18:04:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.897 18:04:02 -- paths/export.sh@5 -- # export PATH 00:17:13.897 18:04:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.897 18:04:02 -- nvmf/common.sh@47 -- # : 0 00:17:13.897 18:04:02 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:13.897 18:04:02 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:13.897 18:04:02 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:13.897 18:04:02 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:13.897 18:04:02 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:13.897 18:04:02 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:13.897 18:04:02 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:13.897 18:04:02 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:13.897 18:04:02 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:13.897 18:04:02 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:13.897 18:04:02 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:17:13.897 18:04:02 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:17:13.897 18:04:02 -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:13.897 18:04:02 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:17:13.897 18:04:02 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:13.897 18:04:02 -- nvmf/common.sh@435 -- # 
trap nvmftestfini SIGINT SIGTERM EXIT 00:17:13.897 18:04:02 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:13.897 18:04:02 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:13.897 18:04:02 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:13.897 18:04:02 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:13.897 18:04:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:13.897 18:04:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:13.897 18:04:02 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:17:13.897 18:04:02 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:13.897 18:04:02 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:13.897 18:04:02 -- common/autotest_common.sh@10 -- # set +x 00:17:16.429 18:04:05 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:16.429 18:04:05 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:16.429 18:04:05 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:16.430 18:04:05 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:16.430 18:04:05 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:16.430 18:04:05 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:16.430 18:04:05 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:16.430 18:04:05 -- nvmf/common.sh@295 -- # net_devs=() 00:17:16.430 18:04:05 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:16.430 18:04:05 -- nvmf/common.sh@296 -- # e810=() 00:17:16.430 18:04:05 -- nvmf/common.sh@296 -- # local -ga e810 00:17:16.430 18:04:05 -- nvmf/common.sh@297 -- # x722=() 00:17:16.430 18:04:05 -- nvmf/common.sh@297 -- # local -ga x722 00:17:16.430 18:04:05 -- nvmf/common.sh@298 -- # mlx=() 00:17:16.430 18:04:05 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:16.430 18:04:05 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:16.430 18:04:05 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:16.430 18:04:05 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:16.430 18:04:05 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:16.430 18:04:05 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:16.430 18:04:05 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:16.430 18:04:05 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:16.430 18:04:05 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:16.430 18:04:05 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:16.430 18:04:05 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:16.430 18:04:05 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:16.430 18:04:05 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:16.430 18:04:05 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:16.430 18:04:05 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:16.430 18:04:05 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:16.430 18:04:05 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:16.430 18:04:05 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:16.430 18:04:05 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:16.430 18:04:05 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:17:16.430 Found 0000:84:00.0 (0x8086 - 0x159b) 00:17:16.430 18:04:05 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:16.430 18:04:05 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:16.430 
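For context on the scan above: each supported PCI function is mapped to its kernel net devices by globbing sysfs, exactly the pci_net_devs expansion visible in the trace. A compressed sketch of that lookup over the two functions found here (the standalone loop is illustrative and only produces output on a host that actually has these devices):

    #!/usr/bin/env bash
    # Sketch of the sysfs glob nvmf/common.sh uses to find NIC interfaces.
    for pci in 0000:84:00.0 0000:84:00.1; do
        # Every entry under .../net is an interface backed by this PCI function.
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done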
18:04:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:16.430 18:04:05 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:16.430 18:04:05 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:16.430 18:04:05 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:16.430 18:04:05 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:17:16.430 Found 0000:84:00.1 (0x8086 - 0x159b) 00:17:16.430 18:04:05 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:16.430 18:04:05 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:16.430 18:04:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:16.430 18:04:05 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:16.430 18:04:05 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:16.430 18:04:05 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:16.430 18:04:05 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:16.430 18:04:05 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:16.430 18:04:05 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:16.430 18:04:05 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:16.430 18:04:05 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:16.430 18:04:05 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:16.430 18:04:05 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:17:16.430 Found net devices under 0000:84:00.0: cvl_0_0 00:17:16.430 18:04:05 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:16.430 18:04:05 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:16.430 18:04:05 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:16.430 18:04:05 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:16.430 18:04:05 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:16.430 18:04:05 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:17:16.430 Found net devices under 0000:84:00.1: cvl_0_1 00:17:16.430 18:04:05 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:16.430 18:04:05 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:16.430 18:04:05 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:16.430 18:04:05 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:16.430 18:04:05 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:17:16.430 18:04:05 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:17:16.430 18:04:05 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:16.430 18:04:05 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:16.430 18:04:05 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:16.430 18:04:05 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:16.430 18:04:05 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:16.430 18:04:05 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:16.430 18:04:05 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:16.430 18:04:05 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:16.430 18:04:05 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:16.430 18:04:05 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:16.430 18:04:05 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:16.430 18:04:05 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:16.430 18:04:05 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:16.430 18:04:05 -- nvmf/common.sh@254 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1
00:17:16.430 18:04:05 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:17:16.430 18:04:05 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:17:16.430 18:04:05 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:17:16.430 18:04:05 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:17:16.430 18:04:05 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:17:16.688 18:04:05 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:17:16.688 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:17:16.688 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.134 ms
00:17:16.688
00:17:16.689 --- 10.0.0.2 ping statistics ---
00:17:16.689 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:17:16.689 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms
00:17:16.689 18:04:05 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:17:16.689 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:17:16.689 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms
00:17:16.689
00:17:16.689 --- 10.0.0.1 ping statistics ---
00:17:16.689 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:17:16.689 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms
00:17:16.689 18:04:05 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:17:16.689 18:04:05 -- nvmf/common.sh@411 -- # return 0
00:17:16.689 18:04:05 -- nvmf/common.sh@439 -- # '[' '' == iso ']'
00:17:16.689 18:04:05 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:17:16.689 18:04:05 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]]
00:17:16.689 18:04:05 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]]
00:17:16.689 18:04:05 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:17:16.689 18:04:05 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']'
00:17:16.689 18:04:05 -- nvmf/common.sh@463 -- # modprobe nvme-tcp
00:17:16.689 18:04:05 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7
00:17:16.689 18:04:05 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:17:16.689 18:04:05 -- common/autotest_common.sh@710 -- # xtrace_disable
00:17:16.689 18:04:05 -- common/autotest_common.sh@10 -- # set +x
00:17:16.689 18:04:05 -- nvmf/common.sh@470 -- # nvmfpid=3307763
00:17:16.689 18:04:05 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7
00:17:16.689 18:04:05 -- nvmf/common.sh@471 -- # waitforlisten 3307763
00:17:16.689 18:04:05 -- common/autotest_common.sh@817 -- # '[' -z 3307763 ']'
00:17:16.689 18:04:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:16.689 18:04:05 -- common/autotest_common.sh@822 -- # local max_retries=100
00:17:16.689 18:04:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:17:16.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:17:16.689 18:04:05 -- common/autotest_common.sh@826 -- # xtrace_disable
00:17:16.689 18:04:05 -- common/autotest_common.sh@10 -- # set +x
00:17:16.689 [2024-04-15 18:04:05.465491] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization...
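Condensed, the plumbing above builds this test's two-endpoint NVMe/TCP rig: the target-side port cvl_0_0 is moved into a private network namespace and addressed 10.0.0.2, its peer port cvl_0_1 stays in the root namespace as 10.0.0.1, port 4420 is opened in the firewall, and one ping in each direction proves the path before the target starts. A sketch of just those steps, with names and addresses taken from the trace (run as root; assumes the two interfaces already exist):

    #!/usr/bin/env bash
    set -e
    # Target NIC port goes into its own namespace; initiator side stays in the root ns.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Let NVMe/TCP traffic (port 4420) in through the initiator-side interface.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # Sanity-check both directions before launching nvmf_tgt inside the namespace.
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1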
00:17:16.689 [2024-04-15 18:04:05.465577] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:16.689 EAL: No free 2048 kB hugepages reported on node 1 00:17:16.689 [2024-04-15 18:04:05.542157] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:16.689 [2024-04-15 18:04:05.635399] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:16.689 [2024-04-15 18:04:05.635465] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:16.689 [2024-04-15 18:04:05.635482] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:16.689 [2024-04-15 18:04:05.635497] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:16.689 [2024-04-15 18:04:05.635509] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:16.689 [2024-04-15 18:04:05.635603] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:16.689 [2024-04-15 18:04:05.635660] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:16.689 [2024-04-15 18:04:05.635663] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:16.948 18:04:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:16.948 18:04:05 -- common/autotest_common.sh@850 -- # return 0 00:17:16.948 18:04:05 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:16.948 18:04:05 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:16.948 18:04:05 -- common/autotest_common.sh@10 -- # set +x 00:17:16.948 18:04:05 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:16.948 18:04:05 -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:17.516 [2024-04-15 18:04:06.363271] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:17.516 18:04:06 -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:17.774 18:04:06 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:17:17.774 18:04:06 -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:18.340 18:04:07 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:17:18.340 18:04:07 -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:17:18.907 18:04:07 -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:17:19.166 18:04:07 -- target/nvmf_lvol.sh@29 -- # lvs=86b07712-3f1a-4d45-a20e-d71e4a5b83a3 00:17:19.166 18:04:07 -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 86b07712-3f1a-4d45-a20e-d71e4a5b83a3 lvol 20 00:17:19.735 18:04:08 -- target/nvmf_lvol.sh@32 -- # lvol=2a8729b6-ed40-4352-9c08-a584da5f99f2 00:17:19.735 18:04:08 -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:19.993 18:04:08 -- target/nvmf_lvol.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 2a8729b6-ed40-4352-9c08-a584da5f99f2 00:17:20.562 18:04:09 -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:21.183 [2024-04-15 18:04:09.826133] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:21.183 18:04:09 -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:21.441 18:04:10 -- target/nvmf_lvol.sh@42 -- # perf_pid=3308326 00:17:21.441 18:04:10 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:17:21.441 18:04:10 -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:17:21.441 EAL: No free 2048 kB hugepages reported on node 1 00:17:22.376 18:04:11 -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 2a8729b6-ed40-4352-9c08-a584da5f99f2 MY_SNAPSHOT 00:17:22.670 18:04:11 -- target/nvmf_lvol.sh@47 -- # snapshot=090ce24c-c20f-4979-bb9c-d668516d94a2 00:17:22.670 18:04:11 -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 2a8729b6-ed40-4352-9c08-a584da5f99f2 30 00:17:23.237 18:04:12 -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 090ce24c-c20f-4979-bb9c-d668516d94a2 MY_CLONE 00:17:23.494 18:04:12 -- target/nvmf_lvol.sh@49 -- # clone=bf796dbf-3ad4-444b-8f2c-d1e9016b3a4f 00:17:23.494 18:04:12 -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate bf796dbf-3ad4-444b-8f2c-d1e9016b3a4f 00:17:24.429 18:04:13 -- target/nvmf_lvol.sh@53 -- # wait 3308326 00:17:32.549 Initializing NVMe Controllers 00:17:32.549 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:17:32.549 Controller IO queue size 128, less than required. 00:17:32.549 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:32.549 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:17:32.549 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:17:32.549 Initialization complete. Launching workers. 
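Stripped of harness plumbing, the run above provisions the volume under test and then mutates it while spdk_nvme_perf drives I/O: two malloc bdevs striped into raid0, an lvstore on the raid, a 20 MiB lvol exported over TCP, then snapshot, resize to 30 MiB, clone, and inflate. A condensed sketch of that RPC sequence (sizes, names and NQNs are from the trace; capturing the returned UUIDs into shell variables is this sketch's convention, the harness does the same via command substitution):

    #!/usr/bin/env bash
    set -e
    rpc=scripts/rpc.py                        # relative to an SPDK checkout

    # Backing store: two 64 MB / 512 B-block malloc bdevs striped into raid0.
    $rpc bdev_malloc_create 64 512            # -> Malloc0
    $rpc bdev_malloc_create 64 512            # -> Malloc1
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'

    # Logical volume: a 20 MiB lvol on an lvstore carved out of the raid.
    lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)

    # Export it to the initiator over NVMe/TCP.
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

    # While I/O is in flight: snapshot, grow to 30 MiB, clone, inflate the clone.
    snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
    $rpc bdev_lvol_resize "$lvol" 30
    clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
    $rpc bdev_lvol_inflate "$clone"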
00:17:32.549 ========================================================
00:17:32.549                                                           Latency(us)
00:17:32.549 Device Information                                      :       IOPS      MiB/s    Average        min        max
00:17:32.549 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3:   10366.30      40.49   12348.47    2024.56   96549.45
00:17:32.549 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4:   10322.20      40.32   12407.92    2081.84   66809.57
00:17:32.549 ========================================================
00:17:32.549 Total                                                   :   20688.50      80.81   12378.13    2024.56   96549.45
00:17:32.549
00:17:32.549 18:04:20 -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:17:32.809 18:04:21 -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 2a8729b6-ed40-4352-9c08-a584da5f99f2
00:17:33.377 18:04:22 -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 86b07712-3f1a-4d45-a20e-d71e4a5b83a3
00:17:33.377 18:04:22 -- target/nvmf_lvol.sh@60 -- # rm -f
00:17:33.377 18:04:22 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:17:33.377 18:04:22 -- target/nvmf_lvol.sh@64 -- # nvmftestfini
00:17:33.377 18:04:22 -- nvmf/common.sh@477 -- # nvmfcleanup
00:17:33.377 18:04:22 -- nvmf/common.sh@117 -- # sync
00:17:33.377 18:04:22 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:17:33.378 18:04:22 -- nvmf/common.sh@120 -- # set +e
00:17:33.378 18:04:22 -- nvmf/common.sh@121 -- # for i in {1..20}
00:17:33.378 18:04:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:17:33.378 18:04:22 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:17:33.378 18:04:22 -- nvmf/common.sh@124 -- # set -e
00:17:33.378 18:04:22 -- nvmf/common.sh@125 -- # return 0
00:17:33.378 18:04:22 -- nvmf/common.sh@478 -- # '[' -n 3307763 ']'
00:17:33.378 18:04:22 -- nvmf/common.sh@479 -- # killprocess 3307763
00:17:33.378 18:04:22 -- common/autotest_common.sh@936 -- # '[' -z 3307763 ']'
00:17:33.378 18:04:22 -- common/autotest_common.sh@940 -- # kill -0 3307763
00:17:33.378 18:04:22 -- common/autotest_common.sh@941 -- # uname
00:17:33.378 18:04:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:17:33.378 18:04:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3307763
00:17:33.378 18:04:22 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:17:33.378 18:04:22 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:17:33.378 18:04:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3307763'
killing process with pid 3307763
00:17:33.378 18:04:22 -- common/autotest_common.sh@955 -- # kill 3307763
00:17:33.378 18:04:22 -- common/autotest_common.sh@960 -- # wait 3307763
00:17:33.636 18:04:22 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:17:33.636 18:04:22 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
00:17:33.636 18:04:22 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
00:17:33.636 18:04:22 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:17:33.636 18:04:22 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:17:33.636 18:04:22 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:17:33.636 18:04:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:17:33.636 18:04:22 -- common/autotest_common.sh@22 -- #
_remove_spdk_ns 00:17:36.177 18:04:24 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:36.177 00:17:36.177 real 0m21.830s 00:17:36.177 user 1m14.644s 00:17:36.177 sys 0m6.625s 00:17:36.177 18:04:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:36.177 18:04:24 -- common/autotest_common.sh@10 -- # set +x 00:17:36.177 ************************************ 00:17:36.177 END TEST nvmf_lvol 00:17:36.177 ************************************ 00:17:36.177 18:04:24 -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:17:36.177 18:04:24 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:36.177 18:04:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:36.177 18:04:24 -- common/autotest_common.sh@10 -- # set +x 00:17:36.177 ************************************ 00:17:36.177 START TEST nvmf_lvs_grow 00:17:36.177 ************************************ 00:17:36.177 18:04:24 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:17:36.177 * Looking for test storage... 00:17:36.177 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:36.177 18:04:24 -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:36.177 18:04:24 -- nvmf/common.sh@7 -- # uname -s 00:17:36.177 18:04:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:36.177 18:04:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:36.177 18:04:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:36.177 18:04:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:36.177 18:04:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:36.177 18:04:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:36.177 18:04:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:36.177 18:04:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:36.177 18:04:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:36.178 18:04:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:36.178 18:04:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:36.178 18:04:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:17:36.178 18:04:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:36.178 18:04:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:36.178 18:04:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:36.178 18:04:24 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:36.178 18:04:24 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:36.178 18:04:24 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:36.178 18:04:24 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:36.178 18:04:24 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:36.178 18:04:24 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.178 18:04:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.178 18:04:24 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.178 18:04:24 -- paths/export.sh@5 -- # export PATH 00:17:36.178 18:04:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.178 18:04:24 -- nvmf/common.sh@47 -- # : 0 00:17:36.178 18:04:24 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:36.178 18:04:24 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:36.178 18:04:24 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:36.178 18:04:24 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:36.178 18:04:24 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:36.178 18:04:24 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:36.178 18:04:24 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:36.178 18:04:24 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:36.178 18:04:24 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:36.178 18:04:24 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:36.178 18:04:24 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:17:36.178 18:04:24 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:36.178 18:04:24 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:36.178 18:04:24 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:36.178 18:04:24 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:36.178 18:04:24 -- nvmf/common.sh@401 -- # 
remove_spdk_ns 00:17:36.178 18:04:24 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:36.178 18:04:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:36.178 18:04:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:36.178 18:04:24 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:17:36.178 18:04:24 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:36.178 18:04:24 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:36.178 18:04:24 -- common/autotest_common.sh@10 -- # set +x 00:17:38.716 18:04:27 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:38.716 18:04:27 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:38.716 18:04:27 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:38.716 18:04:27 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:38.716 18:04:27 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:38.716 18:04:27 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:38.716 18:04:27 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:38.716 18:04:27 -- nvmf/common.sh@295 -- # net_devs=() 00:17:38.716 18:04:27 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:38.716 18:04:27 -- nvmf/common.sh@296 -- # e810=() 00:17:38.716 18:04:27 -- nvmf/common.sh@296 -- # local -ga e810 00:17:38.716 18:04:27 -- nvmf/common.sh@297 -- # x722=() 00:17:38.716 18:04:27 -- nvmf/common.sh@297 -- # local -ga x722 00:17:38.716 18:04:27 -- nvmf/common.sh@298 -- # mlx=() 00:17:38.716 18:04:27 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:38.716 18:04:27 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:38.716 18:04:27 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:38.717 18:04:27 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:38.717 18:04:27 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:38.717 18:04:27 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:38.717 18:04:27 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:38.717 18:04:27 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:38.717 18:04:27 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:38.717 18:04:27 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:38.717 18:04:27 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:38.717 18:04:27 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:38.717 18:04:27 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:38.717 18:04:27 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:38.717 18:04:27 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:38.717 18:04:27 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:38.717 18:04:27 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:38.717 18:04:27 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:38.717 18:04:27 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:38.717 18:04:27 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:17:38.717 Found 0000:84:00.0 (0x8086 - 0x159b) 00:17:38.717 18:04:27 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:38.717 18:04:27 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:38.717 18:04:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:38.717 18:04:27 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:38.717 18:04:27 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:38.717 
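The "Found net devices under ..." lines just below are produced by a short discovery loop in nvmf/common.sh; reconstructed from the trace, it is approximately:

  for pci in "${pci_devs[@]}"; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)    # sysfs entries for ifaces on this PCI function
    (( ${#pci_net_devs[@]} == 0 )) && continue          # skip functions with no bound netdev
    pci_net_devs=("${pci_net_devs[@]##*/}")             # strip the sysfs path, keep e.g. cvl_0_0
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
  done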
18:04:27 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:38.717 18:04:27 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:17:38.717 Found 0000:84:00.1 (0x8086 - 0x159b) 00:17:38.717 18:04:27 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:38.717 18:04:27 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:38.717 18:04:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:38.717 18:04:27 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:38.717 18:04:27 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:38.717 18:04:27 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:38.717 18:04:27 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:38.717 18:04:27 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:38.717 18:04:27 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:38.717 18:04:27 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:38.717 18:04:27 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:38.717 18:04:27 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:38.717 18:04:27 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:17:38.717 Found net devices under 0000:84:00.0: cvl_0_0 00:17:38.717 18:04:27 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:38.717 18:04:27 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:38.717 18:04:27 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:38.717 18:04:27 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:38.717 18:04:27 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:38.717 18:04:27 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:17:38.717 Found net devices under 0000:84:00.1: cvl_0_1 00:17:38.717 18:04:27 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:38.717 18:04:27 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:38.717 18:04:27 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:38.717 18:04:27 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:38.717 18:04:27 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:17:38.717 18:04:27 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:17:38.717 18:04:27 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:38.717 18:04:27 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:38.717 18:04:27 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:38.717 18:04:27 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:38.717 18:04:27 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:38.717 18:04:27 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:38.717 18:04:27 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:38.717 18:04:27 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:38.717 18:04:27 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:38.717 18:04:27 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:38.717 18:04:27 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:38.717 18:04:27 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:38.717 18:04:27 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:38.717 18:04:27 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:38.717 18:04:27 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:38.717 18:04:27 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:38.717 
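Pulled together, the nvmf_tcp_init plumbing traced here (the first commands just above, the rest immediately below) builds a target-side network namespace and an initiator-side peer on the second port:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target NIC moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                                   # initiator -> target sanity check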
18:04:27 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:38.717 18:04:27 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:38.717 18:04:27 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:38.717 18:04:27 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:38.717 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:38.717 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.232 ms 00:17:38.717 00:17:38.717 --- 10.0.0.2 ping statistics --- 00:17:38.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:38.717 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:17:38.717 18:04:27 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:38.717 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:38.717 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:17:38.717 00:17:38.717 --- 10.0.0.1 ping statistics --- 00:17:38.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:38.717 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:17:38.717 18:04:27 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:38.717 18:04:27 -- nvmf/common.sh@411 -- # return 0 00:17:38.717 18:04:27 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:38.717 18:04:27 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:38.717 18:04:27 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:38.717 18:04:27 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:38.717 18:04:27 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:38.717 18:04:27 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:38.717 18:04:27 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:38.717 18:04:27 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:17:38.717 18:04:27 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:38.717 18:04:27 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:38.717 18:04:27 -- common/autotest_common.sh@10 -- # set +x 00:17:38.717 18:04:27 -- nvmf/common.sh@470 -- # nvmfpid=3311726 00:17:38.717 18:04:27 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:38.717 18:04:27 -- nvmf/common.sh@471 -- # waitforlisten 3311726 00:17:38.717 18:04:27 -- common/autotest_common.sh@817 -- # '[' -z 3311726 ']' 00:17:38.717 18:04:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:38.717 18:04:27 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:38.717 18:04:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:38.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:38.717 18:04:27 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:38.717 18:04:27 -- common/autotest_common.sh@10 -- # set +x 00:17:38.717 [2024-04-15 18:04:27.369819] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
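nvmfappstart, traced above, launches nvmf_tgt inside that namespace and blocks until the RPC socket answers; a rough sketch (the polling loop is an assumption about waitforlisten's behaviour, not copied from this log):

  ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  nvmfpid=$!
  until $rpc -t 1 -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5    # assumed interval; waitforlisten's real implementation may differ
  done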
00:17:38.717 [2024-04-15 18:04:27.369916] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:38.717 EAL: No free 2048 kB hugepages reported on node 1 00:17:38.717 [2024-04-15 18:04:27.447715] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:38.717 [2024-04-15 18:04:27.544613] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:38.717 [2024-04-15 18:04:27.544685] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:38.717 [2024-04-15 18:04:27.544704] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:38.717 [2024-04-15 18:04:27.544719] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:38.717 [2024-04-15 18:04:27.544732] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:38.717 [2024-04-15 18:04:27.544777] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:38.977 18:04:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:38.977 18:04:27 -- common/autotest_common.sh@850 -- # return 0 00:17:38.977 18:04:27 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:38.977 18:04:27 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:38.977 18:04:27 -- common/autotest_common.sh@10 -- # set +x 00:17:38.977 18:04:27 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:38.977 18:04:27 -- target/nvmf_lvs_grow.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:39.236 [2024-04-15 18:04:28.083555] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:39.236 18:04:28 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:17:39.236 18:04:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:39.236 18:04:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:39.236 18:04:28 -- common/autotest_common.sh@10 -- # set +x 00:17:39.494 ************************************ 00:17:39.494 START TEST lvs_grow_clean 00:17:39.494 ************************************ 00:17:39.494 18:04:28 -- common/autotest_common.sh@1111 -- # lvs_grow 00:17:39.494 18:04:28 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:17:39.494 18:04:28 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:17:39.494 18:04:28 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:17:39.494 18:04:28 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:17:39.494 18:04:28 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:17:39.494 18:04:28 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:17:39.494 18:04:28 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:39.494 18:04:28 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:39.494 18:04:28 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:39.752 18:04:28 -- 
target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:17:39.752 18:04:28 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:17:40.012 18:04:28 -- target/nvmf_lvs_grow.sh@28 -- # lvs=8f05c52f-9fa6-4f07-838b-0c039792cea5 00:17:40.012 18:04:28 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8f05c52f-9fa6-4f07-838b-0c039792cea5 00:17:40.012 18:04:28 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:17:40.272 18:04:29 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:17:40.272 18:04:29 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:17:40.272 18:04:29 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 8f05c52f-9fa6-4f07-838b-0c039792cea5 lvol 150 00:17:40.531 18:04:29 -- target/nvmf_lvs_grow.sh@33 -- # lvol=321824a2-b892-48cd-8f02-04f7e036c139 00:17:40.531 18:04:29 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:40.531 18:04:29 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:17:40.790 [2024-04-15 18:04:29.639671] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:17:40.790 [2024-04-15 18:04:29.639769] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:40.790 true 00:17:40.790 18:04:29 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8f05c52f-9fa6-4f07-838b-0c039792cea5 00:17:40.790 18:04:29 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:17:41.049 18:04:29 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:17:41.049 18:04:29 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:41.686 18:04:30 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 321824a2-b892-48cd-8f02-04f7e036c139 00:17:41.945 18:04:30 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:42.205 [2024-04-15 18:04:30.907527] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:42.205 18:04:30 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:42.464 18:04:31 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3312182 00:17:42.464 18:04:31 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:17:42.464 18:04:31 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:42.464 18:04:31 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3312182 
/var/tmp/bdevperf.sock 00:17:42.464 18:04:31 -- common/autotest_common.sh@817 -- # '[' -z 3312182 ']' 00:17:42.464 18:04:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:42.464 18:04:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:42.464 18:04:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:42.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:42.464 18:04:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:42.464 18:04:31 -- common/autotest_common.sh@10 -- # set +x 00:17:42.464 [2024-04-15 18:04:31.248199] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:17:42.464 [2024-04-15 18:04:31.248288] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3312182 ] 00:17:42.464 EAL: No free 2048 kB hugepages reported on node 1 00:17:42.464 [2024-04-15 18:04:31.316269] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:42.464 [2024-04-15 18:04:31.407422] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:42.722 18:04:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:42.722 18:04:31 -- common/autotest_common.sh@850 -- # return 0 00:17:42.722 18:04:31 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:17:43.288 Nvme0n1 00:17:43.289 18:04:31 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:17:43.547 [ 00:17:43.547 { 00:17:43.547 "name": "Nvme0n1", 00:17:43.547 "aliases": [ 00:17:43.547 "321824a2-b892-48cd-8f02-04f7e036c139" 00:17:43.547 ], 00:17:43.547 "product_name": "NVMe disk", 00:17:43.547 "block_size": 4096, 00:17:43.547 "num_blocks": 38912, 00:17:43.547 "uuid": "321824a2-b892-48cd-8f02-04f7e036c139", 00:17:43.547 "assigned_rate_limits": { 00:17:43.547 "rw_ios_per_sec": 0, 00:17:43.547 "rw_mbytes_per_sec": 0, 00:17:43.547 "r_mbytes_per_sec": 0, 00:17:43.547 "w_mbytes_per_sec": 0 00:17:43.547 }, 00:17:43.547 "claimed": false, 00:17:43.547 "zoned": false, 00:17:43.547 "supported_io_types": { 00:17:43.547 "read": true, 00:17:43.547 "write": true, 00:17:43.547 "unmap": true, 00:17:43.547 "write_zeroes": true, 00:17:43.547 "flush": true, 00:17:43.547 "reset": true, 00:17:43.547 "compare": true, 00:17:43.547 "compare_and_write": true, 00:17:43.547 "abort": true, 00:17:43.547 "nvme_admin": true, 00:17:43.547 "nvme_io": true 00:17:43.547 }, 00:17:43.547 "memory_domains": [ 00:17:43.547 { 00:17:43.547 "dma_device_id": "system", 00:17:43.547 "dma_device_type": 1 00:17:43.547 } 00:17:43.547 ], 00:17:43.547 "driver_specific": { 00:17:43.547 "nvme": [ 00:17:43.547 { 00:17:43.547 "trid": { 00:17:43.547 "trtype": "TCP", 00:17:43.547 "adrfam": "IPv4", 00:17:43.547 "traddr": "10.0.0.2", 00:17:43.547 "trsvcid": "4420", 00:17:43.547 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:43.547 }, 00:17:43.547 "ctrlr_data": { 00:17:43.547 "cntlid": 1, 00:17:43.547 "vendor_id": "0x8086", 00:17:43.547 "model_number": "SPDK bdev Controller", 00:17:43.547 "serial_number": "SPDK0", 
00:17:43.547 "firmware_revision": "24.05", 00:17:43.547 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:43.547 "oacs": { 00:17:43.547 "security": 0, 00:17:43.547 "format": 0, 00:17:43.547 "firmware": 0, 00:17:43.547 "ns_manage": 0 00:17:43.547 }, 00:17:43.547 "multi_ctrlr": true, 00:17:43.547 "ana_reporting": false 00:17:43.547 }, 00:17:43.547 "vs": { 00:17:43.547 "nvme_version": "1.3" 00:17:43.547 }, 00:17:43.547 "ns_data": { 00:17:43.547 "id": 1, 00:17:43.547 "can_share": true 00:17:43.547 } 00:17:43.547 } 00:17:43.547 ], 00:17:43.547 "mp_policy": "active_passive" 00:17:43.547 } 00:17:43.547 } 00:17:43.547 ] 00:17:43.547 18:04:32 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3312316 00:17:43.547 18:04:32 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:17:43.547 18:04:32 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:43.547 Running I/O for 10 seconds... 00:17:44.482 Latency(us) 00:17:44.482 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:44.482 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:44.482 Nvme0n1 : 1.00 12917.00 50.46 0.00 0.00 0.00 0.00 0.00 00:17:44.482 =================================================================================================================== 00:17:44.482 Total : 12917.00 50.46 0.00 0.00 0.00 0.00 0.00 00:17:44.482 00:17:45.417 18:04:34 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 8f05c52f-9fa6-4f07-838b-0c039792cea5 00:17:45.675 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:45.675 Nvme0n1 : 2.00 13062.50 51.03 0.00 0.00 0.00 0.00 0.00 00:17:45.675 =================================================================================================================== 00:17:45.675 Total : 13062.50 51.03 0.00 0.00 0.00 0.00 0.00 00:17:45.675 00:17:45.933 true 00:17:45.933 18:04:34 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8f05c52f-9fa6-4f07-838b-0c039792cea5 00:17:45.933 18:04:34 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:17:46.191 18:04:35 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:17:46.191 18:04:35 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:17:46.191 18:04:35 -- target/nvmf_lvs_grow.sh@65 -- # wait 3312316 00:17:46.758 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:46.758 Nvme0n1 : 3.00 13177.67 51.48 0.00 0.00 0.00 0.00 0.00 00:17:46.758 =================================================================================================================== 00:17:46.758 Total : 13177.67 51.48 0.00 0.00 0.00 0.00 0.00 00:17:46.758 00:17:47.692 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:47.692 Nvme0n1 : 4.00 13245.25 51.74 0.00 0.00 0.00 0.00 0.00 00:17:47.692 =================================================================================================================== 00:17:47.692 Total : 13245.25 51.74 0.00 0.00 0.00 0.00 0.00 00:17:47.692 00:17:48.626 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:48.626 Nvme0n1 : 5.00 13277.80 51.87 0.00 0.00 0.00 0.00 0.00 00:17:48.626 =================================================================================================================== 00:17:48.626 Total : 
13277.80 51.87 0.00 0.00 0.00 0.00 0.00 00:17:48.626 00:17:49.562 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:49.562 Nvme0n1 : 6.00 13314.17 52.01 0.00 0.00 0.00 0.00 0.00 00:17:49.562 =================================================================================================================== 00:17:49.562 Total : 13314.17 52.01 0.00 0.00 0.00 0.00 0.00 00:17:49.562 00:17:50.497 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:50.497 Nvme0n1 : 7.00 13342.43 52.12 0.00 0.00 0.00 0.00 0.00 00:17:50.497 =================================================================================================================== 00:17:50.497 Total : 13342.43 52.12 0.00 0.00 0.00 0.00 0.00 00:17:50.497 00:17:51.872 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:51.872 Nvme0n1 : 8.00 13364.62 52.21 0.00 0.00 0.00 0.00 0.00 00:17:51.872 =================================================================================================================== 00:17:51.872 Total : 13364.62 52.21 0.00 0.00 0.00 0.00 0.00 00:17:51.872 00:17:52.808 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:52.808 Nvme0n1 : 9.00 13390.78 52.31 0.00 0.00 0.00 0.00 0.00 00:17:52.808 =================================================================================================================== 00:17:52.808 Total : 13390.78 52.31 0.00 0.00 0.00 0.00 0.00 00:17:52.808 00:17:53.771 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:53.771 Nvme0n1 : 10.00 13422.90 52.43 0.00 0.00 0.00 0.00 0.00 00:17:53.771 =================================================================================================================== 00:17:53.771 Total : 13422.90 52.43 0.00 0.00 0.00 0.00 0.00 00:17:53.771 00:17:53.771 00:17:53.771 Latency(us) 00:17:53.771 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:53.771 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:53.771 Nvme0n1 : 10.01 13423.21 52.43 0.00 0.00 9527.48 7767.23 18738.44 00:17:53.771 =================================================================================================================== 00:17:53.771 Total : 13423.21 52.43 0.00 0.00 9527.48 7767.23 18738.44 00:17:53.771 0 00:17:53.771 18:04:42 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3312182 00:17:53.771 18:04:42 -- common/autotest_common.sh@936 -- # '[' -z 3312182 ']' 00:17:53.771 18:04:42 -- common/autotest_common.sh@940 -- # kill -0 3312182 00:17:53.771 18:04:42 -- common/autotest_common.sh@941 -- # uname 00:17:53.771 18:04:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:53.771 18:04:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3312182 00:17:53.771 18:04:42 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:53.771 18:04:42 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:53.771 18:04:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3312182' 00:17:53.771 killing process with pid 3312182 00:17:53.771 18:04:42 -- common/autotest_common.sh@955 -- # kill 3312182 00:17:53.771 Received shutdown signal, test time was about 10.000000 seconds 00:17:53.771 00:17:53.771 Latency(us) 00:17:53.771 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:53.771 =================================================================================================================== 
00:17:53.771 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:53.771 18:04:42 -- common/autotest_common.sh@960 -- # wait 3312182 00:17:54.030 18:04:42 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:54.288 18:04:43 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8f05c52f-9fa6-4f07-838b-0c039792cea5 00:17:54.288 18:04:43 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:17:54.546 18:04:43 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:17:54.546 18:04:43 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:17:54.546 18:04:43 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:54.805 [2024-04-15 18:04:43.726362] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:17:54.805 18:04:43 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8f05c52f-9fa6-4f07-838b-0c039792cea5 00:17:54.805 18:04:43 -- common/autotest_common.sh@638 -- # local es=0 00:17:54.805 18:04:43 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8f05c52f-9fa6-4f07-838b-0c039792cea5 00:17:54.805 18:04:43 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:54.805 18:04:43 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:54.805 18:04:43 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:54.805 18:04:43 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:54.805 18:04:43 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:54.805 18:04:43 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:54.805 18:04:43 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:54.805 18:04:43 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:54.805 18:04:43 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8f05c52f-9fa6-4f07-838b-0c039792cea5 00:17:55.374 request: 00:17:55.374 { 00:17:55.374 "uuid": "8f05c52f-9fa6-4f07-838b-0c039792cea5", 00:17:55.374 "method": "bdev_lvol_get_lvstores", 00:17:55.374 "req_id": 1 00:17:55.374 } 00:17:55.374 Got JSON-RPC error response 00:17:55.374 response: 00:17:55.374 { 00:17:55.374 "code": -19, 00:17:55.374 "message": "No such device" 00:17:55.374 } 00:17:55.374 18:04:44 -- common/autotest_common.sh@641 -- # es=1 00:17:55.374 18:04:44 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:17:55.374 18:04:44 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:17:55.374 18:04:44 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:17:55.374 18:04:44 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:55.374 aio_bdev 00:17:55.634 18:04:44 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 
321824a2-b892-48cd-8f02-04f7e036c139 00:17:55.634 18:04:44 -- common/autotest_common.sh@885 -- # local bdev_name=321824a2-b892-48cd-8f02-04f7e036c139 00:17:55.634 18:04:44 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:17:55.634 18:04:44 -- common/autotest_common.sh@887 -- # local i 00:17:55.634 18:04:44 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:17:55.634 18:04:44 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:17:55.634 18:04:44 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:55.930 18:04:44 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 321824a2-b892-48cd-8f02-04f7e036c139 -t 2000 00:17:56.189 [ 00:17:56.189 { 00:17:56.189 "name": "321824a2-b892-48cd-8f02-04f7e036c139", 00:17:56.189 "aliases": [ 00:17:56.189 "lvs/lvol" 00:17:56.189 ], 00:17:56.189 "product_name": "Logical Volume", 00:17:56.189 "block_size": 4096, 00:17:56.189 "num_blocks": 38912, 00:17:56.189 "uuid": "321824a2-b892-48cd-8f02-04f7e036c139", 00:17:56.189 "assigned_rate_limits": { 00:17:56.189 "rw_ios_per_sec": 0, 00:17:56.189 "rw_mbytes_per_sec": 0, 00:17:56.189 "r_mbytes_per_sec": 0, 00:17:56.189 "w_mbytes_per_sec": 0 00:17:56.189 }, 00:17:56.189 "claimed": false, 00:17:56.189 "zoned": false, 00:17:56.189 "supported_io_types": { 00:17:56.189 "read": true, 00:17:56.189 "write": true, 00:17:56.189 "unmap": true, 00:17:56.189 "write_zeroes": true, 00:17:56.189 "flush": false, 00:17:56.189 "reset": true, 00:17:56.189 "compare": false, 00:17:56.189 "compare_and_write": false, 00:17:56.189 "abort": false, 00:17:56.189 "nvme_admin": false, 00:17:56.189 "nvme_io": false 00:17:56.189 }, 00:17:56.189 "driver_specific": { 00:17:56.189 "lvol": { 00:17:56.189 "lvol_store_uuid": "8f05c52f-9fa6-4f07-838b-0c039792cea5", 00:17:56.189 "base_bdev": "aio_bdev", 00:17:56.189 "thin_provision": false, 00:17:56.189 "snapshot": false, 00:17:56.189 "clone": false, 00:17:56.189 "esnap_clone": false 00:17:56.189 } 00:17:56.189 } 00:17:56.189 } 00:17:56.189 ] 00:17:56.189 18:04:44 -- common/autotest_common.sh@893 -- # return 0 00:17:56.189 18:04:44 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8f05c52f-9fa6-4f07-838b-0c039792cea5 00:17:56.189 18:04:44 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:17:56.447 18:04:45 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:17:56.447 18:04:45 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8f05c52f-9fa6-4f07-838b-0c039792cea5 00:17:56.447 18:04:45 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:17:56.704 18:04:45 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:17:56.704 18:04:45 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 321824a2-b892-48cd-8f02-04f7e036c139 00:17:56.962 18:04:45 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8f05c52f-9fa6-4f07-838b-0c039792cea5 00:17:57.221 18:04:46 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:57.480 18:04:46 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 
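End to end, the grow flow that lvs_grow_clean just verified condenses to the calls below; the sizes, cluster counts, and aio file path are the ones from this run:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  aio=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
  truncate -s 200M "$aio"
  $rpc bdev_aio_create "$aio" aio_bdev 4096
  lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
  $rpc bdev_lvol_create -u "$lvs" lvol 150       # 150 MiB lvol; the store starts at 49 data clusters
  truncate -s 400M "$aio"                        # grow the backing file
  $rpc bdev_aio_rescan aio_bdev                  # aio bdev: 51200 -> 102400 blocks
  $rpc bdev_lvol_grow_lvstore -u "$lvs"          # lvstore: 49 -> 99 total_data_clusters
  $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'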
00:17:57.480 00:17:57.480 real 0m18.178s 00:17:57.480 user 0m17.794s 00:17:57.480 sys 0m2.101s 00:17:57.480 18:04:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:57.480 18:04:46 -- common/autotest_common.sh@10 -- # set +x 00:17:57.480 ************************************ 00:17:57.480 END TEST lvs_grow_clean 00:17:57.480 ************************************ 00:17:57.480 18:04:46 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:17:57.480 18:04:46 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:57.480 18:04:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:57.480 18:04:46 -- common/autotest_common.sh@10 -- # set +x 00:17:57.739 ************************************ 00:17:57.739 START TEST lvs_grow_dirty 00:17:57.739 ************************************ 00:17:57.739 18:04:46 -- common/autotest_common.sh@1111 -- # lvs_grow dirty 00:17:57.739 18:04:46 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:17:57.739 18:04:46 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:17:57.739 18:04:46 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:17:57.739 18:04:46 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:17:57.739 18:04:46 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:17:57.739 18:04:46 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:17:57.739 18:04:46 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:57.739 18:04:46 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:57.739 18:04:46 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:57.997 18:04:46 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:17:57.997 18:04:46 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:17:58.255 18:04:47 -- target/nvmf_lvs_grow.sh@28 -- # lvs=58096651-048a-42ce-a203-fa0ae89fc277 00:17:58.255 18:04:47 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 58096651-048a-42ce-a203-fa0ae89fc277 00:17:58.255 18:04:47 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:17:58.513 18:04:47 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:17:58.513 18:04:47 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:17:58.513 18:04:47 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 58096651-048a-42ce-a203-fa0ae89fc277 lvol 150 00:17:59.080 18:04:47 -- target/nvmf_lvs_grow.sh@33 -- # lvol=0d1221bd-20ed-4f26-8ba1-e1e1f6966283 00:17:59.080 18:04:47 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:59.080 18:04:47 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:17:59.080 [2024-04-15 18:04:48.016717] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 
51200, new block count 102400 00:17:59.080 [2024-04-15 18:04:48.016820] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:59.080 true 00:17:59.339 18:04:48 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 58096651-048a-42ce-a203-fa0ae89fc277 00:17:59.339 18:04:48 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:17:59.597 18:04:48 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:17:59.597 18:04:48 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:59.856 18:04:48 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 0d1221bd-20ed-4f26-8ba1-e1e1f6966283 00:18:00.426 18:04:49 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:00.686 18:04:49 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:00.945 18:04:49 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3314359 00:18:00.945 18:04:49 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:18:00.945 18:04:49 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:00.945 18:04:49 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3314359 /var/tmp/bdevperf.sock 00:18:00.945 18:04:49 -- common/autotest_common.sh@817 -- # '[' -z 3314359 ']' 00:18:00.945 18:04:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:00.945 18:04:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:00.945 18:04:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:00.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:00.945 18:04:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:00.945 18:04:49 -- common/autotest_common.sh@10 -- # set +x 00:18:00.945 [2024-04-15 18:04:49.725774] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
00:18:00.945 [2024-04-15 18:04:49.725869] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3314359 ] 00:18:00.945 EAL: No free 2048 kB hugepages reported on node 1 00:18:00.945 [2024-04-15 18:04:49.795814] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:00.945 [2024-04-15 18:04:49.886644] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:01.203 18:04:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:01.203 18:04:49 -- common/autotest_common.sh@850 -- # return 0 00:18:01.203 18:04:49 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:18:01.769 Nvme0n1 00:18:01.769 18:04:50 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:18:02.029 [ 00:18:02.029 { 00:18:02.029 "name": "Nvme0n1", 00:18:02.029 "aliases": [ 00:18:02.029 "0d1221bd-20ed-4f26-8ba1-e1e1f6966283" 00:18:02.029 ], 00:18:02.029 "product_name": "NVMe disk", 00:18:02.029 "block_size": 4096, 00:18:02.029 "num_blocks": 38912, 00:18:02.029 "uuid": "0d1221bd-20ed-4f26-8ba1-e1e1f6966283", 00:18:02.029 "assigned_rate_limits": { 00:18:02.029 "rw_ios_per_sec": 0, 00:18:02.029 "rw_mbytes_per_sec": 0, 00:18:02.029 "r_mbytes_per_sec": 0, 00:18:02.029 "w_mbytes_per_sec": 0 00:18:02.029 }, 00:18:02.029 "claimed": false, 00:18:02.029 "zoned": false, 00:18:02.029 "supported_io_types": { 00:18:02.029 "read": true, 00:18:02.029 "write": true, 00:18:02.029 "unmap": true, 00:18:02.029 "write_zeroes": true, 00:18:02.029 "flush": true, 00:18:02.029 "reset": true, 00:18:02.029 "compare": true, 00:18:02.029 "compare_and_write": true, 00:18:02.029 "abort": true, 00:18:02.029 "nvme_admin": true, 00:18:02.029 "nvme_io": true 00:18:02.029 }, 00:18:02.029 "memory_domains": [ 00:18:02.029 { 00:18:02.029 "dma_device_id": "system", 00:18:02.029 "dma_device_type": 1 00:18:02.029 } 00:18:02.029 ], 00:18:02.029 "driver_specific": { 00:18:02.029 "nvme": [ 00:18:02.029 { 00:18:02.029 "trid": { 00:18:02.029 "trtype": "TCP", 00:18:02.029 "adrfam": "IPv4", 00:18:02.029 "traddr": "10.0.0.2", 00:18:02.029 "trsvcid": "4420", 00:18:02.029 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:18:02.029 }, 00:18:02.029 "ctrlr_data": { 00:18:02.029 "cntlid": 1, 00:18:02.029 "vendor_id": "0x8086", 00:18:02.029 "model_number": "SPDK bdev Controller", 00:18:02.029 "serial_number": "SPDK0", 00:18:02.029 "firmware_revision": "24.05", 00:18:02.029 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:02.029 "oacs": { 00:18:02.029 "security": 0, 00:18:02.029 "format": 0, 00:18:02.029 "firmware": 0, 00:18:02.029 "ns_manage": 0 00:18:02.029 }, 00:18:02.029 "multi_ctrlr": true, 00:18:02.029 "ana_reporting": false 00:18:02.029 }, 00:18:02.029 "vs": { 00:18:02.029 "nvme_version": "1.3" 00:18:02.029 }, 00:18:02.029 "ns_data": { 00:18:02.029 "id": 1, 00:18:02.029 "can_share": true 00:18:02.029 } 00:18:02.029 } 00:18:02.029 ], 00:18:02.029 "mp_policy": "active_passive" 00:18:02.029 } 00:18:02.029 } 00:18:02.029 ] 00:18:02.029 18:04:50 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3314501 00:18:02.029 18:04:50 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:18:02.029 18:04:50 -- target/nvmf_lvs_grow.sh@55 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:02.288 Running I/O for 10 seconds... 00:18:03.223 Latency(us) 00:18:03.223 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:03.223 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:03.223 Nvme0n1 : 1.00 12879.00 50.31 0.00 0.00 0.00 0.00 0.00 00:18:03.223 =================================================================================================================== 00:18:03.223 Total : 12879.00 50.31 0.00 0.00 0.00 0.00 0.00 00:18:03.223 00:18:04.156 18:04:52 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 58096651-048a-42ce-a203-fa0ae89fc277 00:18:04.156 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:04.156 Nvme0n1 : 2.00 13047.50 50.97 0.00 0.00 0.00 0.00 0.00 00:18:04.156 =================================================================================================================== 00:18:04.156 Total : 13047.50 50.97 0.00 0.00 0.00 0.00 0.00 00:18:04.156 00:18:04.415 true 00:18:04.415 18:04:53 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 58096651-048a-42ce-a203-fa0ae89fc277 00:18:04.415 18:04:53 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:18:04.674 18:04:53 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:18:04.674 18:04:53 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:18:04.674 18:04:53 -- target/nvmf_lvs_grow.sh@65 -- # wait 3314501 00:18:05.241 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:05.242 Nvme0n1 : 3.00 13167.67 51.44 0.00 0.00 0.00 0.00 0.00 00:18:05.242 =================================================================================================================== 00:18:05.242 Total : 13167.67 51.44 0.00 0.00 0.00 0.00 0.00 00:18:05.242 00:18:06.177 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:06.177 Nvme0n1 : 4.00 13219.75 51.64 0.00 0.00 0.00 0.00 0.00 00:18:06.177 =================================================================================================================== 00:18:06.177 Total : 13219.75 51.64 0.00 0.00 0.00 0.00 0.00 00:18:06.177 00:18:07.113 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:07.113 Nvme0n1 : 5.00 13255.80 51.78 0.00 0.00 0.00 0.00 0.00 00:18:07.113 =================================================================================================================== 00:18:07.113 Total : 13255.80 51.78 0.00 0.00 0.00 0.00 0.00 00:18:07.113 00:18:08.490 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:08.490 Nvme0n1 : 6.00 13291.83 51.92 0.00 0.00 0.00 0.00 0.00 00:18:08.490 =================================================================================================================== 00:18:08.490 Total : 13291.83 51.92 0.00 0.00 0.00 0.00 0.00 00:18:08.490 00:18:09.426 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:09.426 Nvme0n1 : 7.00 13319.86 52.03 0.00 0.00 0.00 0.00 0.00 00:18:09.426 =================================================================================================================== 00:18:09.426 Total : 13319.86 52.03 0.00 0.00 0.00 0.00 0.00 00:18:09.426 00:18:10.417 Job: Nvme0n1 (Core Mask 0x2, 
workload: randwrite, depth: 128, IO size: 4096) 00:18:10.417 Nvme0n1 : 8.00 13335.88 52.09 0.00 0.00 0.00 0.00 0.00 00:18:10.417 =================================================================================================================== 00:18:10.417 Total : 13335.88 52.09 0.00 0.00 0.00 0.00 0.00 00:18:10.417 00:18:11.353 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:11.353 Nvme0n1 : 9.00 13359.89 52.19 0.00 0.00 0.00 0.00 0.00 00:18:11.353 =================================================================================================================== 00:18:11.353 Total : 13359.89 52.19 0.00 0.00 0.00 0.00 0.00 00:18:11.353 00:18:12.290 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:12.290 Nvme0n1 : 10.00 13373.50 52.24 0.00 0.00 0.00 0.00 0.00 00:18:12.290 =================================================================================================================== 00:18:12.290 Total : 13373.50 52.24 0.00 0.00 0.00 0.00 0.00 00:18:12.290 00:18:12.290 00:18:12.290 Latency(us) 00:18:12.290 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:12.290 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:12.290 Nvme0n1 : 10.01 13373.56 52.24 0.00 0.00 9562.91 5145.79 15437.37 00:18:12.290 =================================================================================================================== 00:18:12.290 Total : 13373.56 52.24 0.00 0.00 9562.91 5145.79 15437.37 00:18:12.290 0 00:18:12.290 18:05:01 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3314359 00:18:12.290 18:05:01 -- common/autotest_common.sh@936 -- # '[' -z 3314359 ']' 00:18:12.290 18:05:01 -- common/autotest_common.sh@940 -- # kill -0 3314359 00:18:12.290 18:05:01 -- common/autotest_common.sh@941 -- # uname 00:18:12.290 18:05:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:12.290 18:05:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3314359 00:18:12.290 18:05:01 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:12.290 18:05:01 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:12.290 18:05:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3314359' 00:18:12.290 killing process with pid 3314359 00:18:12.290 18:05:01 -- common/autotest_common.sh@955 -- # kill 3314359 00:18:12.290 Received shutdown signal, test time was about 10.000000 seconds 00:18:12.290 00:18:12.290 Latency(us) 00:18:12.290 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:12.290 =================================================================================================================== 00:18:12.290 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:12.290 18:05:01 -- common/autotest_common.sh@960 -- # wait 3314359 00:18:12.550 18:05:01 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:13.117 18:05:01 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 58096651-048a-42ce-a203-fa0ae89fc277 00:18:13.117 18:05:01 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:18:13.375 18:05:02 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:18:13.375 18:05:02 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:18:13.375 18:05:02 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 3311726 00:18:13.375 
18:05:02 -- target/nvmf_lvs_grow.sh@74 -- # wait 3311726 00:18:13.375 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 3311726 Killed "${NVMF_APP[@]}" "$@" 00:18:13.375 18:05:02 -- target/nvmf_lvs_grow.sh@74 -- # true 00:18:13.375 18:05:02 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:18:13.375 18:05:02 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:13.375 18:05:02 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:13.375 18:05:02 -- common/autotest_common.sh@10 -- # set +x 00:18:13.375 18:05:02 -- nvmf/common.sh@470 -- # nvmfpid=3315828 00:18:13.375 18:05:02 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:13.375 18:05:02 -- nvmf/common.sh@471 -- # waitforlisten 3315828 00:18:13.375 18:05:02 -- common/autotest_common.sh@817 -- # '[' -z 3315828 ']' 00:18:13.375 18:05:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:13.375 18:05:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:13.375 18:05:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:13.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:13.375 18:05:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:13.375 18:05:02 -- common/autotest_common.sh@10 -- # set +x 00:18:13.375 [2024-04-15 18:05:02.234128] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:18:13.375 [2024-04-15 18:05:02.234223] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:13.375 EAL: No free 2048 kB hugepages reported on node 1 00:18:13.633 [2024-04-15 18:05:02.345333] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:13.633 [2024-04-15 18:05:02.439508] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:13.633 [2024-04-15 18:05:02.439582] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:13.633 [2024-04-15 18:05:02.439598] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:13.633 [2024-04-15 18:05:02.439612] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:13.633 [2024-04-15 18:05:02.439625] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
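Note the order of events here: the old target (pid 3311726) is killed with SIGKILL while the lvstore is still dirty, then nvmfappstart brings up a fresh nvmf_tgt inside the test namespace and blocks until its RPC socket answers. A rough equivalent of that launch-and-wait step, assuming the default /var/tmp/spdk.sock socket (the real waitforlisten helper in autotest_common.sh does more bookkeeping than this):

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  nvmfpid=$!
  # poll the UNIX-domain RPC socket until the target starts answering
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.2
  done

The deliberately unclean shutdown is what leaves the blobstore dirty, so the bs_recover notices a few lines below are expected output rather than a failure.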
00:18:13.633 [2024-04-15 18:05:02.439657] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:13.633 18:05:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:13.633 18:05:02 -- common/autotest_common.sh@850 -- # return 0 00:18:13.633 18:05:02 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:13.633 18:05:02 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:13.633 18:05:02 -- common/autotest_common.sh@10 -- # set +x 00:18:13.891 18:05:02 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:13.891 18:05:02 -- target/nvmf_lvs_grow.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:14.150 [2024-04-15 18:05:02.900541] blobstore.c:4779:bs_recover: *NOTICE*: Performing recovery on blobstore 00:18:14.150 [2024-04-15 18:05:02.900680] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:18:14.150 [2024-04-15 18:05:02.900737] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:18:14.150 18:05:02 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:18:14.150 18:05:02 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev 0d1221bd-20ed-4f26-8ba1-e1e1f6966283 00:18:14.150 18:05:02 -- common/autotest_common.sh@885 -- # local bdev_name=0d1221bd-20ed-4f26-8ba1-e1e1f6966283 00:18:14.150 18:05:02 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:18:14.150 18:05:02 -- common/autotest_common.sh@887 -- # local i 00:18:14.150 18:05:02 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:18:14.150 18:05:02 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:18:14.150 18:05:02 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:14.410 18:05:03 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 0d1221bd-20ed-4f26-8ba1-e1e1f6966283 -t 2000 00:18:14.671 [ 00:18:14.671 { 00:18:14.671 "name": "0d1221bd-20ed-4f26-8ba1-e1e1f6966283", 00:18:14.671 "aliases": [ 00:18:14.671 "lvs/lvol" 00:18:14.671 ], 00:18:14.671 "product_name": "Logical Volume", 00:18:14.671 "block_size": 4096, 00:18:14.671 "num_blocks": 38912, 00:18:14.671 "uuid": "0d1221bd-20ed-4f26-8ba1-e1e1f6966283", 00:18:14.671 "assigned_rate_limits": { 00:18:14.671 "rw_ios_per_sec": 0, 00:18:14.671 "rw_mbytes_per_sec": 0, 00:18:14.671 "r_mbytes_per_sec": 0, 00:18:14.671 "w_mbytes_per_sec": 0 00:18:14.671 }, 00:18:14.671 "claimed": false, 00:18:14.671 "zoned": false, 00:18:14.671 "supported_io_types": { 00:18:14.671 "read": true, 00:18:14.671 "write": true, 00:18:14.671 "unmap": true, 00:18:14.671 "write_zeroes": true, 00:18:14.671 "flush": false, 00:18:14.671 "reset": true, 00:18:14.671 "compare": false, 00:18:14.671 "compare_and_write": false, 00:18:14.671 "abort": false, 00:18:14.671 "nvme_admin": false, 00:18:14.671 "nvme_io": false 00:18:14.671 }, 00:18:14.671 "driver_specific": { 00:18:14.671 "lvol": { 00:18:14.671 "lvol_store_uuid": "58096651-048a-42ce-a203-fa0ae89fc277", 00:18:14.671 "base_bdev": "aio_bdev", 00:18:14.671 "thin_provision": false, 00:18:14.671 "snapshot": false, 00:18:14.671 "clone": false, 00:18:14.671 "esnap_clone": false 00:18:14.671 } 00:18:14.671 } 00:18:14.671 } 00:18:14.671 ] 00:18:14.671 18:05:03 -- common/autotest_common.sh@893 -- # return 0 00:18:14.671 18:05:03 -- target/nvmf_lvs_grow.sh@78 -- # jq 
-r '.[0].free_clusters' 00:18:14.671 18:05:03 -- target/nvmf_lvs_grow.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 58096651-048a-42ce-a203-fa0ae89fc277 00:18:15.238 18:05:03 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:18:15.238 18:05:03 -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 58096651-048a-42ce-a203-fa0ae89fc277 00:18:15.238 18:05:03 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:18:15.497 18:05:04 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:18:15.497 18:05:04 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:15.758 [2024-04-15 18:05:04.554554] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:18:15.758 18:05:04 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 58096651-048a-42ce-a203-fa0ae89fc277 00:18:15.758 18:05:04 -- common/autotest_common.sh@638 -- # local es=0 00:18:15.758 18:05:04 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 58096651-048a-42ce-a203-fa0ae89fc277 00:18:15.758 18:05:04 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:15.758 18:05:04 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:15.758 18:05:04 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:15.758 18:05:04 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:15.758 18:05:04 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:15.758 18:05:04 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:15.758 18:05:04 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:15.758 18:05:04 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:18:15.758 18:05:04 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 58096651-048a-42ce-a203-fa0ae89fc277 00:18:16.326 request: 00:18:16.326 { 00:18:16.326 "uuid": "58096651-048a-42ce-a203-fa0ae89fc277", 00:18:16.326 "method": "bdev_lvol_get_lvstores", 00:18:16.326 "req_id": 1 00:18:16.326 } 00:18:16.326 Got JSON-RPC error response 00:18:16.326 response: 00:18:16.326 { 00:18:16.326 "code": -19, 00:18:16.326 "message": "No such device" 00:18:16.326 } 00:18:16.326 18:05:05 -- common/autotest_common.sh@641 -- # es=1 00:18:16.326 18:05:05 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:16.326 18:05:05 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:18:16.326 18:05:05 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:16.326 18:05:05 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:16.584 aio_bdev 00:18:16.585 18:05:05 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 0d1221bd-20ed-4f26-8ba1-e1e1f6966283 00:18:16.585 18:05:05 -- 
common/autotest_common.sh@885 -- # local bdev_name=0d1221bd-20ed-4f26-8ba1-e1e1f6966283 00:18:16.585 18:05:05 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:18:16.585 18:05:05 -- common/autotest_common.sh@887 -- # local i 00:18:16.585 18:05:05 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:18:16.585 18:05:05 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:18:16.585 18:05:05 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:16.844 18:05:05 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 0d1221bd-20ed-4f26-8ba1-e1e1f6966283 -t 2000 00:18:17.103 [ 00:18:17.103 { 00:18:17.103 "name": "0d1221bd-20ed-4f26-8ba1-e1e1f6966283", 00:18:17.103 "aliases": [ 00:18:17.103 "lvs/lvol" 00:18:17.103 ], 00:18:17.103 "product_name": "Logical Volume", 00:18:17.103 "block_size": 4096, 00:18:17.103 "num_blocks": 38912, 00:18:17.103 "uuid": "0d1221bd-20ed-4f26-8ba1-e1e1f6966283", 00:18:17.103 "assigned_rate_limits": { 00:18:17.103 "rw_ios_per_sec": 0, 00:18:17.103 "rw_mbytes_per_sec": 0, 00:18:17.103 "r_mbytes_per_sec": 0, 00:18:17.103 "w_mbytes_per_sec": 0 00:18:17.103 }, 00:18:17.103 "claimed": false, 00:18:17.103 "zoned": false, 00:18:17.103 "supported_io_types": { 00:18:17.103 "read": true, 00:18:17.103 "write": true, 00:18:17.103 "unmap": true, 00:18:17.103 "write_zeroes": true, 00:18:17.103 "flush": false, 00:18:17.103 "reset": true, 00:18:17.103 "compare": false, 00:18:17.103 "compare_and_write": false, 00:18:17.103 "abort": false, 00:18:17.103 "nvme_admin": false, 00:18:17.103 "nvme_io": false 00:18:17.103 }, 00:18:17.103 "driver_specific": { 00:18:17.103 "lvol": { 00:18:17.103 "lvol_store_uuid": "58096651-048a-42ce-a203-fa0ae89fc277", 00:18:17.103 "base_bdev": "aio_bdev", 00:18:17.103 "thin_provision": false, 00:18:17.103 "snapshot": false, 00:18:17.103 "clone": false, 00:18:17.103 "esnap_clone": false 00:18:17.103 } 00:18:17.103 } 00:18:17.103 } 00:18:17.103 ] 00:18:17.103 18:05:05 -- common/autotest_common.sh@893 -- # return 0 00:18:17.103 18:05:05 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 58096651-048a-42ce-a203-fa0ae89fc277 00:18:17.103 18:05:05 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:18:17.362 18:05:06 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:18:17.362 18:05:06 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 58096651-048a-42ce-a203-fa0ae89fc277 00:18:17.362 18:05:06 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:18:17.622 18:05:06 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:18:17.622 18:05:06 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 0d1221bd-20ed-4f26-8ba1-e1e1f6966283 00:18:18.191 18:05:06 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 58096651-048a-42ce-a203-fa0ae89fc277 00:18:18.450 18:05:07 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:18.709 18:05:07 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:18.709 00:18:18.709 real 0m21.036s 00:18:18.709 user 
0m51.439s 00:18:18.709 sys 0m5.749s 00:18:18.709 18:05:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:18.709 18:05:07 -- common/autotest_common.sh@10 -- # set +x 00:18:18.709 ************************************ 00:18:18.709 END TEST lvs_grow_dirty 00:18:18.709 ************************************ 00:18:18.709 18:05:07 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:18:18.709 18:05:07 -- common/autotest_common.sh@794 -- # type=--id 00:18:18.709 18:05:07 -- common/autotest_common.sh@795 -- # id=0 00:18:18.709 18:05:07 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:18:18.709 18:05:07 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:18.709 18:05:07 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:18:18.709 18:05:07 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:18:18.709 18:05:07 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:18:18.709 18:05:07 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:18.709 nvmf_trace.0 00:18:18.709 18:05:07 -- common/autotest_common.sh@809 -- # return 0 00:18:18.709 18:05:07 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:18:18.709 18:05:07 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:18.709 18:05:07 -- nvmf/common.sh@117 -- # sync 00:18:18.709 18:05:07 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:18.709 18:05:07 -- nvmf/common.sh@120 -- # set +e 00:18:18.709 18:05:07 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:18.709 18:05:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:18.709 rmmod nvme_tcp 00:18:18.969 rmmod nvme_fabrics 00:18:18.969 rmmod nvme_keyring 00:18:18.969 18:05:07 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:18.969 18:05:07 -- nvmf/common.sh@124 -- # set -e 00:18:18.969 18:05:07 -- nvmf/common.sh@125 -- # return 0 00:18:18.969 18:05:07 -- nvmf/common.sh@478 -- # '[' -n 3315828 ']' 00:18:18.969 18:05:07 -- nvmf/common.sh@479 -- # killprocess 3315828 00:18:18.969 18:05:07 -- common/autotest_common.sh@936 -- # '[' -z 3315828 ']' 00:18:18.969 18:05:07 -- common/autotest_common.sh@940 -- # kill -0 3315828 00:18:18.969 18:05:07 -- common/autotest_common.sh@941 -- # uname 00:18:18.969 18:05:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:18.969 18:05:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3315828 00:18:18.969 18:05:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:18.969 18:05:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:18.969 18:05:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3315828' 00:18:18.969 killing process with pid 3315828 00:18:18.969 18:05:07 -- common/autotest_common.sh@955 -- # kill 3315828 00:18:18.969 18:05:07 -- common/autotest_common.sh@960 -- # wait 3315828 00:18:19.230 18:05:08 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:19.230 18:05:08 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:19.230 18:05:08 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:19.230 18:05:08 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:19.230 18:05:08 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:19.230 18:05:08 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:19.230 18:05:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:19.230 18:05:08 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:18:21.129 18:05:10 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:21.129 00:18:21.129 real 0m45.346s 00:18:21.129 user 1m16.405s 00:18:21.129 sys 0m10.284s 00:18:21.129 18:05:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:21.129 18:05:10 -- common/autotest_common.sh@10 -- # set +x 00:18:21.129 ************************************ 00:18:21.129 END TEST nvmf_lvs_grow 00:18:21.129 ************************************ 00:18:21.129 18:05:10 -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:18:21.129 18:05:10 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:21.129 18:05:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:21.129 18:05:10 -- common/autotest_common.sh@10 -- # set +x 00:18:21.388 ************************************ 00:18:21.388 START TEST nvmf_bdev_io_wait 00:18:21.388 ************************************ 00:18:21.388 18:05:10 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:18:21.388 * Looking for test storage... 00:18:21.388 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:21.388 18:05:10 -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:21.388 18:05:10 -- nvmf/common.sh@7 -- # uname -s 00:18:21.388 18:05:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:21.388 18:05:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:21.388 18:05:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:21.388 18:05:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:21.388 18:05:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:21.388 18:05:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:21.388 18:05:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:21.388 18:05:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:21.388 18:05:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:21.388 18:05:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:21.388 18:05:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:21.388 18:05:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:18:21.388 18:05:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:21.388 18:05:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:21.388 18:05:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:21.388 18:05:10 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:21.388 18:05:10 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:21.388 18:05:10 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:21.388 18:05:10 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:21.388 18:05:10 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:21.389 18:05:10 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:21.389 18:05:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:21.389 18:05:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:21.389 18:05:10 -- paths/export.sh@5 -- # export PATH 00:18:21.389 18:05:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:21.389 18:05:10 -- nvmf/common.sh@47 -- # : 0 00:18:21.389 18:05:10 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:21.389 18:05:10 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:21.389 18:05:10 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:21.389 18:05:10 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:21.389 18:05:10 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:21.389 18:05:10 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:21.389 18:05:10 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:21.389 18:05:10 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:21.389 18:05:10 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:21.389 18:05:10 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:21.389 18:05:10 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:18:21.389 18:05:10 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:21.389 18:05:10 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:21.389 18:05:10 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:21.389 18:05:10 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:21.389 18:05:10 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:21.389 18:05:10 -- nvmf/common.sh@617 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:18:21.389 18:05:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:21.389 18:05:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:21.389 18:05:10 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:18:21.389 18:05:10 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:18:21.389 18:05:10 -- nvmf/common.sh@285 -- # xtrace_disable 00:18:21.389 18:05:10 -- common/autotest_common.sh@10 -- # set +x 00:18:23.919 18:05:12 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:23.919 18:05:12 -- nvmf/common.sh@291 -- # pci_devs=() 00:18:23.919 18:05:12 -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:23.919 18:05:12 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:23.919 18:05:12 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:23.919 18:05:12 -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:23.919 18:05:12 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:23.919 18:05:12 -- nvmf/common.sh@295 -- # net_devs=() 00:18:23.919 18:05:12 -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:23.919 18:05:12 -- nvmf/common.sh@296 -- # e810=() 00:18:23.919 18:05:12 -- nvmf/common.sh@296 -- # local -ga e810 00:18:23.919 18:05:12 -- nvmf/common.sh@297 -- # x722=() 00:18:23.919 18:05:12 -- nvmf/common.sh@297 -- # local -ga x722 00:18:23.919 18:05:12 -- nvmf/common.sh@298 -- # mlx=() 00:18:23.919 18:05:12 -- nvmf/common.sh@298 -- # local -ga mlx 00:18:23.919 18:05:12 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:23.919 18:05:12 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:23.919 18:05:12 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:23.919 18:05:12 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:23.919 18:05:12 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:23.919 18:05:12 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:23.919 18:05:12 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:23.919 18:05:12 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:23.919 18:05:12 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:23.919 18:05:12 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:23.919 18:05:12 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:23.919 18:05:12 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:23.919 18:05:12 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:23.919 18:05:12 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:23.919 18:05:12 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:23.919 18:05:12 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:23.919 18:05:12 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:23.919 18:05:12 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:23.919 18:05:12 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:18:23.919 Found 0000:84:00.0 (0x8086 - 0x159b) 00:18:23.919 18:05:12 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:23.919 18:05:12 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:23.919 18:05:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:23.919 18:05:12 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:23.919 18:05:12 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:23.919 18:05:12 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 
00:18:23.919 18:05:12 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:18:23.919 Found 0000:84:00.1 (0x8086 - 0x159b) 00:18:23.919 18:05:12 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:23.919 18:05:12 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:23.919 18:05:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:23.919 18:05:12 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:23.919 18:05:12 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:23.919 18:05:12 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:23.919 18:05:12 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:23.919 18:05:12 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:23.919 18:05:12 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:23.919 18:05:12 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:23.919 18:05:12 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:23.919 18:05:12 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:23.919 18:05:12 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:18:23.919 Found net devices under 0000:84:00.0: cvl_0_0 00:18:23.919 18:05:12 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:23.919 18:05:12 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:23.919 18:05:12 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:23.919 18:05:12 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:23.919 18:05:12 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:23.919 18:05:12 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:18:23.919 Found net devices under 0000:84:00.1: cvl_0_1 00:18:23.919 18:05:12 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:23.919 18:05:12 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:18:23.919 18:05:12 -- nvmf/common.sh@403 -- # is_hw=yes 00:18:23.919 18:05:12 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:18:23.919 18:05:12 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:18:23.919 18:05:12 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:18:23.919 18:05:12 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:23.919 18:05:12 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:23.919 18:05:12 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:23.919 18:05:12 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:23.919 18:05:12 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:23.919 18:05:12 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:23.919 18:05:12 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:23.919 18:05:12 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:23.919 18:05:12 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:23.919 18:05:12 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:23.919 18:05:12 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:23.919 18:05:12 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:23.919 18:05:12 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:23.919 18:05:12 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:23.919 18:05:12 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:23.919 18:05:12 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:23.919 18:05:12 -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:23.919 18:05:12 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:23.919 18:05:12 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:23.919 18:05:12 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:23.919 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:23.919 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.152 ms 00:18:23.919 00:18:23.919 --- 10.0.0.2 ping statistics --- 00:18:23.919 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:23.919 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:18:23.919 18:05:12 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:23.919 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:23.919 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:18:23.919 00:18:23.919 --- 10.0.0.1 ping statistics --- 00:18:23.919 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:23.919 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:18:23.919 18:05:12 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:23.919 18:05:12 -- nvmf/common.sh@411 -- # return 0 00:18:23.919 18:05:12 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:23.919 18:05:12 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:23.919 18:05:12 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:18:23.919 18:05:12 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:18:23.919 18:05:12 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:23.919 18:05:12 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:18:23.920 18:05:12 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:18:23.920 18:05:12 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:18:23.920 18:05:12 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:23.920 18:05:12 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:23.920 18:05:12 -- common/autotest_common.sh@10 -- # set +x 00:18:23.920 18:05:12 -- nvmf/common.sh@470 -- # nvmfpid=3318498 00:18:23.920 18:05:12 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:18:23.920 18:05:12 -- nvmf/common.sh@471 -- # waitforlisten 3318498 00:18:23.920 18:05:12 -- common/autotest_common.sh@817 -- # '[' -z 3318498 ']' 00:18:23.920 18:05:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:23.920 18:05:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:23.920 18:05:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:23.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:23.920 18:05:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:23.920 18:05:12 -- common/autotest_common.sh@10 -- # set +x 00:18:23.920 [2024-04-15 18:05:12.575821] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
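The nvmf_tcp_init sequence just traced splits the two ports of the same physical NIC between the host and a private network namespace, so initiator traffic (10.0.0.1 on cvl_0_1) reaches the target (10.0.0.2 on cvl_0_0 inside cvl_0_0_ns_spdk) through the NIC rather than short-circuiting over the loopback device. Condensed from the trace, the setup amounts to:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # host -> target, verified above
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> host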
00:18:23.920 [2024-04-15 18:05:12.575910] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:23.920 EAL: No free 2048 kB hugepages reported on node 1 00:18:23.920 [2024-04-15 18:05:12.655401] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:23.920 [2024-04-15 18:05:12.753446] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:23.920 [2024-04-15 18:05:12.753514] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:23.920 [2024-04-15 18:05:12.753531] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:23.920 [2024-04-15 18:05:12.753550] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:23.920 [2024-04-15 18:05:12.753563] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:23.920 [2024-04-15 18:05:12.753631] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:23.920 [2024-04-15 18:05:12.753662] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:23.920 [2024-04-15 18:05:12.753713] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:23.920 [2024-04-15 18:05:12.753716] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:23.920 18:05:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:23.920 18:05:12 -- common/autotest_common.sh@850 -- # return 0 00:18:23.920 18:05:12 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:23.920 18:05:12 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:23.920 18:05:12 -- common/autotest_common.sh@10 -- # set +x 00:18:23.920 18:05:12 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:23.920 18:05:12 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:18:23.920 18:05:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:23.920 18:05:12 -- common/autotest_common.sh@10 -- # set +x 00:18:23.920 18:05:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:23.920 18:05:12 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:18:23.920 18:05:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:23.920 18:05:12 -- common/autotest_common.sh@10 -- # set +x 00:18:24.178 18:05:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:24.178 18:05:12 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:24.178 18:05:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:24.178 18:05:12 -- common/autotest_common.sh@10 -- # set +x 00:18:24.178 [2024-04-15 18:05:12.949777] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:24.178 18:05:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:24.178 18:05:12 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:24.178 18:05:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:24.178 18:05:12 -- common/autotest_common.sh@10 -- # set +x 00:18:24.178 Malloc0 00:18:24.178 18:05:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:24.178 18:05:12 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:24.178 18:05:12 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:18:24.178 18:05:12 -- common/autotest_common.sh@10 -- # set +x 00:18:24.178 18:05:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:24.178 18:05:13 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:24.178 18:05:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:24.178 18:05:13 -- common/autotest_common.sh@10 -- # set +x 00:18:24.178 18:05:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:24.178 18:05:13 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:24.178 18:05:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:24.178 18:05:13 -- common/autotest_common.sh@10 -- # set +x 00:18:24.178 [2024-04-15 18:05:13.014021] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:24.178 18:05:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:24.178 18:05:13 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3318532 00:18:24.178 18:05:13 -- target/bdev_io_wait.sh@30 -- # READ_PID=3318534 00:18:24.178 18:05:13 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:18:24.178 18:05:13 -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:18:24.178 18:05:13 -- nvmf/common.sh@521 -- # config=() 00:18:24.178 18:05:13 -- nvmf/common.sh@521 -- # local subsystem config 00:18:24.178 18:05:13 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:24.178 18:05:13 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:24.178 { 00:18:24.178 "params": { 00:18:24.179 "name": "Nvme$subsystem", 00:18:24.179 "trtype": "$TEST_TRANSPORT", 00:18:24.179 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:24.179 "adrfam": "ipv4", 00:18:24.179 "trsvcid": "$NVMF_PORT", 00:18:24.179 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:24.179 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:24.179 "hdgst": ${hdgst:-false}, 00:18:24.179 "ddgst": ${ddgst:-false} 00:18:24.179 }, 00:18:24.179 "method": "bdev_nvme_attach_controller" 00:18:24.179 } 00:18:24.179 EOF 00:18:24.179 )") 00:18:24.179 18:05:13 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:18:24.179 18:05:13 -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:18:24.179 18:05:13 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3318537 00:18:24.179 18:05:13 -- nvmf/common.sh@521 -- # config=() 00:18:24.179 18:05:13 -- nvmf/common.sh@521 -- # local subsystem config 00:18:24.179 18:05:13 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:24.179 18:05:13 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:24.179 { 00:18:24.179 "params": { 00:18:24.179 "name": "Nvme$subsystem", 00:18:24.179 "trtype": "$TEST_TRANSPORT", 00:18:24.179 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:24.179 "adrfam": "ipv4", 00:18:24.179 "trsvcid": "$NVMF_PORT", 00:18:24.179 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:24.179 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:24.179 "hdgst": ${hdgst:-false}, 00:18:24.179 "ddgst": ${ddgst:-false} 00:18:24.179 }, 00:18:24.179 "method": "bdev_nvme_attach_controller" 00:18:24.179 } 00:18:24.179 EOF 00:18:24.179 )") 00:18:24.179 18:05:13 -- target/bdev_io_wait.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:18:24.179 18:05:13 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:18:24.179 18:05:13 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3318540 00:18:24.179 18:05:13 -- nvmf/common.sh@521 -- # config=() 00:18:24.179 18:05:13 -- target/bdev_io_wait.sh@35 -- # sync 00:18:24.179 18:05:13 -- nvmf/common.sh@543 -- # cat 00:18:24.179 18:05:13 -- nvmf/common.sh@521 -- # local subsystem config 00:18:24.179 18:05:13 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:24.179 18:05:13 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:24.179 { 00:18:24.179 "params": { 00:18:24.179 "name": "Nvme$subsystem", 00:18:24.179 "trtype": "$TEST_TRANSPORT", 00:18:24.179 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:24.179 "adrfam": "ipv4", 00:18:24.179 "trsvcid": "$NVMF_PORT", 00:18:24.179 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:24.179 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:24.179 "hdgst": ${hdgst:-false}, 00:18:24.179 "ddgst": ${ddgst:-false} 00:18:24.179 }, 00:18:24.179 "method": "bdev_nvme_attach_controller" 00:18:24.179 } 00:18:24.179 EOF 00:18:24.179 )") 00:18:24.179 18:05:13 -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:18:24.179 18:05:13 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:18:24.179 18:05:13 -- nvmf/common.sh@543 -- # cat 00:18:24.179 18:05:13 -- nvmf/common.sh@521 -- # config=() 00:18:24.179 18:05:13 -- nvmf/common.sh@521 -- # local subsystem config 00:18:24.179 18:05:13 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:24.179 18:05:13 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:24.179 { 00:18:24.179 "params": { 00:18:24.179 "name": "Nvme$subsystem", 00:18:24.179 "trtype": "$TEST_TRANSPORT", 00:18:24.179 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:24.179 "adrfam": "ipv4", 00:18:24.179 "trsvcid": "$NVMF_PORT", 00:18:24.179 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:24.179 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:24.179 "hdgst": ${hdgst:-false}, 00:18:24.179 "ddgst": ${ddgst:-false} 00:18:24.179 }, 00:18:24.179 "method": "bdev_nvme_attach_controller" 00:18:24.179 } 00:18:24.179 EOF 00:18:24.179 )") 00:18:24.179 18:05:13 -- nvmf/common.sh@543 -- # cat 00:18:24.179 18:05:13 -- target/bdev_io_wait.sh@37 -- # wait 3318532 00:18:24.179 18:05:13 -- nvmf/common.sh@543 -- # cat 00:18:24.179 18:05:13 -- nvmf/common.sh@545 -- # jq . 00:18:24.179 18:05:13 -- nvmf/common.sh@545 -- # jq . 00:18:24.179 18:05:13 -- nvmf/common.sh@545 -- # jq . 00:18:24.179 18:05:13 -- nvmf/common.sh@546 -- # IFS=, 00:18:24.179 18:05:13 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:18:24.179 "params": { 00:18:24.179 "name": "Nvme1", 00:18:24.179 "trtype": "tcp", 00:18:24.179 "traddr": "10.0.0.2", 00:18:24.179 "adrfam": "ipv4", 00:18:24.179 "trsvcid": "4420", 00:18:24.179 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:24.179 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:24.179 "hdgst": false, 00:18:24.179 "ddgst": false 00:18:24.179 }, 00:18:24.179 "method": "bdev_nvme_attach_controller" 00:18:24.179 }' 00:18:24.179 18:05:13 -- nvmf/common.sh@545 -- # jq . 
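Stripped of the xtrace noise, the target-side bring-up for this test is a short RPC sequence; the deliberately tiny bdev_io pool configured first (-p is the pool size, -c the per-thread cache) is what makes submissions run out of bdev_io structures and exercise the retry path that gives bdev_io_wait its name:

  rpc_cmd bdev_set_options -p 5 -c 1       # starve the bdev_io pool on purpose
  rpc_cmd framework_start_init
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd bdev_malloc_create 64 512 -b Malloc0
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420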
00:18:24.179 18:05:13 -- nvmf/common.sh@546 -- # IFS=, 00:18:24.179 18:05:13 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:18:24.179 "params": { 00:18:24.179 "name": "Nvme1", 00:18:24.179 "trtype": "tcp", 00:18:24.179 "traddr": "10.0.0.2", 00:18:24.179 "adrfam": "ipv4", 00:18:24.179 "trsvcid": "4420", 00:18:24.179 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:24.179 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:24.179 "hdgst": false, 00:18:24.179 "ddgst": false 00:18:24.179 }, 00:18:24.179 "method": "bdev_nvme_attach_controller" 00:18:24.179 }' 00:18:24.179 18:05:13 -- nvmf/common.sh@546 -- # IFS=, 00:18:24.179 18:05:13 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:18:24.179 "params": { 00:18:24.179 "name": "Nvme1", 00:18:24.179 "trtype": "tcp", 00:18:24.179 "traddr": "10.0.0.2", 00:18:24.179 "adrfam": "ipv4", 00:18:24.179 "trsvcid": "4420", 00:18:24.179 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:24.179 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:24.179 "hdgst": false, 00:18:24.179 "ddgst": false 00:18:24.179 }, 00:18:24.179 "method": "bdev_nvme_attach_controller" 00:18:24.179 }' 00:18:24.179 18:05:13 -- nvmf/common.sh@546 -- # IFS=, 00:18:24.179 18:05:13 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:18:24.179 "params": { 00:18:24.179 "name": "Nvme1", 00:18:24.179 "trtype": "tcp", 00:18:24.179 "traddr": "10.0.0.2", 00:18:24.179 "adrfam": "ipv4", 00:18:24.179 "trsvcid": "4420", 00:18:24.179 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:24.179 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:24.179 "hdgst": false, 00:18:24.179 "ddgst": false 00:18:24.179 }, 00:18:24.179 "method": "bdev_nvme_attach_controller" 00:18:24.179 }' 00:18:24.179 [2024-04-15 18:05:13.059889] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:18:24.179 [2024-04-15 18:05:13.059889] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:18:24.179 [2024-04-15 18:05:13.059971] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-04-15 18:05:13.059972] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:18:24.179 .cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:18:24.179 [2024-04-15 18:05:13.061859] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:18:24.179 [2024-04-15 18:05:13.061857] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
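The /dev/fd/63 seen in each bdevperf command line is bash process substitution: gen_nvmf_target_json prints the attach-controller JSON shown above, and bdevperf reads it as its --json config, so every instance attaches Nvme1 over the TCP listener before running its workload. In effect each of the four launches is:

  ./build/examples/bdevperf --json <(gen_nvmf_target_json) \
      -m 0x10 -i 1 -q 128 -o 4096 -w write -t 1 -s 256    # write instance; the others differ
                                                          # only in -m, -i and -w read/flush/unmap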
00:18:24.179 [2024-04-15 18:05:13.061947] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-04-15 18:05:13.061948] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:18:24.179 .cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:18:24.179 EAL: No free 2048 kB hugepages reported on node 1 00:18:24.438 EAL: No free 2048 kB hugepages reported on node 1 00:18:24.438 [2024-04-15 18:05:13.218459] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:24.438 EAL: No free 2048 kB hugepages reported on node 1 00:18:24.438 [2024-04-15 18:05:13.286033] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:18:24.438 [2024-04-15 18:05:13.292795] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:24.438 [2024-04-15 18:05:13.294771] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:18:24.438 [2024-04-15 18:05:13.361677] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:18:24.438 [2024-04-15 18:05:13.370458] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:18:24.438 EAL: No free 2048 kB hugepages reported on node 1 00:18:24.729 [2024-04-15 18:05:13.400482] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:24.729 [2024-04-15 18:05:13.477954] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:18:24.729 [2024-04-15 18:05:13.486770] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:18:24.729 [2024-04-15 18:05:13.515461] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:24.729 [2024-04-15 18:05:13.594079] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:18:24.729 [2024-04-15 18:05:13.602855] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:18:24.989 Running I/O for 1 seconds... 00:18:24.989 Running I/O for 1 seconds... 00:18:24.989 Running I/O for 1 seconds... 00:18:24.989 Running I/O for 1 seconds... 
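With all four instances up, each runs its one-second workload concurrently on its own core. In the per-job tables that follow, the MiB/s column is simply IOPS scaled by the 4096-byte IO size; for example, for the flush row:

  awk 'BEGIN { printf "%.2f\n", 202566.57 * 4096 / (1024 * 1024) }'   # prints 791.28

(The flush job posting an order of magnitude more IOPS than write, read and unmap is plausible because flush against a malloc bdev never touches media.)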
00:18:25.922 00:18:25.922 Latency(us) 00:18:25.922 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:25.922 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:18:25.922 Nvme1n1 : 1.00 202566.57 791.28 0.00 0.00 629.31 244.24 813.13 00:18:25.922 =================================================================================================================== 00:18:25.922 Total : 202566.57 791.28 0.00 0.00 629.31 244.24 813.13 00:18:25.922 00:18:25.922 Latency(us) 00:18:25.922 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:25.922 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:18:25.922 Nvme1n1 : 1.01 11679.10 45.62 0.00 0.00 10922.22 5898.24 20291.89 00:18:25.922 =================================================================================================================== 00:18:25.922 Total : 11679.10 45.62 0.00 0.00 10922.22 5898.24 20291.89 00:18:25.922 00:18:25.922 Latency(us) 00:18:25.922 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:25.922 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:18:25.922 Nvme1n1 : 1.01 8697.43 33.97 0.00 0.00 14652.60 8252.68 28156.21 00:18:25.922 =================================================================================================================== 00:18:25.923 Total : 8697.43 33.97 0.00 0.00 14652.60 8252.68 28156.21 00:18:26.180 00:18:26.180 Latency(us) 00:18:26.180 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:26.180 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:18:26.180 Nvme1n1 : 1.01 9279.25 36.25 0.00 0.00 13741.73 5218.61 24466.77 00:18:26.180 =================================================================================================================== 00:18:26.180 Total : 9279.25 36.25 0.00 0.00 13741.73 5218.61 24466.77 00:18:26.180 18:05:15 -- target/bdev_io_wait.sh@38 -- # wait 3318534 00:18:26.438 18:05:15 -- target/bdev_io_wait.sh@39 -- # wait 3318537 00:18:26.438 18:05:15 -- target/bdev_io_wait.sh@40 -- # wait 3318540 00:18:26.438 18:05:15 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:26.438 18:05:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:26.438 18:05:15 -- common/autotest_common.sh@10 -- # set +x 00:18:26.438 18:05:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:26.438 18:05:15 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:18:26.438 18:05:15 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:18:26.438 18:05:15 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:26.438 18:05:15 -- nvmf/common.sh@117 -- # sync 00:18:26.438 18:05:15 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:26.438 18:05:15 -- nvmf/common.sh@120 -- # set +e 00:18:26.438 18:05:15 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:26.438 18:05:15 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:26.438 rmmod nvme_tcp 00:18:26.438 rmmod nvme_fabrics 00:18:26.438 rmmod nvme_keyring 00:18:26.438 18:05:15 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:26.438 18:05:15 -- nvmf/common.sh@124 -- # set -e 00:18:26.438 18:05:15 -- nvmf/common.sh@125 -- # return 0 00:18:26.438 18:05:15 -- nvmf/common.sh@478 -- # '[' -n 3318498 ']' 00:18:26.438 18:05:15 -- nvmf/common.sh@479 -- # killprocess 3318498 00:18:26.438 18:05:15 -- common/autotest_common.sh@936 -- # '[' -z 3318498 ']' 00:18:26.438 18:05:15 -- 
common/autotest_common.sh@940 -- # kill -0 3318498 00:18:26.438 18:05:15 -- common/autotest_common.sh@941 -- # uname 00:18:26.438 18:05:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:26.438 18:05:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3318498 00:18:26.438 18:05:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:26.438 18:05:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:26.438 18:05:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3318498' 00:18:26.438 killing process with pid 3318498 00:18:26.438 18:05:15 -- common/autotest_common.sh@955 -- # kill 3318498 00:18:26.438 18:05:15 -- common/autotest_common.sh@960 -- # wait 3318498 00:18:26.697 18:05:15 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:26.697 18:05:15 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:26.697 18:05:15 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:26.697 18:05:15 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:26.697 18:05:15 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:26.697 18:05:15 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:26.697 18:05:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:26.697 18:05:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:28.597 18:05:17 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:28.597 00:18:28.597 real 0m7.363s 00:18:28.597 user 0m16.793s 00:18:28.597 sys 0m3.614s 00:18:28.597 18:05:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:28.597 18:05:17 -- common/autotest_common.sh@10 -- # set +x 00:18:28.597 ************************************ 00:18:28.597 END TEST nvmf_bdev_io_wait 00:18:28.597 ************************************ 00:18:28.856 18:05:17 -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:18:28.856 18:05:17 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:28.856 18:05:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:28.856 18:05:17 -- common/autotest_common.sh@10 -- # set +x 00:18:28.856 ************************************ 00:18:28.856 START TEST nvmf_queue_depth 00:18:28.856 ************************************ 00:18:28.856 18:05:17 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:18:28.856 * Looking for test storage... 
00:18:28.856 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:28.856 18:05:17 -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:28.856 18:05:17 -- nvmf/common.sh@7 -- # uname -s 00:18:28.856 18:05:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:28.856 18:05:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:28.856 18:05:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:28.856 18:05:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:28.856 18:05:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:28.856 18:05:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:28.856 18:05:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:28.856 18:05:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:28.856 18:05:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:28.856 18:05:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:28.856 18:05:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:28.856 18:05:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:18:28.856 18:05:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:28.856 18:05:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:28.856 18:05:17 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:28.856 18:05:17 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:28.856 18:05:17 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:28.856 18:05:17 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:28.857 18:05:17 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:28.857 18:05:17 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:28.857 18:05:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:28.857 18:05:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:28.857 18:05:17 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:28.857 18:05:17 -- paths/export.sh@5 -- # export PATH 00:18:28.857 18:05:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:28.857 18:05:17 -- nvmf/common.sh@47 -- # : 0 00:18:28.857 18:05:17 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:28.857 18:05:17 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:28.857 18:05:17 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:28.857 18:05:17 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:28.857 18:05:17 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:28.857 18:05:17 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:28.857 18:05:17 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:28.857 18:05:17 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:28.857 18:05:17 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:18:28.857 18:05:17 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:18:28.857 18:05:17 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:28.857 18:05:17 -- target/queue_depth.sh@19 -- # nvmftestinit 00:18:28.857 18:05:17 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:28.857 18:05:17 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:28.857 18:05:17 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:28.857 18:05:17 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:28.857 18:05:17 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:28.857 18:05:17 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:28.857 18:05:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:28.857 18:05:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:28.857 18:05:17 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:18:28.857 18:05:17 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:18:28.857 18:05:17 -- nvmf/common.sh@285 -- # xtrace_disable 00:18:28.857 18:05:17 -- common/autotest_common.sh@10 -- # set +x 00:18:31.388 18:05:20 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:31.388 18:05:20 -- nvmf/common.sh@291 -- # pci_devs=() 00:18:31.388 18:05:20 -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:31.388 18:05:20 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:31.388 18:05:20 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:31.388 18:05:20 -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:31.388 18:05:20 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:31.388 18:05:20 -- nvmf/common.sh@295 -- # net_devs=() 
00:18:31.388 18:05:20 -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:31.388 18:05:20 -- nvmf/common.sh@296 -- # e810=() 00:18:31.388 18:05:20 -- nvmf/common.sh@296 -- # local -ga e810 00:18:31.388 18:05:20 -- nvmf/common.sh@297 -- # x722=() 00:18:31.388 18:05:20 -- nvmf/common.sh@297 -- # local -ga x722 00:18:31.388 18:05:20 -- nvmf/common.sh@298 -- # mlx=() 00:18:31.388 18:05:20 -- nvmf/common.sh@298 -- # local -ga mlx 00:18:31.388 18:05:20 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:31.388 18:05:20 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:31.388 18:05:20 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:31.388 18:05:20 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:31.388 18:05:20 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:31.388 18:05:20 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:31.388 18:05:20 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:31.388 18:05:20 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:31.388 18:05:20 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:31.388 18:05:20 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:31.388 18:05:20 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:31.388 18:05:20 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:31.388 18:05:20 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:31.388 18:05:20 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:31.388 18:05:20 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:31.388 18:05:20 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:31.388 18:05:20 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:31.388 18:05:20 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:31.388 18:05:20 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:18:31.388 Found 0000:84:00.0 (0x8086 - 0x159b) 00:18:31.388 18:05:20 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:31.388 18:05:20 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:31.388 18:05:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:31.388 18:05:20 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:31.388 18:05:20 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:31.388 18:05:20 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:31.388 18:05:20 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:18:31.388 Found 0000:84:00.1 (0x8086 - 0x159b) 00:18:31.388 18:05:20 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:31.388 18:05:20 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:31.388 18:05:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:31.388 18:05:20 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:31.388 18:05:20 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:31.388 18:05:20 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:31.388 18:05:20 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:31.388 18:05:20 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:31.388 18:05:20 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:31.388 18:05:20 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:31.388 18:05:20 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:31.388 18:05:20 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
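[Editor's note] The discovery records running above and below come from gather_supported_nvmf_pci_devs: for each whitelisted PCI ID the script globs the kernel's /sys tree to map a PCI address to its netdev name. Stripped of the harness, the core lookup looks roughly like this (a sketch; the two PCI addresses are the e810 ports reported in this run):

for pci in 0000:84:00.0 0000:84:00.1; do
    # glob the netdev entries the kernel exposes for this PCI function
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    # strip the directory prefix, keeping only interface names such as cvl_0_0 / cvl_0_1
    pci_net_devs=("${pci_net_devs[@]##*/}")
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
done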
00:18:31.388 18:05:20 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:18:31.388 Found net devices under 0000:84:00.0: cvl_0_0 00:18:31.388 18:05:20 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:31.388 18:05:20 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:31.388 18:05:20 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:31.388 18:05:20 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:31.388 18:05:20 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:31.388 18:05:20 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:18:31.388 Found net devices under 0000:84:00.1: cvl_0_1 00:18:31.388 18:05:20 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:31.388 18:05:20 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:18:31.389 18:05:20 -- nvmf/common.sh@403 -- # is_hw=yes 00:18:31.389 18:05:20 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:18:31.389 18:05:20 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:18:31.389 18:05:20 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:18:31.389 18:05:20 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:31.389 18:05:20 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:31.389 18:05:20 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:31.389 18:05:20 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:31.389 18:05:20 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:31.389 18:05:20 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:31.389 18:05:20 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:31.389 18:05:20 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:31.389 18:05:20 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:31.389 18:05:20 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:31.389 18:05:20 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:31.389 18:05:20 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:31.389 18:05:20 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:31.389 18:05:20 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:31.389 18:05:20 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:31.389 18:05:20 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:31.389 18:05:20 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:31.389 18:05:20 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:31.389 18:05:20 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:31.389 18:05:20 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:31.389 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:31.389 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.143 ms 00:18:31.389 00:18:31.389 --- 10.0.0.2 ping statistics --- 00:18:31.389 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:31.389 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:18:31.389 18:05:20 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:31.389 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:31.389 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms 00:18:31.389 00:18:31.389 --- 10.0.0.1 ping statistics --- 00:18:31.389 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:31.389 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:18:31.389 18:05:20 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:31.389 18:05:20 -- nvmf/common.sh@411 -- # return 0 00:18:31.389 18:05:20 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:31.389 18:05:20 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:31.389 18:05:20 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:18:31.389 18:05:20 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:18:31.389 18:05:20 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:31.389 18:05:20 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:18:31.389 18:05:20 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:18:31.389 18:05:20 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:18:31.389 18:05:20 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:31.389 18:05:20 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:31.389 18:05:20 -- common/autotest_common.sh@10 -- # set +x 00:18:31.389 18:05:20 -- nvmf/common.sh@470 -- # nvmfpid=3320897 00:18:31.389 18:05:20 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:31.389 18:05:20 -- nvmf/common.sh@471 -- # waitforlisten 3320897 00:18:31.389 18:05:20 -- common/autotest_common.sh@817 -- # '[' -z 3320897 ']' 00:18:31.389 18:05:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:31.389 18:05:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:31.389 18:05:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:31.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:31.389 18:05:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:31.389 18:05:20 -- common/autotest_common.sh@10 -- # set +x 00:18:31.389 [2024-04-15 18:05:20.323327] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:18:31.389 [2024-04-15 18:05:20.323439] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:31.648 EAL: No free 2048 kB hugepages reported on node 1 00:18:31.648 [2024-04-15 18:05:20.407511] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:31.648 [2024-04-15 18:05:20.504711] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:31.648 [2024-04-15 18:05:20.504779] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:31.648 [2024-04-15 18:05:20.504796] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:31.648 [2024-04-15 18:05:20.504811] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:31.648 [2024-04-15 18:05:20.504823] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
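[Editor's note] nvmfappstart above boots nvmf_tgt inside the target namespace (-m 0x2 is the core mask, -e 0xFFFF the tracepoint group mask, -i 0 the shared-memory id) and then blocks in waitforlisten until the RPC socket answers. A simplified sketch of that start-and-wait handshake; rpc_get_methods is a standard SPDK RPC, but the retry loop here is an illustrative stand-in for the harness's waitforlisten:

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!

# poll the UNIX-domain RPC socket until the app is ready to accept commands
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt died during startup"; exit 1; }
    sleep 0.1
done
echo "nvmf_tgt (pid $nvmfpid) is listening on /var/tmp/spdk.sock"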
00:18:31.648 [2024-04-15 18:05:20.504856] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:31.906 18:05:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:31.906 18:05:20 -- common/autotest_common.sh@850 -- # return 0 00:18:31.906 18:05:20 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:31.906 18:05:20 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:31.906 18:05:20 -- common/autotest_common.sh@10 -- # set +x 00:18:31.906 18:05:20 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:31.906 18:05:20 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:31.906 18:05:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:31.906 18:05:20 -- common/autotest_common.sh@10 -- # set +x 00:18:31.906 [2024-04-15 18:05:20.654644] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:31.906 18:05:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:31.906 18:05:20 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:31.906 18:05:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:31.906 18:05:20 -- common/autotest_common.sh@10 -- # set +x 00:18:31.906 Malloc0 00:18:31.906 18:05:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:31.906 18:05:20 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:31.906 18:05:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:31.906 18:05:20 -- common/autotest_common.sh@10 -- # set +x 00:18:31.906 18:05:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:31.906 18:05:20 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:31.906 18:05:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:31.906 18:05:20 -- common/autotest_common.sh@10 -- # set +x 00:18:31.906 18:05:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:31.906 18:05:20 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:31.906 18:05:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:31.906 18:05:20 -- common/autotest_common.sh@10 -- # set +x 00:18:31.906 [2024-04-15 18:05:20.715893] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:31.906 18:05:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:31.906 18:05:20 -- target/queue_depth.sh@30 -- # bdevperf_pid=3320921 00:18:31.906 18:05:20 -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:18:31.906 18:05:20 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:31.906 18:05:20 -- target/queue_depth.sh@33 -- # waitforlisten 3320921 /var/tmp/bdevperf.sock 00:18:31.906 18:05:20 -- common/autotest_common.sh@817 -- # '[' -z 3320921 ']' 00:18:31.906 18:05:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:31.906 18:05:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:31.906 18:05:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:18:31.906 18:05:20 -- common/autotest_common.sh@826 -- # xtrace_disable
00:18:31.907 18:05:20 -- common/autotest_common.sh@10 -- # set +x
00:18:31.907 [2024-04-15 18:05:20.762529] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization...
00:18:31.907 [2024-04-15 18:05:20.762601] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3320921 ]
00:18:31.907 EAL: No free 2048 kB hugepages reported on node 1
00:18:31.907 [2024-04-15 18:05:20.830265] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:32.164 [2024-04-15 18:05:20.922493] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:18:32.164 18:05:21 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:18:32.164 18:05:21 -- common/autotest_common.sh@850 -- # return 0
00:18:32.164 18:05:21 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:18:32.164 18:05:21 -- common/autotest_common.sh@549 -- # xtrace_disable
00:18:32.164 18:05:21 -- common/autotest_common.sh@10 -- # set +x
00:18:32.422 NVMe0n1
00:18:32.422 18:05:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:18:32.422 18:05:21 -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:18:32.679 Running I/O for 10 seconds...
00:18:42.665
00:18:42.665 Latency(us)
00:18:42.665 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:42.665 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096)
00:18:42.665 Verification LBA range: start 0x0 length 0x4000
00:18:42.665 NVMe0n1 : 10.09 8251.64 32.23 0.00 0.00 123458.14 23981.32 81944.27
00:18:42.665 ===================================================================================================================
00:18:42.665 Total : 8251.64 32.23 0.00 0.00 123458.14 23981.32 81944.27
00:18:42.665 0
00:18:42.665 18:05:31 -- target/queue_depth.sh@39 -- # killprocess 3320921
00:18:42.665 18:05:31 -- common/autotest_common.sh@936 -- # '[' -z 3320921 ']'
00:18:42.665 18:05:31 -- common/autotest_common.sh@940 -- # kill -0 3320921
00:18:42.665 18:05:31 -- common/autotest_common.sh@941 -- # uname
00:18:42.665 18:05:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:18:42.665 18:05:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3320921
00:18:42.665 18:05:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:18:42.665 18:05:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:18:42.665 18:05:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3320921'
00:18:42.665 killing process with pid 3320921 18:05:31 -- common/autotest_common.sh@955 -- # kill 3320921
00:18:42.665 Received shutdown signal, test time was about 10.000000 seconds
00:18:42.665
00:18:42.665 Latency(us)
00:18:42.665 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:42.665 ===================================================================================================================
00:18:42.665 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:18:42.665 18:05:31 -- common/autotest_common.sh@960 -- # wait 3320921
00:18:42.925 18:05:31 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT
00:18:42.925 18:05:31 -- target/queue_depth.sh@43 -- # nvmftestfini
00:18:42.925 18:05:31 -- nvmf/common.sh@477 -- # nvmfcleanup
00:18:42.925 18:05:31 -- nvmf/common.sh@117 -- # sync
00:18:42.925 18:05:31 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:18:42.925 18:05:31 -- nvmf/common.sh@120 -- # set +e
00:18:42.925 18:05:31 -- nvmf/common.sh@121 -- # for i in {1..20}
00:18:42.925 18:05:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:18:42.925 rmmod nvme_tcp
00:18:42.925 rmmod nvme_fabrics
00:18:42.925 rmmod nvme_keyring
00:18:42.925 18:05:31 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:18:42.925 18:05:31 -- nvmf/common.sh@124 -- # set -e
00:18:42.925 18:05:31 -- nvmf/common.sh@125 -- # return 0
00:18:42.925 18:05:31 -- nvmf/common.sh@478 -- # '[' -n 3320897 ']'
00:18:42.925 18:05:31 -- nvmf/common.sh@479 -- # killprocess 3320897
00:18:42.925 18:05:31 -- common/autotest_common.sh@936 -- # '[' -z 3320897 ']'
00:18:42.925 18:05:31 -- common/autotest_common.sh@940 -- # kill -0 3320897
00:18:42.925 18:05:31 -- common/autotest_common.sh@941 -- # uname
00:18:42.925 18:05:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:18:42.925 18:05:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3320897
00:18:42.925 18:05:31 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:18:42.925 18:05:31 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:18:42.925 18:05:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3320897'
00:18:42.925 killing process with pid 3320897
00:18:42.925 18:05:31 -- common/autotest_common.sh@955 -- # kill 3320897
00:18:42.925 18:05:31 -- common/autotest_common.sh@960 -- # wait 3320897
00:18:43.183 18:05:32 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:18:43.183 18:05:32 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
00:18:43.443 18:05:32 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
00:18:43.443 18:05:32 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:18:43.443 18:05:32 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:18:43.443 18:05:32 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:18:43.443 18:05:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:18:43.443 18:05:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:18:45.344 18:05:34 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:18:45.344
00:18:45.344 real 0m16.506s
00:18:45.344 user 0m22.625s
00:18:45.344 sys 0m3.609s
00:18:45.344 18:05:34 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:18:45.344 18:05:34 -- common/autotest_common.sh@10 -- # set +x
00:18:45.344 ************************************
00:18:45.344 END TEST nvmf_queue_depth
00:18:45.344 ************************************
00:18:45.344 18:05:34 -- nvmf/nvmf.sh@52 -- # run_test nvmf_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp
00:18:45.344 18:05:34 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:18:45.344 18:05:34 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:18:45.344 18:05:34 -- common/autotest_common.sh@10 -- # set +x
00:18:45.603 ************************************
00:18:45.603 START TEST nvmf_multipath
00:18:45.603 ************************************
00:18:45.603 18:05:34 -- common/autotest_common.sh@1111 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:18:45.603 * Looking for test storage... 00:18:45.603 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:45.603 18:05:34 -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:45.603 18:05:34 -- nvmf/common.sh@7 -- # uname -s 00:18:45.603 18:05:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:45.603 18:05:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:45.603 18:05:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:45.603 18:05:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:45.603 18:05:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:45.603 18:05:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:45.603 18:05:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:45.603 18:05:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:45.603 18:05:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:45.603 18:05:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:45.603 18:05:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:45.603 18:05:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:18:45.603 18:05:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:45.603 18:05:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:45.603 18:05:34 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:45.603 18:05:34 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:45.603 18:05:34 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:45.603 18:05:34 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:45.603 18:05:34 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:45.603 18:05:34 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:45.603 18:05:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.603 18:05:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.603 18:05:34 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.603 18:05:34 -- paths/export.sh@5 -- # export PATH 00:18:45.603 18:05:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.603 18:05:34 -- nvmf/common.sh@47 -- # : 0 00:18:45.604 18:05:34 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:45.604 18:05:34 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:45.604 18:05:34 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:45.604 18:05:34 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:45.604 18:05:34 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:45.604 18:05:34 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:45.604 18:05:34 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:45.604 18:05:34 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:45.604 18:05:34 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:45.604 18:05:34 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:45.604 18:05:34 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:18:45.604 18:05:34 -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:45.604 18:05:34 -- target/multipath.sh@43 -- # nvmftestinit 00:18:45.604 18:05:34 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:45.604 18:05:34 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:45.604 18:05:34 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:45.604 18:05:34 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:45.604 18:05:34 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:45.604 18:05:34 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:45.604 18:05:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:45.604 18:05:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:45.604 18:05:34 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:18:45.604 18:05:34 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:18:45.604 18:05:34 -- nvmf/common.sh@285 -- # xtrace_disable 00:18:45.604 18:05:34 -- common/autotest_common.sh@10 -- # set +x 00:18:48.144 18:05:36 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:48.144 18:05:36 -- nvmf/common.sh@291 -- # pci_devs=() 00:18:48.144 18:05:36 -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:48.144 18:05:36 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:48.144 18:05:36 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:48.144 18:05:36 -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:48.144 18:05:36 -- 
nvmf/common.sh@293 -- # local -A pci_drivers
00:18:48.144 18:05:36 -- nvmf/common.sh@295 -- # net_devs=()
00:18:48.144 18:05:36 -- nvmf/common.sh@295 -- # local -ga net_devs
00:18:48.144 18:05:36 -- nvmf/common.sh@296 -- # e810=()
00:18:48.144 18:05:36 -- nvmf/common.sh@296 -- # local -ga e810
00:18:48.144 18:05:36 -- nvmf/common.sh@297 -- # x722=()
00:18:48.144 18:05:36 -- nvmf/common.sh@297 -- # local -ga x722
00:18:48.144 18:05:36 -- nvmf/common.sh@298 -- # mlx=()
00:18:48.144 18:05:36 -- nvmf/common.sh@298 -- # local -ga mlx
00:18:48.144 18:05:36 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:18:48.144 18:05:36 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:18:48.144 18:05:36 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:18:48.144 18:05:36 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:18:48.144 18:05:36 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:18:48.144 18:05:36 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:18:48.144 18:05:36 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:18:48.144 18:05:36 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:18:48.144 18:05:36 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:18:48.144 18:05:36 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:18:48.144 18:05:36 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:18:48.144 18:05:36 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:18:48.144 18:05:36 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:18:48.144 18:05:36 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:18:48.144 18:05:36 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:18:48.144 18:05:36 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:18:48.144 18:05:36 -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:18:48.144 18:05:36 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:18:48.144 18:05:36 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)'
00:18:48.144 Found 0000:84:00.0 (0x8086 - 0x159b)
00:18:48.144 18:05:36 -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:18:48.144 18:05:36 -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:18:48.144 18:05:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:18:48.144 18:05:36 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:18:48.144 18:05:36 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:18:48.144 18:05:36 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:18:48.144 18:05:36 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)'
00:18:48.144 Found 0000:84:00.1 (0x8086 - 0x159b)
00:18:48.144 18:05:36 -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:18:48.144 18:05:36 -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:18:48.144 18:05:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:18:48.144 18:05:36 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:18:48.144 18:05:36 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:18:48.144 18:05:36 -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:18:48.144 18:05:36 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:18:48.144 18:05:36 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:18:48.144 18:05:36 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:18:48.144 18:05:36 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:18:48.144 18:05:36 -- nvmf/common.sh@384 -- # (( 1 == 0 ))
00:18:48.144 18:05:36 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:18:48.144 18:05:36 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0'
00:18:48.144 Found net devices under 0000:84:00.0: cvl_0_0
00:18:48.144 18:05:36 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}")
00:18:48.144 18:05:36 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:18:48.144 18:05:36 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:18:48.144 18:05:36 -- nvmf/common.sh@384 -- # (( 1 == 0 ))
00:18:48.144 18:05:36 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:18:48.144 18:05:36 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1'
00:18:48.144 Found net devices under 0000:84:00.1: cvl_0_1
00:18:48.144 18:05:36 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}")
00:18:48.144 18:05:36 -- nvmf/common.sh@393 -- # (( 2 == 0 ))
00:18:48.144 18:05:36 -- nvmf/common.sh@403 -- # is_hw=yes
00:18:48.144 18:05:36 -- nvmf/common.sh@405 -- # [[ yes == yes ]]
00:18:48.144 18:05:36 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]]
00:18:48.144 18:05:36 -- nvmf/common.sh@407 -- # nvmf_tcp_init
00:18:48.144 18:05:36 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:18:48.144 18:05:36 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:18:48.144 18:05:36 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:18:48.144 18:05:36 -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:18:48.144 18:05:36 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:18:48.144 18:05:36 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:18:48.144 18:05:36 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:18:48.144 18:05:36 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:18:48.144 18:05:36 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:18:48.144 18:05:36 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:18:48.144 18:05:36 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:18:48.144 18:05:36 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:18:48.144 18:05:36 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:18:48.144 18:05:36 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:18:48.144 18:05:36 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:18:48.144 18:05:36 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:18:48.144 18:05:36 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:18:48.144 18:05:36 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:18:48.144 18:05:36 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:18:48.144 18:05:36 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:18:48.144 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:18:48.144 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms
00:18:48.144
00:18:48.144 --- 10.0.0.2 ping statistics ---
00:18:48.144 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:18:48.144 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms
00:18:48.144 18:05:36 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:18:48.144 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:18:48.144 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms 00:18:48.144 00:18:48.144 --- 10.0.0.1 ping statistics --- 00:18:48.144 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:48.144 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:18:48.145 18:05:36 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:48.145 18:05:36 -- nvmf/common.sh@411 -- # return 0 00:18:48.145 18:05:36 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:48.145 18:05:36 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:48.145 18:05:36 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:18:48.145 18:05:36 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:18:48.145 18:05:36 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:48.145 18:05:36 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:18:48.145 18:05:36 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:18:48.145 18:05:36 -- target/multipath.sh@45 -- # '[' -z ']' 00:18:48.145 18:05:36 -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:18:48.145 only one NIC for nvmf test 00:18:48.145 18:05:36 -- target/multipath.sh@47 -- # nvmftestfini 00:18:48.145 18:05:36 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:48.145 18:05:36 -- nvmf/common.sh@117 -- # sync 00:18:48.145 18:05:36 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:48.145 18:05:36 -- nvmf/common.sh@120 -- # set +e 00:18:48.145 18:05:36 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:48.145 18:05:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:48.145 rmmod nvme_tcp 00:18:48.145 rmmod nvme_fabrics 00:18:48.145 rmmod nvme_keyring 00:18:48.145 18:05:36 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:48.145 18:05:36 -- nvmf/common.sh@124 -- # set -e 00:18:48.145 18:05:36 -- nvmf/common.sh@125 -- # return 0 00:18:48.145 18:05:36 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:18:48.145 18:05:36 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:48.145 18:05:36 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:48.145 18:05:36 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:48.145 18:05:36 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:48.145 18:05:36 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:48.145 18:05:36 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:48.145 18:05:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:48.145 18:05:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:50.046 18:05:38 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:50.046 18:05:38 -- target/multipath.sh@48 -- # exit 0 00:18:50.046 18:05:38 -- target/multipath.sh@1 -- # nvmftestfini 00:18:50.046 18:05:38 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:50.046 18:05:38 -- nvmf/common.sh@117 -- # sync 00:18:50.046 18:05:38 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:50.046 18:05:38 -- nvmf/common.sh@120 -- # set +e 00:18:50.046 18:05:38 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:50.046 18:05:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:50.046 18:05:38 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:50.046 18:05:38 -- nvmf/common.sh@124 -- # set -e 00:18:50.046 18:05:38 -- nvmf/common.sh@125 -- # return 0 00:18:50.046 18:05:38 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:18:50.046 18:05:38 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:50.046 18:05:38 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:50.046 18:05:38 -- nvmf/common.sh@485 -- # 
nvmf_tcp_fini 00:18:50.046 18:05:38 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:50.046 18:05:38 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:50.046 18:05:38 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:50.046 18:05:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:50.046 18:05:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:50.046 18:05:38 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:50.046 00:18:50.046 real 0m4.586s 00:18:50.046 user 0m0.847s 00:18:50.046 sys 0m1.745s 00:18:50.046 18:05:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:50.046 18:05:38 -- common/autotest_common.sh@10 -- # set +x 00:18:50.046 ************************************ 00:18:50.046 END TEST nvmf_multipath 00:18:50.046 ************************************ 00:18:50.046 18:05:38 -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:18:50.046 18:05:38 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:50.046 18:05:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:50.046 18:05:38 -- common/autotest_common.sh@10 -- # set +x 00:18:50.305 ************************************ 00:18:50.305 START TEST nvmf_zcopy 00:18:50.305 ************************************ 00:18:50.305 18:05:39 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:18:50.305 * Looking for test storage... 00:18:50.305 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:50.305 18:05:39 -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:50.305 18:05:39 -- nvmf/common.sh@7 -- # uname -s 00:18:50.305 18:05:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:50.305 18:05:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:50.305 18:05:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:50.305 18:05:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:50.305 18:05:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:50.305 18:05:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:50.305 18:05:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:50.305 18:05:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:50.305 18:05:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:50.305 18:05:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:50.305 18:05:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:50.305 18:05:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:18:50.305 18:05:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:50.305 18:05:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:50.305 18:05:39 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:50.305 18:05:39 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:50.305 18:05:39 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:50.305 18:05:39 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:50.305 18:05:39 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:50.305 18:05:39 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:50.305 
18:05:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:50.305 18:05:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:50.305 18:05:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:50.305 18:05:39 -- paths/export.sh@5 -- # export PATH 00:18:50.305 18:05:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:50.305 18:05:39 -- nvmf/common.sh@47 -- # : 0 00:18:50.305 18:05:39 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:50.305 18:05:39 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:50.305 18:05:39 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:50.305 18:05:39 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:50.305 18:05:39 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:50.305 18:05:39 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:50.305 18:05:39 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:50.305 18:05:39 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:50.305 18:05:39 -- target/zcopy.sh@12 -- # nvmftestinit 00:18:50.305 18:05:39 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:50.305 18:05:39 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:50.305 18:05:39 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:50.305 18:05:39 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:50.305 18:05:39 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:50.305 18:05:39 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:50.305 18:05:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
00:18:50.305 18:05:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:18:50.305 18:05:39 -- nvmf/common.sh@403 -- # [[ phy != virt ]]
00:18:50.305 18:05:39 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs
00:18:50.305 18:05:39 -- nvmf/common.sh@285 -- # xtrace_disable
00:18:50.305 18:05:39 -- common/autotest_common.sh@10 -- # set +x
00:18:52.836 18:05:41 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci
00:18:52.836 18:05:41 -- nvmf/common.sh@291 -- # pci_devs=()
00:18:52.836 18:05:41 -- nvmf/common.sh@291 -- # local -a pci_devs
00:18:52.836 18:05:41 -- nvmf/common.sh@292 -- # pci_net_devs=()
00:18:52.836 18:05:41 -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:18:52.836 18:05:41 -- nvmf/common.sh@293 -- # pci_drivers=()
00:18:52.836 18:05:41 -- nvmf/common.sh@293 -- # local -A pci_drivers
00:18:52.836 18:05:41 -- nvmf/common.sh@295 -- # net_devs=()
00:18:52.836 18:05:41 -- nvmf/common.sh@295 -- # local -ga net_devs
00:18:52.836 18:05:41 -- nvmf/common.sh@296 -- # e810=()
00:18:52.836 18:05:41 -- nvmf/common.sh@296 -- # local -ga e810
00:18:52.836 18:05:41 -- nvmf/common.sh@297 -- # x722=()
00:18:52.836 18:05:41 -- nvmf/common.sh@297 -- # local -ga x722
00:18:52.836 18:05:41 -- nvmf/common.sh@298 -- # mlx=()
00:18:52.836 18:05:41 -- nvmf/common.sh@298 -- # local -ga mlx
00:18:52.836 18:05:41 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:18:52.836 18:05:41 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:18:52.836 18:05:41 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:18:52.836 18:05:41 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:18:52.836 18:05:41 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:18:52.836 18:05:41 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:18:52.836 18:05:41 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:18:52.836 18:05:41 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:18:52.836 18:05:41 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:18:52.836 18:05:41 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:18:52.836 18:05:41 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:18:52.836 18:05:41 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:18:52.836 18:05:41 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:18:52.836 18:05:41 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:18:52.836 18:05:41 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:18:52.836 18:05:41 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:18:52.836 18:05:41 -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:18:52.836 18:05:41 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:18:52.836 18:05:41 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)'
00:18:52.836 Found 0000:84:00.0 (0x8086 - 0x159b)
00:18:52.836 18:05:41 -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:18:52.836 18:05:41 -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:18:52.836 18:05:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:18:52.836 18:05:41 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:18:52.836 18:05:41 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:18:52.836 18:05:41 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:18:52.836 18:05:41 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)'
00:18:52.836 Found 0000:84:00.1 (0x8086 - 0x159b)
00:18:52.836 18:05:41 -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:18:52.836 18:05:41 -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:18:52.836 18:05:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:18:52.836 18:05:41 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:18:52.836 18:05:41 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:18:52.836 18:05:41 -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:18:52.836 18:05:41 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:18:52.836 18:05:41 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:18:52.836 18:05:41 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:18:52.836 18:05:41 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:18:52.836 18:05:41 -- nvmf/common.sh@384 -- # (( 1 == 0 ))
00:18:52.836 18:05:41 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:18:52.836 18:05:41 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0'
00:18:52.836 Found net devices under 0000:84:00.0: cvl_0_0
00:18:52.836 18:05:41 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}")
00:18:52.836 18:05:41 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:18:52.836 18:05:41 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:18:52.836 18:05:41 -- nvmf/common.sh@384 -- # (( 1 == 0 ))
00:18:52.836 18:05:41 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:18:52.836 18:05:41 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1'
00:18:52.836 Found net devices under 0000:84:00.1: cvl_0_1
00:18:52.836 18:05:41 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}")
00:18:52.836 18:05:41 -- nvmf/common.sh@393 -- # (( 2 == 0 ))
00:18:52.836 18:05:41 -- nvmf/common.sh@403 -- # is_hw=yes
00:18:52.836 18:05:41 -- nvmf/common.sh@405 -- # [[ yes == yes ]]
00:18:52.836 18:05:41 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]]
00:18:52.836 18:05:41 -- nvmf/common.sh@407 -- # nvmf_tcp_init
00:18:52.836 18:05:41 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:18:52.836 18:05:41 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:18:52.836 18:05:41 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:18:52.836 18:05:41 -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:18:52.836 18:05:41 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:18:52.836 18:05:41 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:18:52.836 18:05:41 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:18:52.836 18:05:41 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:18:52.836 18:05:41 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:18:52.836 18:05:41 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:18:52.836 18:05:41 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:18:52.836 18:05:41 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:18:52.836 18:05:41 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:18:52.836 18:05:41 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:18:52.836 18:05:41 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:18:52.836 18:05:41 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:18:52.836 18:05:41 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:18:52.836 18:05:41 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:18:52.836 
18:05:41 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:52.836 18:05:41 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:52.836 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:52.836 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.216 ms 00:18:52.836 00:18:52.836 --- 10.0.0.2 ping statistics --- 00:18:52.836 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:52.836 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:18:52.836 18:05:41 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:52.836 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:52.836 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms 00:18:52.836 00:18:52.836 --- 10.0.0.1 ping statistics --- 00:18:52.836 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:52.836 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:18:52.836 18:05:41 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:52.836 18:05:41 -- nvmf/common.sh@411 -- # return 0 00:18:52.836 18:05:41 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:52.836 18:05:41 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:52.836 18:05:41 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:18:52.836 18:05:41 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:18:52.836 18:05:41 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:52.836 18:05:41 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:18:52.836 18:05:41 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:18:52.836 18:05:41 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:18:52.836 18:05:41 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:52.836 18:05:41 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:52.836 18:05:41 -- common/autotest_common.sh@10 -- # set +x 00:18:52.836 18:05:41 -- nvmf/common.sh@470 -- # nvmfpid=3326146 00:18:52.836 18:05:41 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:52.836 18:05:41 -- nvmf/common.sh@471 -- # waitforlisten 3326146 00:18:52.836 18:05:41 -- common/autotest_common.sh@817 -- # '[' -z 3326146 ']' 00:18:52.836 18:05:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:52.836 18:05:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:52.836 18:05:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:52.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:52.836 18:05:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:52.836 18:05:41 -- common/autotest_common.sh@10 -- # set +x 00:18:52.836 [2024-04-15 18:05:41.683272] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:18:52.836 [2024-04-15 18:05:41.683361] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:52.836 EAL: No free 2048 kB hugepages reported on node 1 00:18:52.836 [2024-04-15 18:05:41.778657] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:53.095 [2024-04-15 18:05:41.887046] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
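[Editor's note] The ping exchanges earlier in this block are the harness verifying the namespace split it builds for every suite: the target port lives in cvl_0_0_ns_spdk with 10.0.0.2, the initiator port stays in the root namespace with 10.0.0.1, and TCP port 4420 is opened for NVMe/TCP. Condensed from the records above into a standalone sketch:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # move the target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side (root namespace)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
ping -c 1 10.0.0.2                                      # root namespace -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1        # target namespace -> root namespace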
00:18:53.095 [2024-04-15 18:05:41.887137] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:53.095 [2024-04-15 18:05:41.887171] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:53.095 [2024-04-15 18:05:41.887200] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:53.095 [2024-04-15 18:05:41.887225] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:53.095 [2024-04-15 18:05:41.887272] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:53.354 18:05:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:53.354 18:05:42 -- common/autotest_common.sh@850 -- # return 0 00:18:53.354 18:05:42 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:53.354 18:05:42 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:53.354 18:05:42 -- common/autotest_common.sh@10 -- # set +x 00:18:53.354 18:05:42 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:53.354 18:05:42 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:18:53.354 18:05:42 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:18:53.354 18:05:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:53.354 18:05:42 -- common/autotest_common.sh@10 -- # set +x 00:18:53.354 [2024-04-15 18:05:42.098239] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:53.354 18:05:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:53.354 18:05:42 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:53.354 18:05:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:53.354 18:05:42 -- common/autotest_common.sh@10 -- # set +x 00:18:53.354 18:05:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:53.354 18:05:42 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:53.354 18:05:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:53.354 18:05:42 -- common/autotest_common.sh@10 -- # set +x 00:18:53.354 [2024-04-15 18:05:42.114473] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:53.354 18:05:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:53.354 18:05:42 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:53.354 18:05:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:53.354 18:05:42 -- common/autotest_common.sh@10 -- # set +x 00:18:53.354 18:05:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:53.354 18:05:42 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:18:53.354 18:05:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:53.354 18:05:42 -- common/autotest_common.sh@10 -- # set +x 00:18:53.354 malloc0 00:18:53.354 18:05:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:53.354 18:05:42 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:53.354 18:05:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:53.354 18:05:42 -- common/autotest_common.sh@10 -- # set +x 00:18:53.354 18:05:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:53.354 18:05:42 -- target/zcopy.sh@33 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:18:53.354 18:05:42 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:18:53.354 18:05:42 -- nvmf/common.sh@521 -- # config=() 00:18:53.354 18:05:42 -- nvmf/common.sh@521 -- # local subsystem config 00:18:53.354 18:05:42 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:53.354 18:05:42 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:53.354 { 00:18:53.354 "params": { 00:18:53.354 "name": "Nvme$subsystem", 00:18:53.354 "trtype": "$TEST_TRANSPORT", 00:18:53.354 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:53.354 "adrfam": "ipv4", 00:18:53.354 "trsvcid": "$NVMF_PORT", 00:18:53.354 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:53.354 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:53.354 "hdgst": ${hdgst:-false}, 00:18:53.354 "ddgst": ${ddgst:-false} 00:18:53.354 }, 00:18:53.354 "method": "bdev_nvme_attach_controller" 00:18:53.354 } 00:18:53.354 EOF 00:18:53.354 )") 00:18:53.354 18:05:42 -- nvmf/common.sh@543 -- # cat 00:18:53.354 18:05:42 -- nvmf/common.sh@545 -- # jq . 00:18:53.354 18:05:42 -- nvmf/common.sh@546 -- # IFS=, 00:18:53.354 18:05:42 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:18:53.354 "params": { 00:18:53.354 "name": "Nvme1", 00:18:53.354 "trtype": "tcp", 00:18:53.354 "traddr": "10.0.0.2", 00:18:53.354 "adrfam": "ipv4", 00:18:53.355 "trsvcid": "4420", 00:18:53.355 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:53.355 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:53.355 "hdgst": false, 00:18:53.355 "ddgst": false 00:18:53.355 }, 00:18:53.355 "method": "bdev_nvme_attach_controller" 00:18:53.355 }' 00:18:53.355 [2024-04-15 18:05:42.192738] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:18:53.355 [2024-04-15 18:05:42.192835] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3326282 ] 00:18:53.355 EAL: No free 2048 kB hugepages reported on node 1 00:18:53.355 [2024-04-15 18:05:42.263265] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:53.613 [2024-04-15 18:05:42.358115] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:53.613 [2024-04-15 18:05:42.366954] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:18:53.872 Running I/O for 10 seconds... 
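The provisioning just traced is plain SPDK JSON-RPC; rpc_cmd is the autotest wrapper around the same calls, so an equivalent by-hand sequence against this nvmf_tgt would look roughly as follows (the scripts/rpc.py form is an assumption; the harness drives the identical RPCs through its own wrapper):

  # zero-copy TCP transport, then one subsystem with a malloc bdev attached as NSID 1
  scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0    # 32 MiB, 4 KiB block size
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

bdevperf then attaches from the initiator side using exactly the JSON printed above (trtype tcp, traddr 10.0.0.2, trsvcid 4420), fed in through the /dev/fd/62 process substitution rather than a file on disk.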
00:19:03.842
00:19:03.842 Latency(us)
00:19:03.842 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:03.842 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:19:03.842 Verification LBA range: start 0x0 length 0x1000
00:19:03.842 Nvme1n1 : 10.02 5383.82 42.06 0.00 0.00 23711.61 3762.25 33981.63
00:19:03.842 ===================================================================================================================
00:19:03.842 Total : 5383.82 42.06 0.00 0.00 23711.61 3762.25 33981.63
00:19:04.101 18:05:52 -- target/zcopy.sh@39 -- # perfpid=3327472 00:19:04.101 18:05:52 -- target/zcopy.sh@41 -- # xtrace_disable 00:19:04.101 18:05:52 -- common/autotest_common.sh@10 -- # set +x 00:19:04.101 18:05:52 -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:19:04.101 18:05:52 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:19:04.101 18:05:52 -- nvmf/common.sh@521 -- # config=() 00:19:04.101 18:05:52 -- nvmf/common.sh@521 -- # local subsystem config 00:19:04.101 18:05:52 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:04.101 18:05:52 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:04.101 { 00:19:04.101 "params": { 00:19:04.101 "name": "Nvme$subsystem", 00:19:04.101 "trtype": "$TEST_TRANSPORT", 00:19:04.101 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:04.101 "adrfam": "ipv4", 00:19:04.101 "trsvcid": "$NVMF_PORT", 00:19:04.101 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:04.101 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:04.101 "hdgst": ${hdgst:-false}, 00:19:04.101 "ddgst": ${ddgst:-false} 00:19:04.101 }, 00:19:04.101 "method": "bdev_nvme_attach_controller" 00:19:04.101 } 00:19:04.101 EOF 00:19:04.101 )") 00:19:04.101 [2024-04-15 18:05:52.878336] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.101 [2024-04-15 18:05:52.878381] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.101 18:05:52 -- nvmf/common.sh@543 -- # cat 00:19:04.101 18:05:52 -- nvmf/common.sh@545 -- # jq .
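A quick consistency check on the 10-second verify run: at an 8 KiB I/O size the MiB/s column is simply IOPS/128, and 5383.82 / 128 = 42.06 MiB/s matches the reported throughput; likewise Little's law at queue depth 128 predicts an average latency of 128 / 5383.82 s ≈ 23.8 ms, in line with the measured 23711.61 us.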
00:19:04.101 18:05:52 -- nvmf/common.sh@546 -- # IFS=, 00:19:04.101 [2024-04-15 18:05:52.886305] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.101 [2024-04-15 18:05:52.886341] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.101 18:05:52 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:19:04.101 "params": { 00:19:04.101 "name": "Nvme1", 00:19:04.101 "trtype": "tcp", 00:19:04.101 "traddr": "10.0.0.2", 00:19:04.101 "adrfam": "ipv4", 00:19:04.101 "trsvcid": "4420", 00:19:04.101 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:04.101 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:04.101 "hdgst": false, 00:19:04.101 "ddgst": false 00:19:04.101 }, 00:19:04.101 "method": "bdev_nvme_attach_controller" 00:19:04.101 }' 00:19:04.101 [2024-04-15 18:05:52.894333] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.101 [2024-04-15 18:05:52.894358] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.101 [2024-04-15 18:05:52.902346] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.101 [2024-04-15 18:05:52.902371] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.101 [2024-04-15 18:05:52.910373] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.101 [2024-04-15 18:05:52.910399] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.101 [2024-04-15 18:05:52.918398] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.101 [2024-04-15 18:05:52.918422] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.101 [2024-04-15 18:05:52.918947] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
00:19:04.101 [2024-04-15 18:05:52.919023] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3327472 ] 00:19:04.101 [2024-04-15 18:05:52.926412] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.101 [2024-04-15 18:05:52.926437] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.101 [2024-04-15 18:05:52.934440] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.101 [2024-04-15 18:05:52.934465] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.101 [2024-04-15 18:05:52.942455] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.101 [2024-04-15 18:05:52.942480] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.101 [2024-04-15 18:05:52.950476] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.101 [2024-04-15 18:05:52.950500] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.101 EAL: No free 2048 kB hugepages reported on node 1 00:19:04.101 [2024-04-15 18:05:52.958500] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.101 [2024-04-15 18:05:52.958524] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.101 [2024-04-15 18:05:52.966522] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.101 [2024-04-15 18:05:52.966548] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.101 [2024-04-15 18:05:52.974544] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.101 [2024-04-15 18:05:52.974568] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.101 [2024-04-15 18:05:52.982568] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.101 [2024-04-15 18:05:52.982592] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.101 [2024-04-15 18:05:52.988116] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:04.101 [2024-04-15 18:05:52.990590] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.101 [2024-04-15 18:05:52.990614] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.101 [2024-04-15 18:05:52.998643] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.101 [2024-04-15 18:05:52.998681] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.101 [2024-04-15 18:05:53.006646] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.101 [2024-04-15 18:05:53.006675] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.101 [2024-04-15 18:05:53.014658] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.101 [2024-04-15 18:05:53.014683] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.101 [2024-04-15 18:05:53.022676] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.101 [2024-04-15 18:05:53.022701] 
nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.101 [2024-04-15 18:05:53.030701] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.101 [2024-04-15 18:05:53.030725] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.101 [2024-04-15 18:05:53.038727] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.101 [2024-04-15 18:05:53.038752] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.101 [2024-04-15 18:05:53.046774] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.101 [2024-04-15 18:05:53.046810] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.101 [2024-04-15 18:05:53.054782] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.360 [2024-04-15 18:05:53.054813] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.360 [2024-04-15 18:05:53.062793] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.360 [2024-04-15 18:05:53.062820] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.360 [2024-04-15 18:05:53.070811] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.360 [2024-04-15 18:05:53.070836] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.360 [2024-04-15 18:05:53.078834] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.360 [2024-04-15 18:05:53.078859] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.360 [2024-04-15 18:05:53.083650] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:04.360 [2024-04-15 18:05:53.086854] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.360 [2024-04-15 18:05:53.086880] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.361 [2024-04-15 18:05:53.092446] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:19:04.361 [2024-04-15 18:05:53.094875] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.361 [2024-04-15 18:05:53.094906] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.361 [2024-04-15 18:05:53.102917] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.361 [2024-04-15 18:05:53.102954] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.361 [2024-04-15 18:05:53.110939] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.361 [2024-04-15 18:05:53.110977] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.361 [2024-04-15 18:05:53.118965] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.361 [2024-04-15 18:05:53.119002] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.361 [2024-04-15 18:05:53.126986] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.361 [2024-04-15 18:05:53.127024] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.361 [2024-04-15 18:05:53.135010] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:19:04.361 [2024-04-15 18:05:53.135048] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.361 [2024-04-15 18:05:53.143034] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.361 [2024-04-15 18:05:53.143080] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.361 [2024-04-15 18:05:53.151047] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.361 [2024-04-15 18:05:53.151088] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.361 [2024-04-15 18:05:53.159054] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.361 [2024-04-15 18:05:53.159087] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.361 [2024-04-15 18:05:53.167105] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.361 [2024-04-15 18:05:53.167139] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.361 [2024-04-15 18:05:53.175128] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.361 [2024-04-15 18:05:53.175164] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.361 [2024-04-15 18:05:53.183133] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.361 [2024-04-15 18:05:53.183159] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.361 [2024-04-15 18:05:53.191148] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.361 [2024-04-15 18:05:53.191174] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.361 [2024-04-15 18:05:53.199180] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.361 [2024-04-15 18:05:53.199205] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.361 [2024-04-15 18:05:53.207207] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.361 [2024-04-15 18:05:53.207237] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.361 [2024-04-15 18:05:53.215227] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.361 [2024-04-15 18:05:53.215255] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.361 [2024-04-15 18:05:53.223249] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.361 [2024-04-15 18:05:53.223277] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.361 [2024-04-15 18:05:53.231268] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.361 [2024-04-15 18:05:53.231295] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.361 [2024-04-15 18:05:53.239292] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.361 [2024-04-15 18:05:53.239317] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.361 [2024-04-15 18:05:53.247316] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.361 [2024-04-15 18:05:53.247354] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:19:04.361 [2024-04-15 18:05:53.255340] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.361 [2024-04-15 18:05:53.255364] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.361 [2024-04-15 18:05:53.263359] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.361 [2024-04-15 18:05:53.263384] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.361 [2024-04-15 18:05:53.271390] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.361 [2024-04-15 18:05:53.271418] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.361 [2024-04-15 18:05:53.279410] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.361 [2024-04-15 18:05:53.279438] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.361 [2024-04-15 18:05:53.287436] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.361 [2024-04-15 18:05:53.287464] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.361 [2024-04-15 18:05:53.295450] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.361 [2024-04-15 18:05:53.295476] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.361 [2024-04-15 18:05:53.303481] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.361 [2024-04-15 18:05:53.303511] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.361 [2024-04-15 18:05:53.311502] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.361 [2024-04-15 18:05:53.311536] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.361 Running I/O for 5 seconds... 
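Everything from here to the end of the section is one retry loop: while the second bdevperf job (randrw, 50% reads, 5 s, queue depth 128, 8 KiB I/O) runs, the test keeps re-issuing the add-namespace RPC through the nvmf_rpc_ns_paused path, and every attempt fails with the same pair of errors because malloc0 already occupies NSID 1 on cnode1. A single failing attempt reproduces the pair (rpc.py form is an assumption, as above):

  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  # subsystem.c: Requested NSID 1 already in use
  # nvmf_rpc.c:  Unable to add namespace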
00:19:04.620 [2024-04-15 18:05:53.319527] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.620 [2024-04-15 18:05:53.319557] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.620 [2024-04-15 18:05:53.334659] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.620 [2024-04-15 18:05:53.334691] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.620 [2024-04-15 18:05:53.346993] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.620 [2024-04-15 18:05:53.347025] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.620 [2024-04-15 18:05:53.358514] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.620 [2024-04-15 18:05:53.358544] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.620 [2024-04-15 18:05:53.370357] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.620 [2024-04-15 18:05:53.370387] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.620 [2024-04-15 18:05:53.382548] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.620 [2024-04-15 18:05:53.382578] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.620 [2024-04-15 18:05:53.395236] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.620 [2024-04-15 18:05:53.395267] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.620 [2024-04-15 18:05:53.407082] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.620 [2024-04-15 18:05:53.407113] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.620 [2024-04-15 18:05:53.418940] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.620 [2024-04-15 18:05:53.418970] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.620 [2024-04-15 18:05:53.431034] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.620 [2024-04-15 18:05:53.431075] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.620 [2024-04-15 18:05:53.442906] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.620 [2024-04-15 18:05:53.442937] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.620 [2024-04-15 18:05:53.454667] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.620 [2024-04-15 18:05:53.454697] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.620 [2024-04-15 18:05:53.467011] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.620 [2024-04-15 18:05:53.467042] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.620 [2024-04-15 18:05:53.478363] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.621 [2024-04-15 18:05:53.478394] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.621 [2024-04-15 18:05:53.489874] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.621 
[2024-04-15 18:05:53.489904] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.621 [2024-04-15 18:05:53.501793] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.621 [2024-04-15 18:05:53.501823] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.621 [2024-04-15 18:05:53.513310] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.621 [2024-04-15 18:05:53.513341] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.621 [2024-04-15 18:05:53.525424] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.621 [2024-04-15 18:05:53.525456] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.621 [2024-04-15 18:05:53.537648] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.621 [2024-04-15 18:05:53.537678] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.621 [2024-04-15 18:05:53.549629] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.621 [2024-04-15 18:05:53.549659] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.621 [2024-04-15 18:05:53.561259] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.621 [2024-04-15 18:05:53.561289] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.621 [2024-04-15 18:05:53.573378] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.621 [2024-04-15 18:05:53.573409] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.879 [2024-04-15 18:05:53.585006] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.879 [2024-04-15 18:05:53.585038] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.879 [2024-04-15 18:05:53.596654] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.879 [2024-04-15 18:05:53.596685] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.879 [2024-04-15 18:05:53.608509] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.879 [2024-04-15 18:05:53.608539] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.879 [2024-04-15 18:05:53.620167] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.879 [2024-04-15 18:05:53.620197] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.879 [2024-04-15 18:05:53.631699] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.879 [2024-04-15 18:05:53.631730] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.879 [2024-04-15 18:05:53.642919] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.879 [2024-04-15 18:05:53.642950] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.879 [2024-04-15 18:05:53.654474] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.879 [2024-04-15 18:05:53.654505] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.879 [2024-04-15 18:05:53.666242] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.879 [2024-04-15 18:05:53.666273] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.879 [2024-04-15 18:05:53.677973] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.879 [2024-04-15 18:05:53.678003] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.879 [2024-04-15 18:05:53.690227] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.879 [2024-04-15 18:05:53.690257] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.879 [2024-04-15 18:05:53.701868] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.879 [2024-04-15 18:05:53.701899] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.879 [2024-04-15 18:05:53.713873] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.879 [2024-04-15 18:05:53.713903] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.879 [2024-04-15 18:05:53.726407] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.879 [2024-04-15 18:05:53.726437] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.879 [2024-04-15 18:05:53.738580] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.879 [2024-04-15 18:05:53.738610] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.879 [2024-04-15 18:05:53.750656] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.879 [2024-04-15 18:05:53.750687] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.879 [2024-04-15 18:05:53.762805] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.879 [2024-04-15 18:05:53.762836] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.879 [2024-04-15 18:05:53.774520] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.879 [2024-04-15 18:05:53.774551] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.879 [2024-04-15 18:05:53.786947] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.879 [2024-04-15 18:05:53.786977] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.879 [2024-04-15 18:05:53.798852] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.879 [2024-04-15 18:05:53.798882] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.879 [2024-04-15 18:05:53.810645] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.879 [2024-04-15 18:05:53.810676] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.879 [2024-04-15 18:05:53.822450] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.879 [2024-04-15 18:05:53.822481] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.138 [2024-04-15 18:05:53.834538] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.138 [2024-04-15 18:05:53.834570] 
nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.138 [2024-04-15 18:05:53.846868] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.138 [2024-04-15 18:05:53.846899] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.138 [2024-04-15 18:05:53.859015] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.138 [2024-04-15 18:05:53.859046] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.138 [2024-04-15 18:05:53.871267] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.138 [2024-04-15 18:05:53.871299] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.138 [2024-04-15 18:05:53.883388] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.138 [2024-04-15 18:05:53.883426] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.138 [2024-04-15 18:05:53.895527] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.138 [2024-04-15 18:05:53.895557] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.138 [2024-04-15 18:05:53.907485] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.138 [2024-04-15 18:05:53.907515] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.138 [2024-04-15 18:05:53.919177] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.138 [2024-04-15 18:05:53.919207] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.138 [2024-04-15 18:05:53.931275] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.138 [2024-04-15 18:05:53.931306] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.138 [2024-04-15 18:05:53.943365] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.138 [2024-04-15 18:05:53.943396] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.138 [2024-04-15 18:05:53.955152] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.138 [2024-04-15 18:05:53.955182] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.138 [2024-04-15 18:05:53.967284] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.138 [2024-04-15 18:05:53.967314] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.138 [2024-04-15 18:05:53.979253] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.138 [2024-04-15 18:05:53.979284] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.138 [2024-04-15 18:05:53.991444] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.138 [2024-04-15 18:05:53.991474] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.138 [2024-04-15 18:05:54.003572] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.138 [2024-04-15 18:05:54.003603] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.138 [2024-04-15 18:05:54.015479] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.138 [2024-04-15 18:05:54.015509] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.138 [2024-04-15 18:05:54.027674] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.138 [2024-04-15 18:05:54.027704] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.138 [2024-04-15 18:05:54.039657] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.139 [2024-04-15 18:05:54.039687] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.139 [2024-04-15 18:05:54.051448] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.139 [2024-04-15 18:05:54.051478] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.139 [2024-04-15 18:05:54.063213] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.139 [2024-04-15 18:05:54.063243] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.139 [2024-04-15 18:05:54.075296] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.139 [2024-04-15 18:05:54.075337] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.139 [2024-04-15 18:05:54.087093] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.139 [2024-04-15 18:05:54.087124] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.397 [2024-04-15 18:05:54.099590] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.397 [2024-04-15 18:05:54.099632] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.397 [2024-04-15 18:05:54.111272] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.397 [2024-04-15 18:05:54.111311] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.397 [2024-04-15 18:05:54.122861] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.398 [2024-04-15 18:05:54.122891] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.398 [2024-04-15 18:05:54.134383] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.398 [2024-04-15 18:05:54.134414] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.398 [2024-04-15 18:05:54.146119] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.398 [2024-04-15 18:05:54.146150] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.398 [2024-04-15 18:05:54.158252] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.398 [2024-04-15 18:05:54.158283] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.398 [2024-04-15 18:05:54.170268] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.398 [2024-04-15 18:05:54.170299] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.398 [2024-04-15 18:05:54.182399] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.398 [2024-04-15 18:05:54.182430] 
nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.398 [2024-04-15 18:05:54.194296] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.398 [2024-04-15 18:05:54.194326] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.398 [2024-04-15 18:05:54.206185] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.398 [2024-04-15 18:05:54.206216] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.398 [2024-04-15 18:05:54.217826] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.398 [2024-04-15 18:05:54.217856] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.398 [2024-04-15 18:05:54.229689] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.398 [2024-04-15 18:05:54.229720] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.398 [2024-04-15 18:05:54.242088] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.398 [2024-04-15 18:05:54.242118] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.398 [2024-04-15 18:05:54.254745] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.398 [2024-04-15 18:05:54.254779] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.398 [2024-04-15 18:05:54.267276] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.398 [2024-04-15 18:05:54.267306] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.398 [2024-04-15 18:05:54.279448] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.398 [2024-04-15 18:05:54.279479] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.398 [2024-04-15 18:05:54.291550] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.398 [2024-04-15 18:05:54.291581] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.398 [2024-04-15 18:05:54.303707] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.398 [2024-04-15 18:05:54.303738] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.398 [2024-04-15 18:05:54.315555] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.398 [2024-04-15 18:05:54.315590] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.398 [2024-04-15 18:05:54.327662] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.398 [2024-04-15 18:05:54.327692] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.398 [2024-04-15 18:05:54.339122] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.398 [2024-04-15 18:05:54.339160] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.398 [2024-04-15 18:05:54.351221] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.398 [2024-04-15 18:05:54.351254] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.656 [2024-04-15 18:05:54.363420] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.656 [2024-04-15 18:05:54.363453] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.656 [2024-04-15 18:05:54.375210] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.656 [2024-04-15 18:05:54.375242] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.656 [2024-04-15 18:05:54.387371] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.656 [2024-04-15 18:05:54.387403] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.656 [2024-04-15 18:05:54.398955] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.656 [2024-04-15 18:05:54.398986] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.656 [2024-04-15 18:05:54.410694] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.656 [2024-04-15 18:05:54.410725] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.656 [2024-04-15 18:05:54.422584] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.656 [2024-04-15 18:05:54.422617] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.656 [2024-04-15 18:05:54.434616] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.656 [2024-04-15 18:05:54.434647] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.656 [2024-04-15 18:05:54.446247] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.656 [2024-04-15 18:05:54.446278] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.656 [2024-04-15 18:05:54.457878] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.656 [2024-04-15 18:05:54.457909] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.656 [2024-04-15 18:05:54.469639] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.656 [2024-04-15 18:05:54.469677] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.656 [2024-04-15 18:05:54.482019] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.656 [2024-04-15 18:05:54.482051] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.656 [2024-04-15 18:05:54.494056] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.657 [2024-04-15 18:05:54.494098] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.657 [2024-04-15 18:05:54.505738] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.657 [2024-04-15 18:05:54.505768] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.657 [2024-04-15 18:05:54.517613] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.657 [2024-04-15 18:05:54.517642] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.657 [2024-04-15 18:05:54.529408] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.657 [2024-04-15 18:05:54.529438] 
nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.657 [2024-04-15 18:05:54.541328] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.657 [2024-04-15 18:05:54.541359] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.657 [2024-04-15 18:05:54.553391] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.657 [2024-04-15 18:05:54.553421] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.657 [2024-04-15 18:05:54.565189] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.657 [2024-04-15 18:05:54.565238] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.657 [2024-04-15 18:05:54.577231] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.657 [2024-04-15 18:05:54.577261] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.657 [2024-04-15 18:05:54.589568] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.657 [2024-04-15 18:05:54.589598] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.657 [2024-04-15 18:05:54.601611] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.657 [2024-04-15 18:05:54.601641] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.915 [2024-04-15 18:05:54.613864] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.915 [2024-04-15 18:05:54.613896] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.915 [2024-04-15 18:05:54.626391] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.915 [2024-04-15 18:05:54.626421] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.915 [2024-04-15 18:05:54.638657] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.915 [2024-04-15 18:05:54.638687] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.915 [2024-04-15 18:05:54.650658] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.915 [2024-04-15 18:05:54.650688] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.915 [2024-04-15 18:05:54.662520] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.915 [2024-04-15 18:05:54.662550] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.915 [2024-04-15 18:05:54.674692] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.915 [2024-04-15 18:05:54.674723] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.915 [2024-04-15 18:05:54.686626] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.915 [2024-04-15 18:05:54.686656] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.915 [2024-04-15 18:05:54.698840] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.915 [2024-04-15 18:05:54.698871] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.915 [2024-04-15 18:05:54.711119] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:19:05.915 [2024-04-15 18:05:54.711150] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the two messages above repeat in lockstep, one attempt roughly every 12 ms, from 18:05:54.711 through 18:05:58.336 (log clock 00:19:05.915 to 00:19:09.580); on the order of 300 identical repetitions elided. Every add-namespace attempt fails the same way because NSID 1 is already attached to the subsystem. ...]
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.320 [2024-04-15 18:05:58.195192] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.320 [2024-04-15 18:05:58.206968] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.320 [2024-04-15 18:05:58.206999] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.320 [2024-04-15 18:05:58.219231] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.320 [2024-04-15 18:05:58.219263] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.320 [2024-04-15 18:05:58.230910] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.320 [2024-04-15 18:05:58.230940] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.320 [2024-04-15 18:05:58.242581] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.320 [2024-04-15 18:05:58.242611] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.320 [2024-04-15 18:05:58.254885] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.320 [2024-04-15 18:05:58.254915] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.320 [2024-04-15 18:05:58.266827] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.320 [2024-04-15 18:05:58.266858] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.580 [2024-04-15 18:05:58.279012] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.580 [2024-04-15 18:05:58.279043] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.580 [2024-04-15 18:05:58.291250] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.580 [2024-04-15 18:05:58.291280] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.580 [2024-04-15 18:05:58.303427] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.580 [2024-04-15 18:05:58.303458] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.580 [2024-04-15 18:05:58.315361] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.580 [2024-04-15 18:05:58.315393] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.580 [2024-04-15 18:05:58.327353] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.580 [2024-04-15 18:05:58.327383] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.580 [2024-04-15 18:05:58.336693] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.580 [2024-04-15 18:05:58.336722] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.580 00:19:09.580 Latency(us) 00:19:09.580 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:09.580 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:19:09.580 Nvme1n1 : 5.01 10597.30 82.79 0.00 0.00 12061.42 5679.79 27767.85 00:19:09.580 =================================================================================================================== 
00:19:09.580 Total : 10597.30 82.79 0.00 0.00 12061.42 5679.79 27767.85
00:19:09.580 [2024-04-15 18:05:58.343250] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:19:09.580 [2024-04-15 18:05:58.343278] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[The same pair continues at roughly 8 ms intervals through 18:05:58.575931 as the run winds down.]
00:19:09.842 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3327472) - No such process
00:19:09.842 18:05:58 -- target/zcopy.sh@49 -- # wait 3327472
00:19:09.842 18:05:58 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:19:09.842 18:05:58 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:09.842 18:05:58 -- common/autotest_common.sh@10 -- # set +x
00:19:09.842 18:05:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:09.842 18:05:58 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:19:09.842 18:05:58 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:09.842 18:05:58 -- common/autotest_common.sh@10 -- # set +x
00:19:09.842 delay0
00:19:09.842 18:05:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:09.842 18:05:58 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:19:09.842 18:05:58 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:09.842 18:05:58 -- common/autotest_common.sh@10 -- # set +x
00:19:09.842 18:05:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:09.842 18:05:58 -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:19:09.842 EAL: No free 2048 kB hugepages reported on node 1
00:19:09.842 [2024-04-15 18:05:58.695355] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:19:17.958 Initializing NVMe Controllers
00:19:17.958 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:19:17.958 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:19:17.958 Initialization complete. Launching workers.
00:19:17.958 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 244, failed: 19046 00:19:17.958 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 19185, failed to submit 105 00:19:17.958 success 19085, unsuccess 100, failed 0 00:19:17.958 18:06:05 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:19:17.958 18:06:05 -- target/zcopy.sh@60 -- # nvmftestfini 00:19:17.958 18:06:05 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:17.958 18:06:05 -- nvmf/common.sh@117 -- # sync 00:19:17.958 18:06:05 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:17.958 18:06:05 -- nvmf/common.sh@120 -- # set +e 00:19:17.958 18:06:05 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:17.958 18:06:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:17.958 rmmod nvme_tcp 00:19:17.958 rmmod nvme_fabrics 00:19:17.958 rmmod nvme_keyring 00:19:17.958 18:06:05 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:17.958 18:06:05 -- nvmf/common.sh@124 -- # set -e 00:19:17.958 18:06:05 -- nvmf/common.sh@125 -- # return 0 00:19:17.958 18:06:05 -- nvmf/common.sh@478 -- # '[' -n 3326146 ']' 00:19:17.958 18:06:05 -- nvmf/common.sh@479 -- # killprocess 3326146 00:19:17.958 18:06:05 -- common/autotest_common.sh@936 -- # '[' -z 3326146 ']' 00:19:17.958 18:06:05 -- common/autotest_common.sh@940 -- # kill -0 3326146 00:19:17.958 18:06:05 -- common/autotest_common.sh@941 -- # uname 00:19:17.958 18:06:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:17.958 18:06:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3326146 00:19:17.958 18:06:05 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:17.958 18:06:05 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:19:17.958 18:06:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3326146' 00:19:17.958 killing process with pid 3326146 00:19:17.958 18:06:05 -- common/autotest_common.sh@955 -- # kill 3326146 00:19:17.958 18:06:05 -- common/autotest_common.sh@960 -- # wait 3326146 00:19:17.958 18:06:06 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:17.958 18:06:06 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:17.958 18:06:06 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:17.958 18:06:06 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:17.958 18:06:06 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:17.958 18:06:06 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:17.958 18:06:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:17.958 18:06:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:19.338 18:06:08 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:19.338 00:19:19.338 real 0m29.071s 00:19:19.338 user 0m40.681s 00:19:19.338 sys 0m10.857s 00:19:19.338 18:06:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:19.338 18:06:08 -- common/autotest_common.sh@10 -- # set +x 00:19:19.338 ************************************ 00:19:19.338 END TEST nvmf_zcopy 00:19:19.338 ************************************ 00:19:19.338 18:06:08 -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:19:19.338 18:06:08 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:19.338 18:06:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:19.338 18:06:08 -- common/autotest_common.sh@10 -- # set +x 00:19:19.338 
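For orientation: the zcopy run above has two phases. The error flood is deliberate (the test repeatedly calls nvmf_subsystem_add_ns with NSID 1 while that NSID is claimed, checking that the retries are rejected cleanly during live I/O), and the tail swaps the namespace for a delay bdev so the abort example can cancel requests still in flight. A minimal sketch of that tail sequence, assuming rpc.py can reach the running target's RPC socket and the workspace paths above:

# Sketch only; mirrors the zcopy.sh@52-56 commands echoed above.
# bdev_delay_create latencies are in microseconds, so 1000000 is about
# 1 s per I/O, long enough for aborts to land before completion.
rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

The abort summary above is the pass signal: of 19185 submitted aborts, 19085 succeeded against the artificially slow namespace.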
************************************ 00:19:19.338 START TEST nvmf_nmic 00:19:19.338 ************************************ 00:19:19.338 18:06:08 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:19:19.597 * Looking for test storage... 00:19:19.597 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:19.597 18:06:08 -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:19.597 18:06:08 -- nvmf/common.sh@7 -- # uname -s 00:19:19.597 18:06:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:19.597 18:06:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:19.597 18:06:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:19.597 18:06:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:19.597 18:06:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:19.597 18:06:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:19.597 18:06:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:19.597 18:06:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:19.597 18:06:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:19.597 18:06:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:19.597 18:06:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:19.597 18:06:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:19:19.597 18:06:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:19.597 18:06:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:19.597 18:06:08 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:19.597 18:06:08 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:19.597 18:06:08 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:19.597 18:06:08 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:19.597 18:06:08 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:19.597 18:06:08 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:19.597 18:06:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:19.597 18:06:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:19.597 18:06:08 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:19.597 18:06:08 -- paths/export.sh@5 -- # export PATH 00:19:19.597 18:06:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:19.597 18:06:08 -- nvmf/common.sh@47 -- # : 0 00:19:19.597 18:06:08 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:19.597 18:06:08 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:19.597 18:06:08 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:19.597 18:06:08 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:19.597 18:06:08 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:19.597 18:06:08 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:19.597 18:06:08 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:19.597 18:06:08 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:19.597 18:06:08 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:19.597 18:06:08 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:19.597 18:06:08 -- target/nmic.sh@14 -- # nvmftestinit 00:19:19.597 18:06:08 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:19.597 18:06:08 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:19.597 18:06:08 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:19.597 18:06:08 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:19.597 18:06:08 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:19.598 18:06:08 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:19.598 18:06:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:19.598 18:06:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:19.598 18:06:08 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:19:19.598 18:06:08 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:19:19.598 18:06:08 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:19.598 18:06:08 -- common/autotest_common.sh@10 -- # set +x 00:19:22.134 18:06:10 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:22.134 18:06:10 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:22.134 18:06:10 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:22.134 18:06:10 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:22.134 18:06:10 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:22.134 18:06:10 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:22.134 18:06:10 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:22.134 18:06:10 -- nvmf/common.sh@295 -- # net_devs=() 00:19:22.134 18:06:10 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:22.134 18:06:10 -- nvmf/common.sh@296 -- # 
e810=() 00:19:22.134 18:06:10 -- nvmf/common.sh@296 -- # local -ga e810 00:19:22.134 18:06:10 -- nvmf/common.sh@297 -- # x722=() 00:19:22.134 18:06:10 -- nvmf/common.sh@297 -- # local -ga x722 00:19:22.134 18:06:10 -- nvmf/common.sh@298 -- # mlx=() 00:19:22.134 18:06:10 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:22.134 18:06:10 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:22.134 18:06:10 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:22.134 18:06:10 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:22.134 18:06:10 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:22.134 18:06:10 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:22.134 18:06:10 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:22.134 18:06:10 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:22.134 18:06:10 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:22.134 18:06:10 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:22.134 18:06:10 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:22.134 18:06:10 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:22.134 18:06:10 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:22.134 18:06:10 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:22.134 18:06:10 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:22.134 18:06:10 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:22.134 18:06:10 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:22.134 18:06:10 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:22.134 18:06:10 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:22.134 18:06:10 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:19:22.134 Found 0000:84:00.0 (0x8086 - 0x159b) 00:19:22.134 18:06:10 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:22.134 18:06:10 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:22.134 18:06:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:22.134 18:06:10 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:22.134 18:06:10 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:22.134 18:06:10 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:22.134 18:06:10 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:19:22.134 Found 0000:84:00.1 (0x8086 - 0x159b) 00:19:22.134 18:06:10 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:22.134 18:06:10 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:22.134 18:06:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:22.134 18:06:10 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:22.134 18:06:10 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:22.134 18:06:10 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:22.134 18:06:10 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:22.134 18:06:10 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:22.134 18:06:10 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:22.134 18:06:10 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:22.134 18:06:10 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:22.134 18:06:10 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:22.134 18:06:10 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:19:22.134 Found net 
devices under 0000:84:00.0: cvl_0_0 00:19:22.134 18:06:10 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:22.134 18:06:10 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:22.134 18:06:10 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:22.134 18:06:10 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:22.134 18:06:10 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:22.134 18:06:10 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:19:22.134 Found net devices under 0000:84:00.1: cvl_0_1 00:19:22.134 18:06:10 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:22.134 18:06:10 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:22.134 18:06:10 -- nvmf/common.sh@403 -- # is_hw=yes 00:19:22.134 18:06:10 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:19:22.134 18:06:10 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:19:22.134 18:06:10 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:19:22.134 18:06:10 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:22.134 18:06:10 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:22.134 18:06:10 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:22.134 18:06:10 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:22.134 18:06:10 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:22.134 18:06:10 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:22.134 18:06:10 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:22.134 18:06:10 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:22.134 18:06:10 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:22.134 18:06:10 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:22.134 18:06:10 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:22.134 18:06:10 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:22.134 18:06:10 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:22.134 18:06:10 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:22.134 18:06:10 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:22.134 18:06:10 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:22.134 18:06:10 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:22.134 18:06:10 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:22.134 18:06:10 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:22.134 18:06:10 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:22.134 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:22.134 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.231 ms 00:19:22.134 00:19:22.134 --- 10.0.0.2 ping statistics --- 00:19:22.134 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:22.134 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:19:22.134 18:06:10 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:22.134 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:22.134 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:19:22.134 00:19:22.134 --- 10.0.0.1 ping statistics --- 00:19:22.134 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:22.134 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:19:22.134 18:06:10 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:22.134 18:06:10 -- nvmf/common.sh@411 -- # return 0 00:19:22.134 18:06:10 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:22.134 18:06:10 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:22.134 18:06:10 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:22.134 18:06:10 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:22.134 18:06:10 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:22.134 18:06:10 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:22.134 18:06:10 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:22.134 18:06:10 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:19:22.134 18:06:10 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:22.134 18:06:10 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:22.134 18:06:10 -- common/autotest_common.sh@10 -- # set +x 00:19:22.134 18:06:10 -- nvmf/common.sh@470 -- # nvmfpid=3331617 00:19:22.134 18:06:10 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:22.134 18:06:10 -- nvmf/common.sh@471 -- # waitforlisten 3331617 00:19:22.134 18:06:10 -- common/autotest_common.sh@817 -- # '[' -z 3331617 ']' 00:19:22.134 18:06:10 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:22.134 18:06:10 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:22.134 18:06:10 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:22.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:22.134 18:06:10 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:22.134 18:06:10 -- common/autotest_common.sh@10 -- # set +x 00:19:22.134 [2024-04-15 18:06:10.877340] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:19:22.134 [2024-04-15 18:06:10.877432] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:22.135 EAL: No free 2048 kB hugepages reported on node 1 00:19:22.135 [2024-04-15 18:06:10.957563] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:22.135 [2024-04-15 18:06:11.055148] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:22.135 [2024-04-15 18:06:11.055213] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:22.135 [2024-04-15 18:06:11.055230] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:22.135 [2024-04-15 18:06:11.055244] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:22.135 [2024-04-15 18:06:11.055258] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
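Before the RPC setup starts, nvmftestinit has just built the usual back-to-back topology: one port of the two-port e810 NIC is moved into a private network namespace where the target runs, and the peer port stays in the root namespace as the initiator. Condensed from the xtrace output above, with the device names cvl_0_0/cvl_0_1 as detected on this machine:

# Target side lives in its own netns; initiator stays in the root ns.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                 # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator

This is why every nvmf_tgt invocation below is wrapped in 'ip netns exec cvl_0_0_ns_spdk'.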
00:19:22.135 [2024-04-15 18:06:11.057084] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:22.135 [2024-04-15 18:06:11.057127] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:22.135 [2024-04-15 18:06:11.057179] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:22.135 [2024-04-15 18:06:11.057182] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:22.395 18:06:11 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:22.395 18:06:11 -- common/autotest_common.sh@850 -- # return 0 00:19:22.395 18:06:11 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:22.395 18:06:11 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:22.395 18:06:11 -- common/autotest_common.sh@10 -- # set +x 00:19:22.395 18:06:11 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:22.395 18:06:11 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:22.395 18:06:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:22.395 18:06:11 -- common/autotest_common.sh@10 -- # set +x 00:19:22.395 [2024-04-15 18:06:11.217075] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:22.395 18:06:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:22.395 18:06:11 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:22.395 18:06:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:22.395 18:06:11 -- common/autotest_common.sh@10 -- # set +x 00:19:22.395 Malloc0 00:19:22.395 18:06:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:22.395 18:06:11 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:22.395 18:06:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:22.395 18:06:11 -- common/autotest_common.sh@10 -- # set +x 00:19:22.395 18:06:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:22.395 18:06:11 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:22.395 18:06:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:22.395 18:06:11 -- common/autotest_common.sh@10 -- # set +x 00:19:22.395 18:06:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:22.395 18:06:11 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:22.395 18:06:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:22.395 18:06:11 -- common/autotest_common.sh@10 -- # set +x 00:19:22.395 [2024-04-15 18:06:11.271713] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:22.395 18:06:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:22.395 18:06:11 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:19:22.395 test case1: single bdev can't be used in multiple subsystems 00:19:22.395 18:06:11 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:19:22.395 18:06:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:22.395 18:06:11 -- common/autotest_common.sh@10 -- # set +x 00:19:22.395 18:06:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:22.395 18:06:11 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:19:22.395 18:06:11 -- common/autotest_common.sh@549 -- # xtrace_disable 
00:19:22.395 18:06:11 -- common/autotest_common.sh@10 -- # set +x 00:19:22.395 18:06:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:22.395 18:06:11 -- target/nmic.sh@28 -- # nmic_status=0 00:19:22.395 18:06:11 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:19:22.395 18:06:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:22.395 18:06:11 -- common/autotest_common.sh@10 -- # set +x 00:19:22.395 [2024-04-15 18:06:11.295547] bdev.c:7988:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:19:22.395 [2024-04-15 18:06:11.295581] subsystem.c:1930:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:19:22.395 [2024-04-15 18:06:11.295600] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:22.395 request: 00:19:22.395 { 00:19:22.395 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:19:22.395 "namespace": { 00:19:22.395 "bdev_name": "Malloc0", 00:19:22.395 "no_auto_visible": false 00:19:22.395 }, 00:19:22.395 "method": "nvmf_subsystem_add_ns", 00:19:22.395 "req_id": 1 00:19:22.395 } 00:19:22.395 Got JSON-RPC error response 00:19:22.395 response: 00:19:22.395 { 00:19:22.395 "code": -32602, 00:19:22.395 "message": "Invalid parameters" 00:19:22.395 } 00:19:22.395 18:06:11 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:19:22.395 18:06:11 -- target/nmic.sh@29 -- # nmic_status=1 00:19:22.395 18:06:11 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:19:22.395 18:06:11 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:19:22.395 Adding namespace failed - expected result. 00:19:22.395 18:06:11 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:19:22.395 test case2: host connect to nvmf target in multiple paths 00:19:22.395 18:06:11 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:19:22.395 18:06:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:22.395 18:06:11 -- common/autotest_common.sh@10 -- # set +x 00:19:22.395 [2024-04-15 18:06:11.303662] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:22.395 18:06:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:22.395 18:06:11 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:23.334 18:06:11 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:19:23.902 18:06:12 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:19:23.902 18:06:12 -- common/autotest_common.sh@1184 -- # local i=0 00:19:23.903 18:06:12 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:19:23.903 18:06:12 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:19:23.903 18:06:12 -- common/autotest_common.sh@1191 -- # sleep 2 00:19:25.817 18:06:14 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:19:25.817 18:06:14 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:19:25.817 18:06:14 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:19:25.817 18:06:14 -- 
common/autotest_common.sh@1193 -- # nvme_devices=1 00:19:25.817 18:06:14 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:19:25.817 18:06:14 -- common/autotest_common.sh@1194 -- # return 0 00:19:25.817 18:06:14 -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:19:25.817 [global] 00:19:25.817 thread=1 00:19:25.817 invalidate=1 00:19:25.817 rw=write 00:19:25.817 time_based=1 00:19:25.817 runtime=1 00:19:25.817 ioengine=libaio 00:19:25.817 direct=1 00:19:25.817 bs=4096 00:19:25.817 iodepth=1 00:19:25.817 norandommap=0 00:19:25.817 numjobs=1 00:19:25.817 00:19:25.817 verify_dump=1 00:19:25.817 verify_backlog=512 00:19:25.817 verify_state_save=0 00:19:25.817 do_verify=1 00:19:25.817 verify=crc32c-intel 00:19:25.817 [job0] 00:19:25.817 filename=/dev/nvme0n1 00:19:25.817 Could not set queue depth (nvme0n1) 00:19:26.076 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:26.076 fio-3.35 00:19:26.076 Starting 1 thread 00:19:27.455 00:19:27.455 job0: (groupid=0, jobs=1): err= 0: pid=3332132: Mon Apr 15 18:06:16 2024 00:19:27.455 read: IOPS=21, BW=86.7KiB/s (88.8kB/s)(88.0KiB/1015msec) 00:19:27.455 slat (nsec): min=10675, max=51393, avg=25799.50, stdev=10475.58 00:19:27.455 clat (usec): min=517, max=41457, avg=39152.68, stdev=8630.35 00:19:27.455 lat (usec): min=538, max=41475, avg=39178.48, stdev=8631.22 00:19:27.455 clat percentiles (usec): 00:19:27.455 | 1.00th=[ 519], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:19:27.455 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:27.455 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:19:27.455 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:19:27.455 | 99.99th=[41681] 00:19:27.455 write: IOPS=504, BW=2018KiB/s (2066kB/s)(2048KiB/1015msec); 0 zone resets 00:19:27.455 slat (nsec): min=8221, max=89364, avg=20445.79, stdev=12737.13 00:19:27.455 clat (usec): min=170, max=511, avg=272.81, stdev=65.89 00:19:27.455 lat (usec): min=180, max=560, avg=293.25, stdev=71.42 00:19:27.455 clat percentiles (usec): 00:19:27.455 | 1.00th=[ 184], 5.00th=[ 198], 10.00th=[ 206], 20.00th=[ 225], 00:19:27.455 | 30.00th=[ 237], 40.00th=[ 247], 50.00th=[ 260], 60.00th=[ 273], 00:19:27.455 | 70.00th=[ 285], 80.00th=[ 302], 90.00th=[ 363], 95.00th=[ 424], 00:19:27.455 | 99.00th=[ 490], 99.50th=[ 502], 99.90th=[ 510], 99.95th=[ 510], 00:19:27.455 | 99.99th=[ 510] 00:19:27.455 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:19:27.455 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:27.455 lat (usec) : 250=41.95%, 500=53.37%, 750=0.75% 00:19:27.455 lat (msec) : 50=3.93% 00:19:27.455 cpu : usr=0.30%, sys=1.18%, ctx=536, majf=0, minf=2 00:19:27.455 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:27.455 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.455 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.455 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:27.455 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:27.455 00:19:27.455 Run status group 0 (all jobs): 00:19:27.455 READ: bw=86.7KiB/s (88.8kB/s), 86.7KiB/s-86.7KiB/s (88.8kB/s-88.8kB/s), io=88.0KiB (90.1kB), run=1015-1015msec 00:19:27.456 WRITE: bw=2018KiB/s (2066kB/s), 
2018KiB/s-2018KiB/s (2066kB/s-2066kB/s), io=2048KiB (2097kB), run=1015-1015msec 00:19:27.456 00:19:27.456 Disk stats (read/write): 00:19:27.456 nvme0n1: ios=60/512, merge=0/0, ticks=965/117, in_queue=1082, util=95.59% 00:19:27.456 18:06:16 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:27.456 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:19:27.456 18:06:16 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:27.456 18:06:16 -- common/autotest_common.sh@1205 -- # local i=0 00:19:27.456 18:06:16 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:19:27.456 18:06:16 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:27.456 18:06:16 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:19:27.456 18:06:16 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:27.456 18:06:16 -- common/autotest_common.sh@1217 -- # return 0 00:19:27.456 18:06:16 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:19:27.456 18:06:16 -- target/nmic.sh@53 -- # nvmftestfini 00:19:27.456 18:06:16 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:27.456 18:06:16 -- nvmf/common.sh@117 -- # sync 00:19:27.456 18:06:16 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:27.456 18:06:16 -- nvmf/common.sh@120 -- # set +e 00:19:27.456 18:06:16 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:27.456 18:06:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:27.456 rmmod nvme_tcp 00:19:27.456 rmmod nvme_fabrics 00:19:27.456 rmmod nvme_keyring 00:19:27.456 18:06:16 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:27.456 18:06:16 -- nvmf/common.sh@124 -- # set -e 00:19:27.456 18:06:16 -- nvmf/common.sh@125 -- # return 0 00:19:27.456 18:06:16 -- nvmf/common.sh@478 -- # '[' -n 3331617 ']' 00:19:27.456 18:06:16 -- nvmf/common.sh@479 -- # killprocess 3331617 00:19:27.456 18:06:16 -- common/autotest_common.sh@936 -- # '[' -z 3331617 ']' 00:19:27.456 18:06:16 -- common/autotest_common.sh@940 -- # kill -0 3331617 00:19:27.456 18:06:16 -- common/autotest_common.sh@941 -- # uname 00:19:27.456 18:06:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:27.456 18:06:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3331617 00:19:27.456 18:06:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:27.456 18:06:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:27.456 18:06:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3331617' 00:19:27.456 killing process with pid 3331617 00:19:27.456 18:06:16 -- common/autotest_common.sh@955 -- # kill 3331617 00:19:27.456 18:06:16 -- common/autotest_common.sh@960 -- # wait 3331617 00:19:27.714 18:06:16 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:27.714 18:06:16 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:27.714 18:06:16 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:27.714 18:06:16 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:27.714 18:06:16 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:27.714 18:06:16 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:27.714 18:06:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:27.714 18:06:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:29.655 18:06:18 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:29.655 00:19:29.655 real 0m10.227s 00:19:29.655 user 0m22.394s 00:19:29.655 sys 0m2.690s 
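The nmic suite above packs two assertions: case1, a bdev already claimed exclusive_write by one subsystem cannot be added to a second one (the nvmf_subsystem_add_ns call returns the -32602 JSON-RPC error shown), and case2, one subsystem can expose two listeners and a host can connect through both paths. Replayed as plain RPCs, as a sketch under the assumption that rpc.py and nvme-cli behave as in the log (the harness also passes --hostnqn/--hostid to nvme connect, omitted here):

# case1: exclusive bdev claim; the second add_ns is expected to fail
rpc.py bdev_malloc_create 64 512 -b Malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 \
    || echo 'Adding namespace failed - expected result.'
# case2: two listeners on the same subsystem, one host on both paths
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421

The fio write pass over nvme0n1 and the teardown message 'disconnected 2 controller(s)' confirm that both paths were established.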
00:19:29.655 18:06:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:29.656 18:06:18 -- common/autotest_common.sh@10 -- # set +x 00:19:29.656 ************************************ 00:19:29.656 END TEST nvmf_nmic 00:19:29.656 ************************************ 00:19:29.656 18:06:18 -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:19:29.656 18:06:18 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:29.656 18:06:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:29.656 18:06:18 -- common/autotest_common.sh@10 -- # set +x 00:19:29.914 ************************************ 00:19:29.914 START TEST nvmf_fio_target 00:19:29.914 ************************************ 00:19:29.914 18:06:18 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:19:29.914 * Looking for test storage... 00:19:29.914 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:29.914 18:06:18 -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:29.914 18:06:18 -- nvmf/common.sh@7 -- # uname -s 00:19:29.914 18:06:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:29.914 18:06:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:29.914 18:06:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:29.914 18:06:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:29.914 18:06:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:29.914 18:06:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:29.914 18:06:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:29.914 18:06:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:29.914 18:06:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:29.914 18:06:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:29.914 18:06:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:29.914 18:06:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:19:29.914 18:06:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:29.914 18:06:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:29.914 18:06:18 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:29.914 18:06:18 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:29.914 18:06:18 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:29.914 18:06:18 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:29.914 18:06:18 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:29.914 18:06:18 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:29.914 18:06:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:29.914 18:06:18 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:29.915 18:06:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:29.915 18:06:18 -- paths/export.sh@5 -- # export PATH 00:19:29.915 18:06:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:29.915 18:06:18 -- nvmf/common.sh@47 -- # : 0 00:19:29.915 18:06:18 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:29.915 18:06:18 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:29.915 18:06:18 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:29.915 18:06:18 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:29.915 18:06:18 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:29.915 18:06:18 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:29.915 18:06:18 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:29.915 18:06:18 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:29.915 18:06:18 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:29.915 18:06:18 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:29.915 18:06:18 -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:29.915 18:06:18 -- target/fio.sh@16 -- # nvmftestinit 00:19:29.915 18:06:18 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:29.915 18:06:18 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:29.915 18:06:18 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:29.915 18:06:18 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:29.915 18:06:18 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:29.915 18:06:18 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:29.915 18:06:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:29.915 18:06:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:29.915 18:06:18 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:19:29.915 18:06:18 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:19:29.915 18:06:18 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:29.915 18:06:18 -- 
common/autotest_common.sh@10 -- # set +x 00:19:32.453 18:06:20 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:32.453 18:06:20 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:32.453 18:06:20 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:32.453 18:06:20 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:32.453 18:06:20 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:32.453 18:06:20 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:32.453 18:06:20 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:32.453 18:06:20 -- nvmf/common.sh@295 -- # net_devs=() 00:19:32.453 18:06:20 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:32.453 18:06:20 -- nvmf/common.sh@296 -- # e810=() 00:19:32.453 18:06:20 -- nvmf/common.sh@296 -- # local -ga e810 00:19:32.453 18:06:20 -- nvmf/common.sh@297 -- # x722=() 00:19:32.453 18:06:20 -- nvmf/common.sh@297 -- # local -ga x722 00:19:32.453 18:06:20 -- nvmf/common.sh@298 -- # mlx=() 00:19:32.453 18:06:20 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:32.453 18:06:20 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:32.453 18:06:20 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:32.453 18:06:20 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:32.453 18:06:20 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:32.453 18:06:20 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:32.453 18:06:20 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:32.453 18:06:20 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:32.453 18:06:20 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:32.453 18:06:20 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:32.453 18:06:20 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:32.453 18:06:20 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:32.453 18:06:20 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:32.453 18:06:20 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:32.453 18:06:20 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:32.453 18:06:20 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:32.453 18:06:20 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:32.453 18:06:20 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:32.453 18:06:20 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:32.453 18:06:20 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:19:32.453 Found 0000:84:00.0 (0x8086 - 0x159b) 00:19:32.453 18:06:20 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:32.453 18:06:20 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:32.453 18:06:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:32.453 18:06:20 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:32.453 18:06:20 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:32.453 18:06:20 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:32.453 18:06:20 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:19:32.453 Found 0000:84:00.1 (0x8086 - 0x159b) 00:19:32.453 18:06:20 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:32.453 18:06:20 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:32.453 18:06:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:32.453 18:06:20 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
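The gather_supported_nvmf_pci_devs trace above builds per-vendor whitelists of NIC device IDs (Intel E810 0x1592/0x159b, X722 0x37d2, and several Mellanox ConnectX parts) and then walks the matching PCI functions looking for kernel net interfaces. A minimal standalone bash sketch of the same sysfs discovery idea; this is an illustration only, not the SPDK helper itself, and the ID list is trimmed to the E810 parts this host actually matched:

    intel=0x8086
    e810=("0x1592" "0x159b")   # E810 device IDs from the whitelist above
    for pci in /sys/bus/pci/devices/*; do
        vendor=$(<"$pci/vendor"); device=$(<"$pci/device")
        [[ $vendor == "$intel" ]] || continue
        for id in "${e810[@]}"; do
            [[ $device == "$id" ]] || continue
            echo "Found ${pci##*/} ($vendor - $device)"
            # A port bound to a kernel driver (ice here) exposes its netdev in sysfs:
            for netdev in "$pci"/net/*; do
                [[ -e $netdev ]] && echo "Found net devices under ${pci##*/}: ${netdev##*/}"
            done
        done
    done

On this host the two E810 ports at 0000:84:00.0/.1 resolve to the cvl_0_0 and cvl_0_1 interfaces used for the rest of the run.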
00:19:32.453 18:06:20 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:32.453 18:06:20 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:32.453 18:06:20 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:32.453 18:06:20 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:32.453 18:06:20 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:32.453 18:06:20 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:32.453 18:06:20 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:32.453 18:06:20 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:32.453 18:06:20 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:19:32.453 Found net devices under 0000:84:00.0: cvl_0_0 00:19:32.453 18:06:20 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:32.453 18:06:20 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:32.453 18:06:20 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:32.453 18:06:20 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:32.453 18:06:20 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:32.453 18:06:20 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:19:32.453 Found net devices under 0000:84:00.1: cvl_0_1 00:19:32.453 18:06:20 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:32.453 18:06:20 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:32.453 18:06:20 -- nvmf/common.sh@403 -- # is_hw=yes 00:19:32.453 18:06:20 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:19:32.453 18:06:20 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:19:32.453 18:06:20 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:19:32.453 18:06:20 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:32.453 18:06:20 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:32.453 18:06:20 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:32.453 18:06:20 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:32.453 18:06:20 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:32.453 18:06:20 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:32.453 18:06:20 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:32.453 18:06:20 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:32.453 18:06:20 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:32.453 18:06:20 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:32.453 18:06:20 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:32.453 18:06:20 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:32.453 18:06:20 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:32.453 18:06:20 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:32.453 18:06:20 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:32.453 18:06:20 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:32.453 18:06:20 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:32.453 18:06:20 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:32.453 18:06:20 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:32.453 18:06:21 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:32.453 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:32.453 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.153 ms 00:19:32.453 00:19:32.453 --- 10.0.0.2 ping statistics --- 00:19:32.453 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:32.453 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:19:32.453 18:06:21 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:32.453 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:32.453 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms 00:19:32.453 00:19:32.453 --- 10.0.0.1 ping statistics --- 00:19:32.453 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:32.453 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:19:32.453 18:06:21 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:32.453 18:06:21 -- nvmf/common.sh@411 -- # return 0 00:19:32.453 18:06:21 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:32.453 18:06:21 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:32.453 18:06:21 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:32.453 18:06:21 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:32.453 18:06:21 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:32.453 18:06:21 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:32.453 18:06:21 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:32.453 18:06:21 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:19:32.453 18:06:21 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:32.453 18:06:21 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:32.453 18:06:21 -- common/autotest_common.sh@10 -- # set +x 00:19:32.453 18:06:21 -- nvmf/common.sh@470 -- # nvmfpid=3334344 00:19:32.453 18:06:21 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:32.453 18:06:21 -- nvmf/common.sh@471 -- # waitforlisten 3334344 00:19:32.453 18:06:21 -- common/autotest_common.sh@817 -- # '[' -z 3334344 ']' 00:19:32.453 18:06:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:32.453 18:06:21 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:32.453 18:06:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:32.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:32.453 18:06:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:32.453 18:06:21 -- common/autotest_common.sh@10 -- # set +x 00:19:32.453 [2024-04-15 18:06:21.087003] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:19:32.453 [2024-04-15 18:06:21.087100] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:32.453 EAL: No free 2048 kB hugepages reported on node 1 00:19:32.453 [2024-04-15 18:06:21.169480] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:32.453 [2024-04-15 18:06:21.267241] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:32.453 [2024-04-15 18:06:21.267311] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:32.453 [2024-04-15 18:06:21.267330] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:32.453 [2024-04-15 18:06:21.267344] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:32.453 [2024-04-15 18:06:21.267357] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:32.453 [2024-04-15 18:06:21.267428] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:32.453 [2024-04-15 18:06:21.267489] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:32.453 [2024-04-15 18:06:21.267515] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:32.453 [2024-04-15 18:06:21.267519] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:32.453 18:06:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:32.453 18:06:21 -- common/autotest_common.sh@850 -- # return 0 00:19:32.453 18:06:21 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:32.453 18:06:21 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:32.453 18:06:21 -- common/autotest_common.sh@10 -- # set +x 00:19:32.711 18:06:21 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:32.711 18:06:21 -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:32.969 [2024-04-15 18:06:21.738030] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:32.970 18:06:21 -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:33.228 18:06:22 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:19:33.228 18:06:22 -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:33.487 18:06:22 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:19:33.487 18:06:22 -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:34.053 18:06:22 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:19:34.053 18:06:22 -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:34.619 18:06:23 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:19:34.619 18:06:23 -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:19:34.878 18:06:23 -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:35.136 18:06:23 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:19:35.136 18:06:23 -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:35.394 18:06:24 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:19:35.394 18:06:24 -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:35.653 18:06:24 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:19:35.653 18:06:24 -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:19:36.221 18:06:25 -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:36.479 18:06:25 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:19:36.479 18:06:25 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:36.737 18:06:25 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:19:36.737 18:06:25 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:37.306 18:06:26 -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:37.566 [2024-04-15 18:06:26.480648] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:37.566 18:06:26 -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:19:38.133 18:06:26 -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:19:38.391 18:06:27 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:38.958 18:06:27 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:19:38.958 18:06:27 -- common/autotest_common.sh@1184 -- # local i=0 00:19:38.958 18:06:27 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:19:38.958 18:06:27 -- common/autotest_common.sh@1186 -- # [[ -n 4 ]] 00:19:38.958 18:06:27 -- common/autotest_common.sh@1187 -- # nvme_device_counter=4 00:19:38.958 18:06:27 -- common/autotest_common.sh@1191 -- # sleep 2 00:19:40.860 18:06:29 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:19:40.860 18:06:29 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:19:40.860 18:06:29 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:19:40.860 18:06:29 -- common/autotest_common.sh@1193 -- # nvme_devices=4 00:19:40.860 18:06:29 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:19:40.860 18:06:29 -- common/autotest_common.sh@1194 -- # return 0 00:19:40.860 18:06:29 -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:19:40.860 [global] 00:19:40.860 thread=1 00:19:40.860 invalidate=1 00:19:40.860 rw=write 00:19:40.860 time_based=1 00:19:40.860 runtime=1 00:19:40.860 ioengine=libaio 00:19:40.860 direct=1 00:19:40.860 bs=4096 00:19:40.860 iodepth=1 00:19:40.860 norandommap=0 00:19:40.860 numjobs=1 00:19:40.860 00:19:40.860 verify_dump=1 00:19:40.860 verify_backlog=512 00:19:40.860 verify_state_save=0 00:19:40.860 do_verify=1 00:19:40.860 verify=crc32c-intel 00:19:40.860 [job0] 00:19:40.860 filename=/dev/nvme0n1 00:19:40.860 [job1] 00:19:40.860 filename=/dev/nvme0n2 00:19:40.860 [job2] 00:19:40.860 filename=/dev/nvme0n3 00:19:40.860 [job3] 00:19:40.860 filename=/dev/nvme0n4 00:19:40.860 Could not set queue depth (nvme0n1) 00:19:40.860 Could not set queue depth (nvme0n2) 00:19:40.860 Could not set queue depth (nvme0n3) 00:19:40.860 Could not set queue depth (nvme0n4) 00:19:41.119 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=1 00:19:41.119 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:41.119 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:41.119 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:41.119 fio-3.35 00:19:41.119 Starting 4 threads 00:19:42.496 00:19:42.496 job0: (groupid=0, jobs=1): err= 0: pid=3335438: Mon Apr 15 18:06:31 2024 00:19:42.496 read: IOPS=179, BW=719KiB/s (737kB/s)(720KiB/1001msec) 00:19:42.496 slat (nsec): min=5556, max=15720, avg=11070.75, stdev=3933.76 00:19:42.496 clat (usec): min=271, max=41382, avg=4827.83, stdev=12746.20 00:19:42.496 lat (usec): min=278, max=41390, avg=4838.90, stdev=12747.33 00:19:42.496 clat percentiles (usec): 00:19:42.496 | 1.00th=[ 273], 5.00th=[ 281], 10.00th=[ 289], 20.00th=[ 297], 00:19:42.496 | 30.00th=[ 306], 40.00th=[ 314], 50.00th=[ 322], 60.00th=[ 326], 00:19:42.496 | 70.00th=[ 330], 80.00th=[ 355], 90.00th=[40633], 95.00th=[41157], 00:19:42.496 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:19:42.496 | 99.99th=[41157] 00:19:42.496 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:19:42.496 slat (nsec): min=6839, max=40578, avg=8509.54, stdev=2071.75 00:19:42.496 clat (usec): min=177, max=791, avg=240.21, stdev=37.50 00:19:42.496 lat (usec): min=185, max=799, avg=248.72, stdev=37.79 00:19:42.496 clat percentiles (usec): 00:19:42.496 | 1.00th=[ 186], 5.00th=[ 200], 10.00th=[ 212], 20.00th=[ 225], 00:19:42.496 | 30.00th=[ 231], 40.00th=[ 237], 50.00th=[ 239], 60.00th=[ 241], 00:19:42.496 | 70.00th=[ 245], 80.00th=[ 251], 90.00th=[ 262], 95.00th=[ 277], 00:19:42.496 | 99.00th=[ 326], 99.50th=[ 347], 99.90th=[ 791], 99.95th=[ 791], 00:19:42.496 | 99.99th=[ 791] 00:19:42.496 bw ( KiB/s): min= 4096, max= 4096, per=21.78%, avg=4096.00, stdev= 0.00, samples=1 00:19:42.496 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:42.496 lat (usec) : 250=57.37%, 500=38.87%, 750=0.58%, 1000=0.14% 00:19:42.496 lat (msec) : 4=0.14%, 50=2.89% 00:19:42.496 cpu : usr=0.20%, sys=0.70%, ctx=692, majf=0, minf=2 00:19:42.496 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:42.496 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:42.496 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:42.496 issued rwts: total=180,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:42.496 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:42.496 job1: (groupid=0, jobs=1): err= 0: pid=3335450: Mon Apr 15 18:06:31 2024 00:19:42.496 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:19:42.496 slat (nsec): min=5251, max=53160, avg=7709.53, stdev=2277.39 00:19:42.496 clat (usec): min=252, max=3987, avg=354.89, stdev=195.28 00:19:42.497 lat (usec): min=259, max=4002, avg=362.60, stdev=195.73 00:19:42.497 clat percentiles (usec): 00:19:42.497 | 1.00th=[ 262], 5.00th=[ 269], 10.00th=[ 273], 20.00th=[ 281], 00:19:42.497 | 30.00th=[ 302], 40.00th=[ 310], 50.00th=[ 343], 60.00th=[ 367], 00:19:42.497 | 70.00th=[ 388], 80.00th=[ 400], 90.00th=[ 429], 95.00th=[ 445], 00:19:42.497 | 99.00th=[ 523], 99.50th=[ 578], 99.90th=[ 3982], 99.95th=[ 3982], 00:19:42.497 | 99.99th=[ 3982] 00:19:42.497 write: IOPS=2036, BW=8148KiB/s (8343kB/s)(8156KiB/1001msec); 0 zone resets 00:19:42.497 slat (usec): min=7, max=1555, avg=10.27, 
stdev=34.26 00:19:42.497 clat (usec): min=164, max=355, avg=202.57, stdev=21.49 00:19:42.497 lat (usec): min=173, max=1803, avg=212.84, stdev=41.45 00:19:42.497 clat percentiles (usec): 00:19:42.497 | 1.00th=[ 174], 5.00th=[ 180], 10.00th=[ 184], 20.00th=[ 188], 00:19:42.497 | 30.00th=[ 190], 40.00th=[ 194], 50.00th=[ 196], 60.00th=[ 200], 00:19:42.497 | 70.00th=[ 206], 80.00th=[ 217], 90.00th=[ 239], 95.00th=[ 245], 00:19:42.497 | 99.00th=[ 265], 99.50th=[ 281], 99.90th=[ 343], 99.95th=[ 351], 00:19:42.497 | 99.99th=[ 355] 00:19:42.497 bw ( KiB/s): min= 8192, max= 8192, per=43.56%, avg=8192.00, stdev= 0.00, samples=1 00:19:42.497 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:19:42.497 lat (usec) : 250=55.36%, 500=44.20%, 750=0.34% 00:19:42.497 lat (msec) : 4=0.11% 00:19:42.497 cpu : usr=2.70%, sys=3.30%, ctx=3579, majf=0, minf=1 00:19:42.497 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:42.497 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:42.497 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:42.497 issued rwts: total=1536,2039,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:42.497 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:42.497 job2: (groupid=0, jobs=1): err= 0: pid=3335486: Mon Apr 15 18:06:31 2024 00:19:42.497 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:19:42.497 slat (nsec): min=6714, max=37513, avg=7862.11, stdev=2231.62 00:19:42.497 clat (usec): min=264, max=41031, avg=399.21, stdev=1042.31 00:19:42.497 lat (usec): min=273, max=41042, avg=407.07, stdev=1042.42 00:19:42.497 clat percentiles (usec): 00:19:42.497 | 1.00th=[ 277], 5.00th=[ 289], 10.00th=[ 318], 20.00th=[ 330], 00:19:42.497 | 30.00th=[ 338], 40.00th=[ 351], 50.00th=[ 371], 60.00th=[ 383], 00:19:42.497 | 70.00th=[ 396], 80.00th=[ 408], 90.00th=[ 437], 95.00th=[ 453], 00:19:42.497 | 99.00th=[ 523], 99.50th=[ 578], 99.90th=[ 3785], 99.95th=[41157], 00:19:42.497 | 99.99th=[41157] 00:19:42.497 write: IOPS=1641, BW=6565KiB/s (6723kB/s)(6572KiB/1001msec); 0 zone resets 00:19:42.497 slat (nsec): min=8381, max=62525, avg=10001.56, stdev=3346.51 00:19:42.497 clat (usec): min=178, max=447, avg=213.00, stdev=21.99 00:19:42.497 lat (usec): min=187, max=457, avg=223.00, stdev=23.09 00:19:42.497 clat percentiles (usec): 00:19:42.497 | 1.00th=[ 184], 5.00th=[ 188], 10.00th=[ 192], 20.00th=[ 196], 00:19:42.497 | 30.00th=[ 200], 40.00th=[ 204], 50.00th=[ 210], 60.00th=[ 215], 00:19:42.497 | 70.00th=[ 221], 80.00th=[ 229], 90.00th=[ 239], 95.00th=[ 245], 00:19:42.497 | 99.00th=[ 297], 99.50th=[ 326], 99.90th=[ 392], 99.95th=[ 449], 00:19:42.497 | 99.99th=[ 449] 00:19:42.497 bw ( KiB/s): min= 8192, max= 8192, per=43.56%, avg=8192.00, stdev= 0.00, samples=1 00:19:42.497 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:19:42.497 lat (usec) : 250=49.83%, 500=49.42%, 750=0.69% 00:19:42.497 lat (msec) : 4=0.03%, 50=0.03% 00:19:42.497 cpu : usr=3.10%, sys=2.40%, ctx=3182, majf=0, minf=1 00:19:42.497 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:42.497 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:42.497 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:42.497 issued rwts: total=1536,1643,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:42.497 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:42.497 job3: (groupid=0, jobs=1): err= 0: pid=3335500: Mon Apr 15 18:06:31 2024 
00:19:42.497 read: IOPS=269, BW=1079KiB/s (1105kB/s)(1080KiB/1001msec) 00:19:42.497 slat (nsec): min=5916, max=29336, avg=9848.04, stdev=4259.00 00:19:42.497 clat (usec): min=274, max=41006, avg=3195.89, stdev=10344.39 00:19:42.497 lat (usec): min=280, max=41021, avg=3205.74, stdev=10345.58 00:19:42.497 clat percentiles (usec): 00:19:42.497 | 1.00th=[ 277], 5.00th=[ 281], 10.00th=[ 285], 20.00th=[ 297], 00:19:42.497 | 30.00th=[ 314], 40.00th=[ 338], 50.00th=[ 355], 60.00th=[ 367], 00:19:42.497 | 70.00th=[ 396], 80.00th=[ 424], 90.00th=[ 453], 95.00th=[41157], 00:19:42.497 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:19:42.497 | 99.99th=[41157] 00:19:42.497 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:19:42.497 slat (nsec): min=6856, max=56556, avg=9255.79, stdev=4843.94 00:19:42.497 clat (usec): min=195, max=1296, avg=249.96, stdev=60.91 00:19:42.497 lat (usec): min=203, max=1306, avg=259.22, stdev=61.80 00:19:42.497 clat percentiles (usec): 00:19:42.497 | 1.00th=[ 204], 5.00th=[ 212], 10.00th=[ 219], 20.00th=[ 227], 00:19:42.497 | 30.00th=[ 233], 40.00th=[ 239], 50.00th=[ 241], 60.00th=[ 245], 00:19:42.497 | 70.00th=[ 249], 80.00th=[ 260], 90.00th=[ 273], 95.00th=[ 326], 00:19:42.497 | 99.00th=[ 396], 99.50th=[ 474], 99.90th=[ 1303], 99.95th=[ 1303], 00:19:42.497 | 99.99th=[ 1303] 00:19:42.497 bw ( KiB/s): min= 4096, max= 4096, per=21.78%, avg=4096.00, stdev= 0.00, samples=1 00:19:42.497 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:42.497 lat (usec) : 250=46.04%, 500=50.90%, 750=0.38% 00:19:42.497 lat (msec) : 2=0.26%, 50=2.43% 00:19:42.497 cpu : usr=0.20%, sys=0.80%, ctx=783, majf=0, minf=1 00:19:42.497 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:42.497 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:42.497 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:42.497 issued rwts: total=270,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:42.497 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:42.497 00:19:42.497 Run status group 0 (all jobs): 00:19:42.497 READ: bw=13.7MiB/s (14.4MB/s), 719KiB/s-6138KiB/s (737kB/s-6285kB/s), io=13.8MiB (14.4MB), run=1001-1001msec 00:19:42.497 WRITE: bw=18.4MiB/s (19.3MB/s), 2046KiB/s-8148KiB/s (2095kB/s-8343kB/s), io=18.4MiB (19.3MB), run=1001-1001msec 00:19:42.497 00:19:42.497 Disk stats (read/write): 00:19:42.497 nvme0n1: ios=67/512, merge=0/0, ticks=731/115, in_queue=846, util=85.27% 00:19:42.497 nvme0n2: ios=1441/1536, merge=0/0, ticks=577/304, in_queue=881, util=91.52% 00:19:42.497 nvme0n3: ios=1265/1536, merge=0/0, ticks=1456/306, in_queue=1762, util=92.91% 00:19:42.497 nvme0n4: ios=75/512, merge=0/0, ticks=793/124, in_queue=917, util=95.92% 00:19:42.497 18:06:31 -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:19:42.497 [global] 00:19:42.497 thread=1 00:19:42.497 invalidate=1 00:19:42.497 rw=randwrite 00:19:42.497 time_based=1 00:19:42.497 runtime=1 00:19:42.497 ioengine=libaio 00:19:42.497 direct=1 00:19:42.497 bs=4096 00:19:42.497 iodepth=1 00:19:42.497 norandommap=0 00:19:42.497 numjobs=1 00:19:42.497 00:19:42.497 verify_dump=1 00:19:42.497 verify_backlog=512 00:19:42.497 verify_state_save=0 00:19:42.497 do_verify=1 00:19:42.497 verify=crc32c-intel 00:19:42.497 [job0] 00:19:42.497 filename=/dev/nvme0n1 00:19:42.497 [job1] 00:19:42.497 filename=/dev/nvme0n2 00:19:42.497 
[job2] 00:19:42.497 filename=/dev/nvme0n3 00:19:42.497 [job3] 00:19:42.497 filename=/dev/nvme0n4 00:19:42.497 Could not set queue depth (nvme0n1) 00:19:42.497 Could not set queue depth (nvme0n2) 00:19:42.497 Could not set queue depth (nvme0n3) 00:19:42.497 Could not set queue depth (nvme0n4) 00:19:42.756 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:42.756 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:42.756 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:42.756 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:42.756 fio-3.35 00:19:42.756 Starting 4 threads 00:19:44.131 00:19:44.131 job0: (groupid=0, jobs=1): err= 0: pid=3335779: Mon Apr 15 18:06:32 2024 00:19:44.131 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:19:44.131 slat (nsec): min=5643, max=34325, avg=10115.99, stdev=3119.19 00:19:44.131 clat (usec): min=292, max=41009, avg=1455.11, stdev=6616.23 00:19:44.131 lat (usec): min=300, max=41023, avg=1465.23, stdev=6616.81 00:19:44.131 clat percentiles (usec): 00:19:44.131 | 1.00th=[ 297], 5.00th=[ 302], 10.00th=[ 306], 20.00th=[ 310], 00:19:44.131 | 30.00th=[ 318], 40.00th=[ 330], 50.00th=[ 334], 60.00th=[ 338], 00:19:44.131 | 70.00th=[ 355], 80.00th=[ 375], 90.00th=[ 424], 95.00th=[ 469], 00:19:44.131 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:19:44.131 | 99.99th=[41157] 00:19:44.131 write: IOPS=909, BW=3636KiB/s (3724kB/s)(3640KiB/1001msec); 0 zone resets 00:19:44.131 slat (nsec): min=8994, max=47478, avg=13733.03, stdev=6593.96 00:19:44.131 clat (usec): min=174, max=510, avg=255.67, stdev=67.16 00:19:44.131 lat (usec): min=185, max=543, avg=269.41, stdev=69.92 00:19:44.131 clat percentiles (usec): 00:19:44.131 | 1.00th=[ 184], 5.00th=[ 196], 10.00th=[ 200], 20.00th=[ 208], 00:19:44.131 | 30.00th=[ 215], 40.00th=[ 223], 50.00th=[ 231], 60.00th=[ 239], 00:19:44.131 | 70.00th=[ 255], 80.00th=[ 297], 90.00th=[ 379], 95.00th=[ 404], 00:19:44.131 | 99.00th=[ 461], 99.50th=[ 482], 99.90th=[ 510], 99.95th=[ 510], 00:19:44.131 | 99.99th=[ 510] 00:19:44.131 bw ( KiB/s): min= 4096, max= 4096, per=26.69%, avg=4096.00, stdev= 0.00, samples=1 00:19:44.131 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:44.131 lat (usec) : 250=42.90%, 500=55.56%, 750=0.49%, 1000=0.07% 00:19:44.131 lat (msec) : 50=0.98% 00:19:44.131 cpu : usr=1.10%, sys=1.80%, ctx=1424, majf=0, minf=1 00:19:44.131 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:44.131 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:44.131 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:44.131 issued rwts: total=512,910,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:44.131 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:44.131 job1: (groupid=0, jobs=1): err= 0: pid=3335780: Mon Apr 15 18:06:32 2024 00:19:44.131 read: IOPS=26, BW=108KiB/s (110kB/s)(112KiB/1038msec) 00:19:44.131 slat (nsec): min=9043, max=42133, avg=16126.04, stdev=5646.47 00:19:44.131 clat (usec): min=338, max=41992, avg=32349.73, stdev=16993.28 00:19:44.131 lat (usec): min=354, max=42008, avg=32365.86, stdev=16992.28 00:19:44.131 clat percentiles (usec): 00:19:44.131 | 1.00th=[ 338], 5.00th=[ 351], 10.00th=[ 408], 20.00th=[ 449], 00:19:44.131 | 
30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:44.131 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:19:44.131 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:44.131 | 99.99th=[42206] 00:19:44.131 write: IOPS=493, BW=1973KiB/s (2020kB/s)(2048KiB/1038msec); 0 zone resets 00:19:44.131 slat (usec): min=8, max=106, avg=11.21, stdev= 7.52 00:19:44.131 clat (usec): min=189, max=556, avg=243.18, stdev=50.40 00:19:44.131 lat (usec): min=199, max=617, avg=254.39, stdev=53.85 00:19:44.131 clat percentiles (usec): 00:19:44.131 | 1.00th=[ 194], 5.00th=[ 198], 10.00th=[ 204], 20.00th=[ 210], 00:19:44.131 | 30.00th=[ 217], 40.00th=[ 223], 50.00th=[ 229], 60.00th=[ 235], 00:19:44.131 | 70.00th=[ 243], 80.00th=[ 265], 90.00th=[ 306], 95.00th=[ 338], 00:19:44.131 | 99.00th=[ 461], 99.50th=[ 510], 99.90th=[ 553], 99.95th=[ 553], 00:19:44.131 | 99.99th=[ 553] 00:19:44.131 bw ( KiB/s): min= 4096, max= 4096, per=26.69%, avg=4096.00, stdev= 0.00, samples=1 00:19:44.131 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:44.131 lat (usec) : 250=70.56%, 500=24.81%, 750=0.56% 00:19:44.131 lat (msec) : 50=4.07% 00:19:44.131 cpu : usr=0.29%, sys=0.58%, ctx=541, majf=0, minf=2 00:19:44.131 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:44.131 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:44.131 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:44.131 issued rwts: total=28,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:44.131 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:44.131 job2: (groupid=0, jobs=1): err= 0: pid=3335781: Mon Apr 15 18:06:32 2024 00:19:44.131 read: IOPS=993, BW=3973KiB/s (4068kB/s)(4124KiB/1038msec) 00:19:44.131 slat (nsec): min=6034, max=64303, avg=12262.24, stdev=4681.40 00:19:44.131 clat (usec): min=264, max=41025, avg=621.83, stdev=3336.48 00:19:44.131 lat (usec): min=271, max=41041, avg=634.10, stdev=3336.62 00:19:44.131 clat percentiles (usec): 00:19:44.131 | 1.00th=[ 273], 5.00th=[ 285], 10.00th=[ 289], 20.00th=[ 302], 00:19:44.131 | 30.00th=[ 306], 40.00th=[ 314], 50.00th=[ 318], 60.00th=[ 326], 00:19:44.131 | 70.00th=[ 334], 80.00th=[ 359], 90.00th=[ 494], 95.00th=[ 506], 00:19:44.131 | 99.00th=[ 578], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:19:44.131 | 99.99th=[41157] 00:19:44.131 write: IOPS=1479, BW=5919KiB/s (6061kB/s)(6144KiB/1038msec); 0 zone resets 00:19:44.131 slat (nsec): min=6971, max=90241, avg=11152.26, stdev=4973.37 00:19:44.131 clat (usec): min=173, max=1307, avg=233.71, stdev=60.40 00:19:44.131 lat (usec): min=182, max=1318, avg=244.86, stdev=61.83 00:19:44.131 clat percentiles (usec): 00:19:44.131 | 1.00th=[ 182], 5.00th=[ 188], 10.00th=[ 190], 20.00th=[ 196], 00:19:44.131 | 30.00th=[ 202], 40.00th=[ 210], 50.00th=[ 219], 60.00th=[ 229], 00:19:44.131 | 70.00th=[ 241], 80.00th=[ 255], 90.00th=[ 285], 95.00th=[ 343], 00:19:44.131 | 99.00th=[ 429], 99.50th=[ 437], 99.90th=[ 865], 99.95th=[ 1303], 00:19:44.131 | 99.99th=[ 1303] 00:19:44.131 bw ( KiB/s): min= 4096, max= 8192, per=40.04%, avg=6144.00, stdev=2896.31, samples=2 00:19:44.131 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:19:44.131 lat (usec) : 250=46.16%, 500=50.60%, 750=2.84%, 1000=0.04% 00:19:44.131 lat (msec) : 2=0.08%, 50=0.27% 00:19:44.131 cpu : usr=1.54%, sys=2.80%, ctx=2569, majf=0, minf=1 00:19:44.131 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
>=64=0.0% 00:19:44.131 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:44.131 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:44.131 issued rwts: total=1031,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:44.131 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:44.131 job3: (groupid=0, jobs=1): err= 0: pid=3335782: Mon Apr 15 18:06:32 2024 00:19:44.132 read: IOPS=524, BW=2098KiB/s (2148kB/s)(2108KiB/1005msec) 00:19:44.132 slat (nsec): min=8649, max=31145, avg=10484.35, stdev=2313.52 00:19:44.132 clat (usec): min=281, max=41942, avg=1371.49, stdev=6399.02 00:19:44.132 lat (usec): min=292, max=41959, avg=1381.97, stdev=6399.83 00:19:44.132 clat percentiles (usec): 00:19:44.132 | 1.00th=[ 289], 5.00th=[ 297], 10.00th=[ 306], 20.00th=[ 310], 00:19:44.132 | 30.00th=[ 314], 40.00th=[ 318], 50.00th=[ 318], 60.00th=[ 322], 00:19:44.132 | 70.00th=[ 326], 80.00th=[ 334], 90.00th=[ 343], 95.00th=[ 359], 00:19:44.132 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:19:44.132 | 99.99th=[41681] 00:19:44.132 write: IOPS=1018, BW=4076KiB/s (4173kB/s)(4096KiB/1005msec); 0 zone resets 00:19:44.132 slat (nsec): min=7553, max=44546, avg=12740.06, stdev=4164.31 00:19:44.132 clat (usec): min=186, max=471, avg=251.69, stdev=58.88 00:19:44.132 lat (usec): min=198, max=495, avg=264.43, stdev=59.82 00:19:44.132 clat percentiles (usec): 00:19:44.132 | 1.00th=[ 190], 5.00th=[ 194], 10.00th=[ 198], 20.00th=[ 204], 00:19:44.132 | 30.00th=[ 212], 40.00th=[ 221], 50.00th=[ 231], 60.00th=[ 243], 00:19:44.132 | 70.00th=[ 265], 80.00th=[ 297], 90.00th=[ 334], 95.00th=[ 388], 00:19:44.132 | 99.00th=[ 433], 99.50th=[ 437], 99.90th=[ 453], 99.95th=[ 474], 00:19:44.132 | 99.99th=[ 474] 00:19:44.132 bw ( KiB/s): min= 8192, max= 8192, per=53.39%, avg=8192.00, stdev= 0.00, samples=1 00:19:44.132 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:19:44.132 lat (usec) : 250=41.52%, 500=57.38%, 750=0.19% 00:19:44.132 lat (msec) : 50=0.90% 00:19:44.132 cpu : usr=1.10%, sys=1.69%, ctx=1554, majf=0, minf=1 00:19:44.132 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:44.132 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:44.132 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:44.132 issued rwts: total=527,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:44.132 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:44.132 00:19:44.132 Run status group 0 (all jobs): 00:19:44.132 READ: bw=8085KiB/s (8279kB/s), 108KiB/s-3973KiB/s (110kB/s-4068kB/s), io=8392KiB (8593kB), run=1001-1038msec 00:19:44.132 WRITE: bw=15.0MiB/s (15.7MB/s), 1973KiB/s-5919KiB/s (2020kB/s-6061kB/s), io=15.6MiB (16.3MB), run=1001-1038msec 00:19:44.132 00:19:44.132 Disk stats (read/write): 00:19:44.132 nvme0n1: ios=403/512, merge=0/0, ticks=717/127, in_queue=844, util=85.17% 00:19:44.132 nvme0n2: ios=68/512, merge=0/0, ticks=785/122, in_queue=907, util=89.49% 00:19:44.132 nvme0n3: ios=1078/1536, merge=0/0, ticks=857/352, in_queue=1209, util=97.99% 00:19:44.132 nvme0n4: ios=580/1024, merge=0/0, ticks=1392/249, in_queue=1641, util=96.36% 00:19:44.132 18:06:32 -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:19:44.132 [global] 00:19:44.132 thread=1 00:19:44.132 invalidate=1 00:19:44.132 rw=write 00:19:44.132 time_based=1 00:19:44.132 runtime=1 00:19:44.132 ioengine=libaio 
00:19:44.132 direct=1 00:19:44.132 bs=4096 00:19:44.132 iodepth=128 00:19:44.132 norandommap=0 00:19:44.132 numjobs=1 00:19:44.132 00:19:44.132 verify_dump=1 00:19:44.132 verify_backlog=512 00:19:44.132 verify_state_save=0 00:19:44.132 do_verify=1 00:19:44.132 verify=crc32c-intel 00:19:44.132 [job0] 00:19:44.132 filename=/dev/nvme0n1 00:19:44.132 [job1] 00:19:44.132 filename=/dev/nvme0n2 00:19:44.132 [job2] 00:19:44.132 filename=/dev/nvme0n3 00:19:44.132 [job3] 00:19:44.132 filename=/dev/nvme0n4 00:19:44.132 Could not set queue depth (nvme0n1) 00:19:44.132 Could not set queue depth (nvme0n2) 00:19:44.132 Could not set queue depth (nvme0n3) 00:19:44.132 Could not set queue depth (nvme0n4) 00:19:44.132 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:44.132 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:44.132 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:44.132 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:44.132 fio-3.35 00:19:44.132 Starting 4 threads 00:19:45.539 00:19:45.539 job0: (groupid=0, jobs=1): err= 0: pid=3336011: Mon Apr 15 18:06:34 2024 00:19:45.539 read: IOPS=3292, BW=12.9MiB/s (13.5MB/s)(12.9MiB/1002msec) 00:19:45.539 slat (usec): min=2, max=15289, avg=138.99, stdev=993.92 00:19:45.539 clat (usec): min=621, max=54434, avg=17056.61, stdev=7125.14 00:19:45.539 lat (usec): min=2168, max=54449, avg=17195.60, stdev=7198.01 00:19:45.539 clat percentiles (usec): 00:19:45.539 | 1.00th=[ 3949], 5.00th=[ 8979], 10.00th=[10421], 20.00th=[13435], 00:19:45.539 | 30.00th=[14353], 40.00th=[15270], 50.00th=[15664], 60.00th=[16581], 00:19:45.539 | 70.00th=[17171], 80.00th=[19006], 90.00th=[24511], 95.00th=[32113], 00:19:45.539 | 99.00th=[47973], 99.50th=[48497], 99.90th=[48497], 99.95th=[51643], 00:19:45.539 | 99.99th=[54264] 00:19:45.539 write: IOPS=3576, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1002msec); 0 zone resets 00:19:45.539 slat (usec): min=3, max=20597, avg=146.14, stdev=1116.75 00:19:45.539 clat (usec): min=5080, max=62553, avg=19669.62, stdev=9762.79 00:19:45.539 lat (usec): min=5088, max=62561, avg=19815.76, stdev=9838.62 00:19:45.539 clat percentiles (usec): 00:19:45.539 | 1.00th=[ 6718], 5.00th=[ 9110], 10.00th=[11600], 20.00th=[12780], 00:19:45.539 | 30.00th=[13566], 40.00th=[15533], 50.00th=[17171], 60.00th=[19006], 00:19:45.539 | 70.00th=[20841], 80.00th=[24249], 90.00th=[33424], 95.00th=[39584], 00:19:45.539 | 99.00th=[57934], 99.50th=[62653], 99.90th=[62653], 99.95th=[62653], 00:19:45.539 | 99.99th=[62653] 00:19:45.539 bw ( KiB/s): min=12288, max=16384, per=23.49%, avg=14336.00, stdev=2896.31, samples=2 00:19:45.539 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:19:45.539 lat (usec) : 750=0.01% 00:19:45.539 lat (msec) : 4=0.49%, 10=7.48%, 20=65.51%, 50=25.09%, 100=1.41% 00:19:45.539 cpu : usr=2.30%, sys=3.40%, ctx=174, majf=0, minf=1 00:19:45.539 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:19:45.539 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:45.539 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:45.539 issued rwts: total=3299,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:45.539 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:45.539 job1: (groupid=0, jobs=1): err= 0: pid=3336013: Mon Apr 15 
18:06:34 2024 00:19:45.539 read: IOPS=3372, BW=13.2MiB/s (13.8MB/s)(13.3MiB/1006msec) 00:19:45.539 slat (usec): min=3, max=10906, avg=130.89, stdev=733.79 00:19:45.539 clat (usec): min=424, max=56592, avg=16373.60, stdev=7280.79 00:19:45.539 lat (usec): min=6591, max=56621, avg=16504.50, stdev=7338.19 00:19:45.539 clat percentiles (usec): 00:19:45.539 | 1.00th=[ 7111], 5.00th=[ 9634], 10.00th=[10945], 20.00th=[11731], 00:19:45.539 | 30.00th=[11863], 40.00th=[12125], 50.00th=[12911], 60.00th=[15139], 00:19:45.539 | 70.00th=[17695], 80.00th=[21365], 90.00th=[26084], 95.00th=[30802], 00:19:45.539 | 99.00th=[46400], 99.50th=[46924], 99.90th=[56361], 99.95th=[56361], 00:19:45.539 | 99.99th=[56361] 00:19:45.539 write: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec); 0 zone resets 00:19:45.539 slat (usec): min=4, max=47195, avg=148.68, stdev=1129.69 00:19:45.539 clat (usec): min=7071, max=58767, avg=17696.02, stdev=10130.26 00:19:45.539 lat (usec): min=7092, max=72381, avg=17844.69, stdev=10219.31 00:19:45.539 clat percentiles (usec): 00:19:45.539 | 1.00th=[ 7635], 5.00th=[ 8848], 10.00th=[10028], 20.00th=[11207], 00:19:45.539 | 30.00th=[11469], 40.00th=[11731], 50.00th=[12256], 60.00th=[16319], 00:19:45.539 | 70.00th=[20055], 80.00th=[23725], 90.00th=[29492], 95.00th=[42730], 00:19:45.539 | 99.00th=[53740], 99.50th=[53740], 99.90th=[58983], 99.95th=[58983], 00:19:45.539 | 99.99th=[58983] 00:19:45.539 bw ( KiB/s): min=12344, max=16328, per=23.49%, avg=14336.00, stdev=2817.11, samples=2 00:19:45.539 iops : min= 3086, max= 4082, avg=3584.00, stdev=704.28, samples=2 00:19:45.539 lat (usec) : 500=0.01% 00:19:45.539 lat (msec) : 10=8.11%, 20=63.74%, 50=27.29%, 100=0.85% 00:19:45.539 cpu : usr=2.79%, sys=6.07%, ctx=380, majf=0, minf=1 00:19:45.539 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:19:45.539 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:45.539 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:45.539 issued rwts: total=3393,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:45.539 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:45.539 job2: (groupid=0, jobs=1): err= 0: pid=3336014: Mon Apr 15 18:06:34 2024 00:19:45.539 read: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec) 00:19:45.539 slat (usec): min=2, max=13413, avg=131.99, stdev=796.72 00:19:45.539 clat (usec): min=6909, max=40615, avg=17341.07, stdev=6485.22 00:19:45.539 lat (usec): min=6925, max=40626, avg=17473.06, stdev=6510.88 00:19:45.539 clat percentiles (usec): 00:19:45.539 | 1.00th=[ 6980], 5.00th=[11469], 10.00th=[12387], 20.00th=[13042], 00:19:45.539 | 30.00th=[13435], 40.00th=[13829], 50.00th=[14615], 60.00th=[15533], 00:19:45.539 | 70.00th=[18220], 80.00th=[21627], 90.00th=[28443], 95.00th=[31327], 00:19:45.539 | 99.00th=[38011], 99.50th=[40109], 99.90th=[40633], 99.95th=[40633], 00:19:45.539 | 99.99th=[40633] 00:19:45.539 write: IOPS=3966, BW=15.5MiB/s (16.2MB/s)(15.6MiB/1005msec); 0 zone resets 00:19:45.539 slat (usec): min=4, max=14089, avg=124.56, stdev=680.14 00:19:45.539 clat (usec): min=4159, max=48288, avg=16357.35, stdev=7377.64 00:19:45.539 lat (usec): min=4170, max=48299, avg=16481.91, stdev=7417.80 00:19:45.539 clat percentiles (usec): 00:19:45.539 | 1.00th=[ 8160], 5.00th=[ 9634], 10.00th=[10945], 20.00th=[12518], 00:19:45.539 | 30.00th=[12780], 40.00th=[13173], 50.00th=[13566], 60.00th=[14484], 00:19:45.539 | 70.00th=[15664], 80.00th=[18744], 90.00th=[26084], 95.00th=[34866], 00:19:45.539 | 
99.00th=[43779], 99.50th=[44303], 99.90th=[48497], 99.95th=[48497], 00:19:45.539 | 99.99th=[48497] 00:19:45.539 bw ( KiB/s): min=14488, max=16384, per=25.29%, avg=15436.00, stdev=1340.67, samples=2 00:19:45.539 iops : min= 3622, max= 4096, avg=3859.00, stdev=335.17, samples=2 00:19:45.539 lat (msec) : 10=4.40%, 20=75.09%, 50=20.52% 00:19:45.539 cpu : usr=3.88%, sys=6.18%, ctx=444, majf=0, minf=1 00:19:45.539 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:19:45.539 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:45.539 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:45.539 issued rwts: total=3584,3986,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:45.539 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:45.539 job3: (groupid=0, jobs=1): err= 0: pid=3336015: Mon Apr 15 18:06:34 2024 00:19:45.539 read: IOPS=4087, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1002msec) 00:19:45.539 slat (usec): min=3, max=14254, avg=114.77, stdev=815.93 00:19:45.539 clat (usec): min=3486, max=48158, avg=15696.09, stdev=6669.82 00:19:45.539 lat (usec): min=4501, max=48174, avg=15810.86, stdev=6719.55 00:19:45.539 clat percentiles (usec): 00:19:45.539 | 1.00th=[ 6063], 5.00th=[ 8160], 10.00th=[10290], 20.00th=[11600], 00:19:45.539 | 30.00th=[12125], 40.00th=[12780], 50.00th=[13173], 60.00th=[13960], 00:19:45.539 | 70.00th=[16909], 80.00th=[20055], 90.00th=[26084], 95.00th=[29230], 00:19:45.539 | 99.00th=[40633], 99.50th=[46400], 99.90th=[46400], 99.95th=[46400], 00:19:45.539 | 99.99th=[47973] 00:19:45.539 write: IOPS=4188, BW=16.4MiB/s (17.2MB/s)(16.4MiB/1002msec); 0 zone resets 00:19:45.539 slat (usec): min=4, max=11636, avg=101.77, stdev=718.86 00:19:45.539 clat (usec): min=288, max=45894, avg=14767.00, stdev=7704.65 00:19:45.539 lat (usec): min=770, max=45910, avg=14868.77, stdev=7758.19 00:19:45.539 clat percentiles (usec): 00:19:45.539 | 1.00th=[ 3589], 5.00th=[ 5997], 10.00th=[ 7898], 20.00th=[ 8717], 00:19:45.539 | 30.00th=[10159], 40.00th=[11600], 50.00th=[12911], 60.00th=[13435], 00:19:45.539 | 70.00th=[16188], 80.00th=[19268], 90.00th=[29754], 95.00th=[31589], 00:19:45.539 | 99.00th=[35914], 99.50th=[36439], 99.90th=[41157], 99.95th=[41157], 00:19:45.539 | 99.99th=[45876] 00:19:45.539 bw ( KiB/s): min=15272, max=17496, per=26.84%, avg=16384.00, stdev=1572.61, samples=2 00:19:45.539 iops : min= 3818, max= 4374, avg=4096.00, stdev=393.15, samples=2 00:19:45.539 lat (usec) : 500=0.01%, 1000=0.04% 00:19:45.540 lat (msec) : 2=0.02%, 4=1.12%, 10=17.27%, 20=62.66%, 50=18.88% 00:19:45.540 cpu : usr=5.00%, sys=5.29%, ctx=382, majf=0, minf=1 00:19:45.540 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:19:45.540 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:45.540 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:45.540 issued rwts: total=4096,4197,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:45.540 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:45.540 00:19:45.540 Run status group 0 (all jobs): 00:19:45.540 READ: bw=55.8MiB/s (58.5MB/s), 12.9MiB/s-16.0MiB/s (13.5MB/s-16.7MB/s), io=56.1MiB (58.9MB), run=1002-1006msec 00:19:45.540 WRITE: bw=59.6MiB/s (62.5MB/s), 13.9MiB/s-16.4MiB/s (14.6MB/s-17.2MB/s), io=60.0MiB (62.9MB), run=1002-1006msec 00:19:45.540 00:19:45.540 Disk stats (read/write): 00:19:45.540 nvme0n1: ios=2799/3072, merge=0/0, ticks=25538/24258, in_queue=49796, util=90.48% 00:19:45.540 nvme0n2: ios=2612/2967, 
merge=0/0, ticks=13234/15959, in_queue=29193, util=95.83% 00:19:45.540 nvme0n3: ios=3136/3375, merge=0/0, ticks=22207/20550, in_queue=42757, util=99.06% 00:19:45.540 nvme0n4: ios=3369/3584, merge=0/0, ticks=37838/31535, in_queue=69373, util=94.60% 00:19:45.540 18:06:34 -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:19:45.540 [global] 00:19:45.540 thread=1 00:19:45.540 invalidate=1 00:19:45.540 rw=randwrite 00:19:45.540 time_based=1 00:19:45.540 runtime=1 00:19:45.540 ioengine=libaio 00:19:45.540 direct=1 00:19:45.540 bs=4096 00:19:45.540 iodepth=128 00:19:45.540 norandommap=0 00:19:45.540 numjobs=1 00:19:45.540 00:19:45.540 verify_dump=1 00:19:45.540 verify_backlog=512 00:19:45.540 verify_state_save=0 00:19:45.540 do_verify=1 00:19:45.540 verify=crc32c-intel 00:19:45.540 [job0] 00:19:45.540 filename=/dev/nvme0n1 00:19:45.540 [job1] 00:19:45.540 filename=/dev/nvme0n2 00:19:45.540 [job2] 00:19:45.540 filename=/dev/nvme0n3 00:19:45.540 [job3] 00:19:45.540 filename=/dev/nvme0n4 00:19:45.540 Could not set queue depth (nvme0n1) 00:19:45.540 Could not set queue depth (nvme0n2) 00:19:45.540 Could not set queue depth (nvme0n3) 00:19:45.540 Could not set queue depth (nvme0n4) 00:19:45.540 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:45.540 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:45.540 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:45.540 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:45.540 fio-3.35 00:19:45.540 Starting 4 threads 00:19:46.944 00:19:46.944 job0: (groupid=0, jobs=1): err= 0: pid=3336243: Mon Apr 15 18:06:35 2024 00:19:46.944 read: IOPS=3913, BW=15.3MiB/s (16.0MB/s)(15.9MiB/1043msec) 00:19:46.944 slat (usec): min=3, max=33477, avg=130.40, stdev=994.14 00:19:46.944 clat (msec): min=8, max=110, avg=17.01, stdev=14.22 00:19:46.944 lat (msec): min=8, max=110, avg=17.14, stdev=14.31 00:19:46.944 clat percentiles (msec): 00:19:46.944 | 1.00th=[ 10], 5.00th=[ 11], 10.00th=[ 12], 20.00th=[ 12], 00:19:46.945 | 30.00th=[ 13], 40.00th=[ 13], 50.00th=[ 14], 60.00th=[ 14], 00:19:46.945 | 70.00th=[ 15], 80.00th=[ 15], 90.00th=[ 23], 95.00th=[ 46], 00:19:46.945 | 99.00th=[ 96], 99.50th=[ 111], 99.90th=[ 111], 99.95th=[ 111], 00:19:46.945 | 99.99th=[ 111] 00:19:46.945 write: IOPS=3927, BW=15.3MiB/s (16.1MB/s)(16.0MiB/1043msec); 0 zone resets 00:19:46.945 slat (usec): min=5, max=17208, avg=105.42, stdev=611.51 00:19:46.945 clat (usec): min=3783, max=98541, avg=14702.36, stdev=9329.44 00:19:46.945 lat (usec): min=3792, max=98569, avg=14807.78, stdev=9328.53 00:19:46.945 clat percentiles (usec): 00:19:46.945 | 1.00th=[ 9896], 5.00th=[10552], 10.00th=[11338], 20.00th=[11731], 00:19:46.945 | 30.00th=[11994], 40.00th=[12649], 50.00th=[13304], 60.00th=[13435], 00:19:46.945 | 70.00th=[13566], 80.00th=[13960], 90.00th=[16450], 95.00th=[21365], 00:19:46.945 | 99.00th=[72877], 99.50th=[98042], 99.90th=[98042], 99.95th=[98042], 00:19:46.945 | 99.99th=[98042] 00:19:46.945 bw ( KiB/s): min=13696, max=19072, per=24.59%, avg=16384.00, stdev=3801.41, samples=2 00:19:46.945 iops : min= 3424, max= 4768, avg=4096.00, stdev=950.35, samples=2 00:19:46.945 lat (msec) : 4=0.05%, 10=1.24%, 20=88.80%, 50=7.21%, 100=2.42% 00:19:46.945 lat (msec) : 
250=0.28% 00:19:46.945 cpu : usr=4.22%, sys=6.05%, ctx=374, majf=0, minf=9 00:19:46.945 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:19:46.945 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.945 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:46.945 issued rwts: total=4082,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:46.945 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:46.945 job1: (groupid=0, jobs=1): err= 0: pid=3336244: Mon Apr 15 18:06:35 2024 00:19:46.945 read: IOPS=4090, BW=16.0MiB/s (16.8MB/s)(16.1MiB/1007msec) 00:19:46.945 slat (usec): min=3, max=14297, avg=105.08, stdev=776.76 00:19:46.945 clat (usec): min=6461, max=41851, avg=14161.42, stdev=5096.21 00:19:46.945 lat (usec): min=6471, max=41868, avg=14266.50, stdev=5151.34 00:19:46.945 clat percentiles (usec): 00:19:46.945 | 1.00th=[ 7439], 5.00th=[ 9241], 10.00th=[10290], 20.00th=[10945], 00:19:46.945 | 30.00th=[11469], 40.00th=[12256], 50.00th=[12518], 60.00th=[13566], 00:19:46.945 | 70.00th=[14484], 80.00th=[16450], 90.00th=[20055], 95.00th=[23725], 00:19:46.945 | 99.00th=[36439], 99.50th=[38536], 99.90th=[40633], 99.95th=[41681], 00:19:46.945 | 99.99th=[41681] 00:19:46.945 write: IOPS=4575, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1007msec); 0 zone resets 00:19:46.945 slat (usec): min=5, max=10608, avg=110.65, stdev=657.48 00:19:46.945 clat (usec): min=1316, max=41813, avg=15080.42, stdev=7305.13 00:19:46.945 lat (usec): min=1327, max=41824, avg=15191.07, stdev=7347.70 00:19:46.945 clat percentiles (usec): 00:19:46.945 | 1.00th=[ 4817], 5.00th=[ 7504], 10.00th=[ 7701], 20.00th=[ 8356], 00:19:46.945 | 30.00th=[ 9896], 40.00th=[12256], 50.00th=[13042], 60.00th=[15139], 00:19:46.945 | 70.00th=[16450], 80.00th=[22676], 90.00th=[27132], 95.00th=[30802], 00:19:46.945 | 99.00th=[32900], 99.50th=[33424], 99.90th=[33817], 99.95th=[41681], 00:19:46.945 | 99.99th=[41681] 00:19:46.945 bw ( KiB/s): min=16384, max=19648, per=27.04%, avg=18016.00, stdev=2308.00, samples=2 00:19:46.945 iops : min= 4096, max= 4912, avg=4504.00, stdev=577.00, samples=2 00:19:46.945 lat (msec) : 2=0.02%, 4=0.07%, 10=19.22%, 20=64.44%, 50=16.25% 00:19:46.945 cpu : usr=4.17%, sys=7.36%, ctx=371, majf=0, minf=13 00:19:46.945 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:19:46.945 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.945 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:46.945 issued rwts: total=4119,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:46.945 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:46.945 job2: (groupid=0, jobs=1): err= 0: pid=3336245: Mon Apr 15 18:06:35 2024 00:19:46.945 read: IOPS=3805, BW=14.9MiB/s (15.6MB/s)(15.0MiB/1006msec) 00:19:46.945 slat (usec): min=3, max=22141, avg=133.68, stdev=870.56 00:19:46.945 clat (usec): min=2041, max=55728, avg=16896.15, stdev=6257.90 00:19:46.945 lat (usec): min=6170, max=55744, avg=17029.83, stdev=6302.75 00:19:46.945 clat percentiles (usec): 00:19:46.945 | 1.00th=[ 6390], 5.00th=[10814], 10.00th=[12256], 20.00th=[13698], 00:19:46.945 | 30.00th=[14484], 40.00th=[15008], 50.00th=[15401], 60.00th=[15926], 00:19:46.945 | 70.00th=[17171], 80.00th=[19792], 90.00th=[21627], 95.00th=[26608], 00:19:46.945 | 99.00th=[48497], 99.50th=[48497], 99.90th=[52691], 99.95th=[52691], 00:19:46.945 | 99.99th=[55837] 00:19:46.945 write: IOPS=4071, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1006msec); 0 zone resets 
00:19:46.945 slat (usec): min=4, max=7826, avg=106.76, stdev=569.51 00:19:46.945 clat (usec): min=1335, max=48711, avg=15250.93, stdev=3720.85 00:19:46.945 lat (usec): min=1353, max=48719, avg=15357.69, stdev=3746.55 00:19:46.945 clat percentiles (usec): 00:19:46.945 | 1.00th=[ 8029], 5.00th=[10028], 10.00th=[11338], 20.00th=[13698], 00:19:46.945 | 30.00th=[14222], 40.00th=[14615], 50.00th=[15008], 60.00th=[15139], 00:19:46.945 | 70.00th=[15401], 80.00th=[16712], 90.00th=[18744], 95.00th=[22676], 00:19:46.945 | 99.00th=[32900], 99.50th=[32900], 99.90th=[34341], 99.95th=[47449], 00:19:46.945 | 99.99th=[48497] 00:19:46.945 bw ( KiB/s): min=16392, max=16408, per=24.61%, avg=16400.00, stdev=11.31, samples=2 00:19:46.945 iops : min= 4098, max= 4102, avg=4100.00, stdev= 2.83, samples=2 00:19:46.945 lat (msec) : 2=0.03%, 4=0.01%, 10=4.23%, 20=84.07%, 50=11.51% 00:19:46.945 lat (msec) : 100=0.15% 00:19:46.945 cpu : usr=3.98%, sys=4.68%, ctx=410, majf=0, minf=13 00:19:46.945 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:19:46.945 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.945 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:46.945 issued rwts: total=3828,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:46.945 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:46.945 job3: (groupid=0, jobs=1): err= 0: pid=3336246: Mon Apr 15 18:06:35 2024 00:19:46.945 read: IOPS=4047, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1012msec) 00:19:46.945 slat (usec): min=2, max=13359, avg=110.07, stdev=776.21 00:19:46.945 clat (usec): min=5062, max=27744, avg=14327.57, stdev=3269.78 00:19:46.945 lat (usec): min=5068, max=27755, avg=14437.65, stdev=3324.66 00:19:46.945 clat percentiles (usec): 00:19:46.945 | 1.00th=[ 7570], 5.00th=[10683], 10.00th=[11994], 20.00th=[12518], 00:19:46.945 | 30.00th=[13042], 40.00th=[13173], 50.00th=[13566], 60.00th=[14222], 00:19:46.945 | 70.00th=[14484], 80.00th=[14877], 90.00th=[18744], 95.00th=[22938], 00:19:46.945 | 99.00th=[25560], 99.50th=[26346], 99.90th=[27657], 99.95th=[27657], 00:19:46.945 | 99.99th=[27657] 00:19:46.945 write: IOPS=4519, BW=17.7MiB/s (18.5MB/s)(17.9MiB/1012msec); 0 zone resets 00:19:46.945 slat (usec): min=4, max=17436, avg=108.32, stdev=741.51 00:19:46.945 clat (usec): min=2205, max=81302, avg=14755.23, stdev=6394.73 00:19:46.945 lat (usec): min=2219, max=81310, avg=14863.55, stdev=6426.69 00:19:46.945 clat percentiles (usec): 00:19:46.945 | 1.00th=[ 3884], 5.00th=[ 7570], 10.00th=[ 8356], 20.00th=[12256], 00:19:46.945 | 30.00th=[12911], 40.00th=[13435], 50.00th=[13829], 60.00th=[14746], 00:19:46.945 | 70.00th=[15008], 80.00th=[15401], 90.00th=[18744], 95.00th=[28443], 00:19:46.945 | 99.00th=[35390], 99.50th=[39060], 99.90th=[74974], 99.95th=[74974], 00:19:46.945 | 99.99th=[81265] 00:19:46.945 bw ( KiB/s): min=17424, max=18188, per=26.72%, avg=17806.00, stdev=540.23, samples=2 00:19:46.945 iops : min= 4356, max= 4547, avg=4451.50, stdev=135.06, samples=2 00:19:46.945 lat (msec) : 4=0.63%, 10=8.07%, 20=82.77%, 50=8.27%, 100=0.25% 00:19:46.945 cpu : usr=3.26%, sys=5.74%, ctx=425, majf=0, minf=15 00:19:46.945 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:19:46.945 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.945 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:46.945 issued rwts: total=4096,4574,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:46.945 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:19:46.945 00:19:46.945 Run status group 0 (all jobs): 00:19:46.945 READ: bw=60.4MiB/s (63.3MB/s), 14.9MiB/s-16.0MiB/s (15.6MB/s-16.8MB/s), io=63.0MiB (66.0MB), run=1006-1043msec 00:19:46.945 WRITE: bw=65.1MiB/s (68.2MB/s), 15.3MiB/s-17.9MiB/s (16.1MB/s-18.7MB/s), io=67.9MiB (71.2MB), run=1006-1043msec 00:19:46.945 00:19:46.945 Disk stats (read/write): 00:19:46.945 nvme0n1: ios=3122/3584, merge=0/0, ticks=13960/15118, in_queue=29078, util=86.57% 00:19:46.945 nvme0n2: ios=3415/3584, merge=0/0, ticks=48265/55984, in_queue=104249, util=96.85% 00:19:46.945 nvme0n3: ios=3095/3455, merge=0/0, ticks=25303/22789, in_queue=48092, util=96.75% 00:19:46.945 nvme0n4: ios=3638/3623, merge=0/0, ticks=37339/30939, in_queue=68278, util=99.15% 00:19:46.945 18:06:35 -- target/fio.sh@55 -- # sync 00:19:46.945 18:06:35 -- target/fio.sh@59 -- # fio_pid=3336387 00:19:46.945 18:06:35 -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:19:46.945 18:06:35 -- target/fio.sh@61 -- # sleep 3 00:19:46.945 [global] 00:19:46.945 thread=1 00:19:46.945 invalidate=1 00:19:46.945 rw=read 00:19:46.945 time_based=1 00:19:46.945 runtime=10 00:19:46.945 ioengine=libaio 00:19:46.945 direct=1 00:19:46.945 bs=4096 00:19:46.945 iodepth=1 00:19:46.945 norandommap=1 00:19:46.945 numjobs=1 00:19:46.945 00:19:46.945 [job0] 00:19:46.946 filename=/dev/nvme0n1 00:19:46.946 [job1] 00:19:46.946 filename=/dev/nvme0n2 00:19:46.946 [job2] 00:19:46.946 filename=/dev/nvme0n3 00:19:46.946 [job3] 00:19:46.946 filename=/dev/nvme0n4 00:19:46.946 Could not set queue depth (nvme0n1) 00:19:46.946 Could not set queue depth (nvme0n2) 00:19:46.946 Could not set queue depth (nvme0n3) 00:19:46.946 Could not set queue depth (nvme0n4) 00:19:47.204 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:47.204 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:47.204 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:47.204 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:47.204 fio-3.35 00:19:47.204 Starting 4 threads 00:19:50.486 18:06:38 -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:19:50.486 18:06:39 -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:19:50.486 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=3485696, buflen=4096 00:19:50.486 fio: pid=3336595, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:50.486 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=14135296, buflen=4096 00:19:50.486 fio: pid=3336594, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:50.486 18:06:39 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:50.486 18:06:39 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:19:51.051 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=27328512, buflen=4096 00:19:51.051 fio: pid=3336592, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:51.051 18:06:39 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs 
$concat_malloc_bdevs 00:19:51.051 18:06:39 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:19:51.316 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=14032896, buflen=4096 00:19:51.316 fio: pid=3336593, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:51.316 18:06:40 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:51.316 18:06:40 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:19:51.316 00:19:51.316 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3336592: Mon Apr 15 18:06:40 2024 00:19:51.316 read: IOPS=1851, BW=7403KiB/s (7581kB/s)(26.1MiB/3605msec) 00:19:51.316 slat (usec): min=5, max=15696, avg=16.35, stdev=278.09 00:19:51.316 clat (usec): min=256, max=41507, avg=521.27, stdev=2493.13 00:19:51.316 lat (usec): min=263, max=41515, avg=537.62, stdev=2509.12 00:19:51.316 clat percentiles (usec): 00:19:51.316 | 1.00th=[ 269], 5.00th=[ 277], 10.00th=[ 289], 20.00th=[ 302], 00:19:51.316 | 30.00th=[ 318], 40.00th=[ 347], 50.00th=[ 367], 60.00th=[ 383], 00:19:51.316 | 70.00th=[ 396], 80.00th=[ 408], 90.00th=[ 437], 95.00th=[ 486], 00:19:51.316 | 99.00th=[ 594], 99.50th=[ 758], 99.90th=[41157], 99.95th=[41157], 00:19:51.316 | 99.99th=[41681] 00:19:51.316 bw ( KiB/s): min= 96, max=10928, per=50.36%, avg=7376.71, stdev=4315.65, samples=7 00:19:51.316 iops : min= 24, max= 2732, avg=1844.14, stdev=1078.90, samples=7 00:19:51.316 lat (usec) : 500=96.24%, 750=3.24%, 1000=0.06% 00:19:51.316 lat (msec) : 2=0.04%, 20=0.03%, 50=0.37% 00:19:51.316 cpu : usr=1.03%, sys=2.61%, ctx=6680, majf=0, minf=1 00:19:51.316 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:51.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.316 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.316 issued rwts: total=6673,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:51.316 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:51.316 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3336593: Mon Apr 15 18:06:40 2024 00:19:51.316 read: IOPS=871, BW=3484KiB/s (3568kB/s)(13.4MiB/3933msec) 00:19:51.316 slat (usec): min=6, max=15902, avg=21.69, stdev=381.14 00:19:51.316 clat (usec): min=257, max=42193, avg=1123.74, stdev=5549.75 00:19:51.316 lat (usec): min=264, max=42202, avg=1145.43, stdev=5562.12 00:19:51.316 clat percentiles (usec): 00:19:51.316 | 1.00th=[ 265], 5.00th=[ 273], 10.00th=[ 281], 20.00th=[ 297], 00:19:51.316 | 30.00th=[ 314], 40.00th=[ 326], 50.00th=[ 343], 60.00th=[ 359], 00:19:51.316 | 70.00th=[ 375], 80.00th=[ 396], 90.00th=[ 437], 95.00th=[ 498], 00:19:51.316 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:19:51.316 | 99.99th=[42206] 00:19:51.316 bw ( KiB/s): min= 96, max= 8248, per=25.03%, avg=3666.00, stdev=3282.06, samples=7 00:19:51.316 iops : min= 24, max= 2062, avg=916.43, stdev=820.40, samples=7 00:19:51.316 lat (usec) : 500=95.10%, 750=2.74%, 1000=0.09% 00:19:51.316 lat (msec) : 2=0.12%, 4=0.03%, 50=1.90% 00:19:51.316 cpu : usr=0.25%, sys=1.37%, ctx=3435, majf=0, minf=1 00:19:51.316 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:51.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:19:51.316 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.316 issued rwts: total=3427,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:51.316 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:51.316 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3336594: Mon Apr 15 18:06:40 2024 00:19:51.316 read: IOPS=1046, BW=4184KiB/s (4285kB/s)(13.5MiB/3299msec) 00:19:51.316 slat (nsec): min=7059, max=39151, avg=9411.14, stdev=2388.79 00:19:51.316 clat (usec): min=285, max=42107, avg=944.05, stdev=4721.83 00:19:51.316 lat (usec): min=293, max=42117, avg=953.47, stdev=4722.60 00:19:51.316 clat percentiles (usec): 00:19:51.316 | 1.00th=[ 310], 5.00th=[ 326], 10.00th=[ 338], 20.00th=[ 355], 00:19:51.316 | 30.00th=[ 375], 40.00th=[ 383], 50.00th=[ 392], 60.00th=[ 396], 00:19:51.316 | 70.00th=[ 404], 80.00th=[ 412], 90.00th=[ 437], 95.00th=[ 474], 00:19:51.316 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:19:51.316 | 99.99th=[42206] 00:19:51.316 bw ( KiB/s): min= 96, max= 9856, per=27.95%, avg=4094.67, stdev=3541.54, samples=6 00:19:51.316 iops : min= 24, max= 2464, avg=1023.67, stdev=885.39, samples=6 00:19:51.316 lat (usec) : 500=96.93%, 750=1.51%, 1000=0.12% 00:19:51.316 lat (msec) : 2=0.06%, 50=1.36% 00:19:51.316 cpu : usr=0.49%, sys=1.67%, ctx=3452, majf=0, minf=1 00:19:51.316 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:51.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.316 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.316 issued rwts: total=3452,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:51.316 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:51.316 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3336595: Mon Apr 15 18:06:40 2024 00:19:51.316 read: IOPS=279, BW=1118KiB/s (1144kB/s)(3404KiB/3046msec) 00:19:51.316 slat (nsec): min=6536, max=42849, avg=9071.40, stdev=4065.31 00:19:51.316 clat (usec): min=266, max=45000, avg=3566.86, stdev=11051.68 00:19:51.316 lat (usec): min=273, max=45019, avg=3575.92, stdev=11054.31 00:19:51.316 clat percentiles (usec): 00:19:51.316 | 1.00th=[ 273], 5.00th=[ 281], 10.00th=[ 285], 20.00th=[ 293], 00:19:51.316 | 30.00th=[ 302], 40.00th=[ 306], 50.00th=[ 310], 60.00th=[ 318], 00:19:51.316 | 70.00th=[ 322], 80.00th=[ 330], 90.00th=[ 396], 95.00th=[41157], 00:19:51.316 | 99.00th=[41157], 99.50th=[41157], 99.90th=[44827], 99.95th=[44827], 00:19:51.316 | 99.99th=[44827] 00:19:51.316 bw ( KiB/s): min= 96, max= 3232, per=7.73%, avg=1132.00, stdev=1306.31, samples=6 00:19:51.316 iops : min= 24, max= 808, avg=283.00, stdev=326.58, samples=6 00:19:51.316 lat (usec) : 500=91.08%, 750=0.82% 00:19:51.316 lat (msec) : 50=7.98% 00:19:51.316 cpu : usr=0.23%, sys=0.33%, ctx=852, majf=0, minf=1 00:19:51.316 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:51.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.316 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.316 issued rwts: total=852,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:51.316 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:51.316 00:19:51.316 Run status group 0 (all jobs): 00:19:51.316 READ: bw=14.3MiB/s (15.0MB/s), 1118KiB/s-7403KiB/s (1144kB/s-7581kB/s), io=56.2MiB (59.0MB), run=3046-3933msec 00:19:51.316 00:19:51.316 
Disk stats (read/write): 00:19:51.316 nvme0n1: ios=6640/0, merge=0/0, ticks=3396/0, in_queue=3396, util=94.79% 00:19:51.316 nvme0n2: ios=3423/0, merge=0/0, ticks=3698/0, in_queue=3698, util=95.55% 00:19:51.316 nvme0n3: ios=3073/0, merge=0/0, ticks=2999/0, in_queue=2999, util=96.69% 00:19:51.316 nvme0n4: ios=846/0, merge=0/0, ticks=2828/0, in_queue=2828, util=96.70% 00:19:51.578 18:06:40 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:51.578 18:06:40 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:19:51.835 18:06:40 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:51.835 18:06:40 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:19:52.092 18:06:40 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:52.092 18:06:40 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:19:52.350 18:06:41 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:52.350 18:06:41 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:19:52.915 18:06:41 -- target/fio.sh@69 -- # fio_status=0 00:19:52.915 18:06:41 -- target/fio.sh@70 -- # wait 3336387 00:19:52.915 18:06:41 -- target/fio.sh@70 -- # fio_status=4 00:19:52.915 18:06:41 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:52.915 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:52.915 18:06:41 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:52.915 18:06:41 -- common/autotest_common.sh@1205 -- # local i=0 00:19:52.915 18:06:41 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:19:52.915 18:06:41 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:52.915 18:06:41 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:19:52.915 18:06:41 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:52.915 18:06:41 -- common/autotest_common.sh@1217 -- # return 0 00:19:52.915 18:06:41 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:19:52.915 18:06:41 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:19:52.915 nvmf hotplug test: fio failed as expected 00:19:52.915 18:06:41 -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:53.480 18:06:42 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:19:53.480 18:06:42 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:19:53.480 18:06:42 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:19:53.480 18:06:42 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:19:53.480 18:06:42 -- target/fio.sh@91 -- # nvmftestfini 00:19:53.480 18:06:42 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:53.480 18:06:42 -- nvmf/common.sh@117 -- # sync 00:19:53.480 18:06:42 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:53.480 18:06:42 -- nvmf/common.sh@120 -- # set +e 00:19:53.480 18:06:42 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:53.480 18:06:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:53.480 rmmod nvme_tcp 00:19:53.480 rmmod nvme_fabrics 00:19:53.480 rmmod 
nvme_keyring 00:19:53.480 18:06:42 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:53.480 18:06:42 -- nvmf/common.sh@124 -- # set -e 00:19:53.480 18:06:42 -- nvmf/common.sh@125 -- # return 0 00:19:53.480 18:06:42 -- nvmf/common.sh@478 -- # '[' -n 3334344 ']' 00:19:53.480 18:06:42 -- nvmf/common.sh@479 -- # killprocess 3334344 00:19:53.480 18:06:42 -- common/autotest_common.sh@936 -- # '[' -z 3334344 ']' 00:19:53.480 18:06:42 -- common/autotest_common.sh@940 -- # kill -0 3334344 00:19:53.480 18:06:42 -- common/autotest_common.sh@941 -- # uname 00:19:53.480 18:06:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:53.480 18:06:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3334344 00:19:53.480 18:06:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:53.480 18:06:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:53.480 18:06:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3334344' 00:19:53.480 killing process with pid 3334344 00:19:53.480 18:06:42 -- common/autotest_common.sh@955 -- # kill 3334344 00:19:53.480 18:06:42 -- common/autotest_common.sh@960 -- # wait 3334344 00:19:53.738 18:06:42 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:53.738 18:06:42 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:53.738 18:06:42 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:53.738 18:06:42 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:53.738 18:06:42 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:53.738 18:06:42 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:53.738 18:06:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:53.738 18:06:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:56.265 18:06:44 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:56.265 00:19:56.265 real 0m25.919s 00:19:56.265 user 1m33.769s 00:19:56.265 sys 0m6.993s 00:19:56.265 18:06:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:56.265 18:06:44 -- common/autotest_common.sh@10 -- # set +x 00:19:56.265 ************************************ 00:19:56.265 END TEST nvmf_fio_target 00:19:56.265 ************************************ 00:19:56.265 18:06:44 -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:19:56.265 18:06:44 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:56.265 18:06:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:56.265 18:06:44 -- common/autotest_common.sh@10 -- # set +x 00:19:56.265 ************************************ 00:19:56.265 START TEST nvmf_bdevio 00:19:56.265 ************************************ 00:19:56.265 18:06:44 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:19:56.265 * Looking for test storage... 
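The nvmf_fio_target run that closes above is a hotplug test: four time-based fio read jobs are started against the exported namespaces, the backing malloc and RAID bdevs are then deleted over RPC while I/O is in flight, and each job is expected to end in err=121 (Remote I/O error) rather than complete its 10-second run, which is why the wrapper prints 'fio failed as expected'. A minimal sketch of that sequence, assuming a connected /dev/nvme0n1 and the rpc.py script used throughout this log; the job name is illustrative:

  # background a time-based read job matching the [global] section above
  fio --name=hotplug --ioengine=libaio --direct=1 --rw=read --bs=4096 \
      --iodepth=1 --time_based --runtime=10 --filename=/dev/nvme0n1 &
  fio_pid=$!

  # delete a backing bdev while reads are in flight; its namespace vanishes
  scripts/rpc.py bdev_malloc_delete Malloc0

  # fio is expected to exit with Remote I/O errors instead of hanging
  wait $fio_pid || echo 'nvmf hotplug test: fio failed as expected'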
00:19:56.265 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:56.265 18:06:44 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:56.265 18:06:44 -- nvmf/common.sh@7 -- # uname -s 00:19:56.265 18:06:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:56.265 18:06:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:56.265 18:06:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:56.265 18:06:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:56.265 18:06:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:56.265 18:06:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:56.265 18:06:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:56.265 18:06:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:56.265 18:06:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:56.265 18:06:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:56.265 18:06:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:56.265 18:06:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:19:56.265 18:06:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:56.265 18:06:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:56.265 18:06:44 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:56.265 18:06:44 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:56.265 18:06:44 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:56.265 18:06:44 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:56.265 18:06:44 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:56.265 18:06:44 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:56.265 18:06:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.265 18:06:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.265 18:06:44 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.265 18:06:44 -- paths/export.sh@5 -- # export PATH 00:19:56.265 18:06:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.265 18:06:44 -- nvmf/common.sh@47 -- # : 0 00:19:56.265 18:06:44 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:56.266 18:06:44 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:56.266 18:06:44 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:56.266 18:06:44 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:56.266 18:06:44 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:56.266 18:06:44 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:56.266 18:06:44 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:56.266 18:06:44 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:56.266 18:06:44 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:56.266 18:06:44 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:56.266 18:06:44 -- target/bdevio.sh@14 -- # nvmftestinit 00:19:56.266 18:06:44 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:56.266 18:06:44 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:56.266 18:06:44 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:56.266 18:06:44 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:56.266 18:06:44 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:56.266 18:06:44 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:56.266 18:06:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:56.266 18:06:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:56.266 18:06:44 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:19:56.266 18:06:44 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:19:56.266 18:06:44 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:56.266 18:06:44 -- common/autotest_common.sh@10 -- # set +x 00:19:58.166 18:06:47 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:58.166 18:06:47 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:58.166 18:06:47 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:58.166 18:06:47 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:58.166 18:06:47 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:58.166 18:06:47 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:58.166 18:06:47 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:58.166 18:06:47 -- nvmf/common.sh@295 -- # net_devs=() 00:19:58.166 18:06:47 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:58.166 18:06:47 -- nvmf/common.sh@296 
-- # e810=() 00:19:58.166 18:06:47 -- nvmf/common.sh@296 -- # local -ga e810 00:19:58.166 18:06:47 -- nvmf/common.sh@297 -- # x722=() 00:19:58.166 18:06:47 -- nvmf/common.sh@297 -- # local -ga x722 00:19:58.166 18:06:47 -- nvmf/common.sh@298 -- # mlx=() 00:19:58.166 18:06:47 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:58.166 18:06:47 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:58.166 18:06:47 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:58.166 18:06:47 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:58.166 18:06:47 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:58.166 18:06:47 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:58.166 18:06:47 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:58.166 18:06:47 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:58.166 18:06:47 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:58.166 18:06:47 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:58.166 18:06:47 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:58.166 18:06:47 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:58.166 18:06:47 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:58.166 18:06:47 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:58.166 18:06:47 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:58.166 18:06:47 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:58.166 18:06:47 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:58.166 18:06:47 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:58.166 18:06:47 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:58.166 18:06:47 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:19:58.166 Found 0000:84:00.0 (0x8086 - 0x159b) 00:19:58.166 18:06:47 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:58.166 18:06:47 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:58.166 18:06:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:58.166 18:06:47 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:58.166 18:06:47 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:58.166 18:06:47 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:58.166 18:06:47 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:19:58.166 Found 0000:84:00.1 (0x8086 - 0x159b) 00:19:58.166 18:06:47 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:58.166 18:06:47 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:58.166 18:06:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:58.166 18:06:47 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:58.166 18:06:47 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:58.166 18:06:47 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:58.166 18:06:47 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:58.166 18:06:47 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:58.166 18:06:47 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:58.166 18:06:47 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:58.166 18:06:47 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:58.166 18:06:47 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:58.166 18:06:47 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:19:58.166 Found 
net devices under 0000:84:00.0: cvl_0_0 00:19:58.166 18:06:47 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:58.166 18:06:47 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:58.166 18:06:47 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:58.166 18:06:47 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:58.166 18:06:47 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:58.166 18:06:47 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:19:58.166 Found net devices under 0000:84:00.1: cvl_0_1 00:19:58.166 18:06:47 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:58.166 18:06:47 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:58.166 18:06:47 -- nvmf/common.sh@403 -- # is_hw=yes 00:19:58.166 18:06:47 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:19:58.166 18:06:47 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:19:58.166 18:06:47 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:19:58.166 18:06:47 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:58.166 18:06:47 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:58.166 18:06:47 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:58.166 18:06:47 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:58.166 18:06:47 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:58.166 18:06:47 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:58.166 18:06:47 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:58.166 18:06:47 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:58.166 18:06:47 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:58.166 18:06:47 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:58.166 18:06:47 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:58.166 18:06:47 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:58.166 18:06:47 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:58.425 18:06:47 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:58.425 18:06:47 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:58.425 18:06:47 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:58.425 18:06:47 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:58.425 18:06:47 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:58.425 18:06:47 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:58.425 18:06:47 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:58.425 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:58.425 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.264 ms 00:19:58.425 00:19:58.425 --- 10.0.0.2 ping statistics --- 00:19:58.425 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:58.425 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:19:58.425 18:06:47 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:58.425 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:58.425 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:19:58.425 00:19:58.425 --- 10.0.0.1 ping statistics --- 00:19:58.425 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:58.425 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:19:58.425 18:06:47 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:58.425 18:06:47 -- nvmf/common.sh@411 -- # return 0 00:19:58.425 18:06:47 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:58.425 18:06:47 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:58.425 18:06:47 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:58.425 18:06:47 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:58.425 18:06:47 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:58.425 18:06:47 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:58.425 18:06:47 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:58.425 18:06:47 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:58.425 18:06:47 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:58.425 18:06:47 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:58.425 18:06:47 -- common/autotest_common.sh@10 -- # set +x 00:19:58.425 18:06:47 -- nvmf/common.sh@470 -- # nvmfpid=3339348 00:19:58.425 18:06:47 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:19:58.425 18:06:47 -- nvmf/common.sh@471 -- # waitforlisten 3339348 00:19:58.425 18:06:47 -- common/autotest_common.sh@817 -- # '[' -z 3339348 ']' 00:19:58.425 18:06:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:58.425 18:06:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:58.425 18:06:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:58.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:58.425 18:06:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:58.425 18:06:47 -- common/autotest_common.sh@10 -- # set +x 00:19:58.425 [2024-04-15 18:06:47.320140] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:19:58.425 [2024-04-15 18:06:47.320242] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:58.425 EAL: No free 2048 kB hugepages reported on node 1 00:19:58.683 [2024-04-15 18:06:47.403946] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:58.683 [2024-04-15 18:06:47.500923] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:58.683 [2024-04-15 18:06:47.500991] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:58.683 [2024-04-15 18:06:47.501009] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:58.683 [2024-04-15 18:06:47.501025] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:58.683 [2024-04-15 18:06:47.501038] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
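The interface plumbing that precedes this target launch is what lets one dual-port NIC play both sides of the fabric: port cvl_0_0 is moved into a private network namespace and addressed as the target (10.0.0.2), port cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), and the two pings above confirm each direction before nvmf_tgt is started. Condensed from the nvmf_tcp_init steps in the log, with the same interface names:

  ip netns add cvl_0_0_ns_spdk                       # private namespace for the target side
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move port 0 into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                 # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator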
00:19:58.683 [2024-04-15 18:06:47.501138] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:19:58.683 [2024-04-15 18:06:47.501194] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:19:58.683 [2024-04-15 18:06:47.501246] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:19:58.683 [2024-04-15 18:06:47.501249] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:58.942 18:06:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:58.942 18:06:47 -- common/autotest_common.sh@850 -- # return 0 00:19:58.942 18:06:47 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:58.942 18:06:47 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:58.942 18:06:47 -- common/autotest_common.sh@10 -- # set +x 00:19:58.942 18:06:47 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:58.942 18:06:47 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:58.942 18:06:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:58.942 18:06:47 -- common/autotest_common.sh@10 -- # set +x 00:19:58.942 [2024-04-15 18:06:47.675043] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:58.942 18:06:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:58.942 18:06:47 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:58.942 18:06:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:58.942 18:06:47 -- common/autotest_common.sh@10 -- # set +x 00:19:58.942 Malloc0 00:19:58.942 18:06:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:58.942 18:06:47 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:58.942 18:06:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:58.942 18:06:47 -- common/autotest_common.sh@10 -- # set +x 00:19:58.942 18:06:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:58.942 18:06:47 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:58.942 18:06:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:58.942 18:06:47 -- common/autotest_common.sh@10 -- # set +x 00:19:58.942 18:06:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:58.942 18:06:47 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:58.942 18:06:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:58.942 18:06:47 -- common/autotest_common.sh@10 -- # set +x 00:19:58.942 [2024-04-15 18:06:47.729653] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:58.942 18:06:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:58.942 18:06:47 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:19:58.942 18:06:47 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:58.942 18:06:47 -- nvmf/common.sh@521 -- # config=() 00:19:58.942 18:06:47 -- nvmf/common.sh@521 -- # local subsystem config 00:19:58.942 18:06:47 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:58.942 18:06:47 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:58.942 { 00:19:58.942 "params": { 00:19:58.942 "name": "Nvme$subsystem", 00:19:58.942 "trtype": "$TEST_TRANSPORT", 00:19:58.942 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:58.942 "adrfam": "ipv4", 00:19:58.942 "trsvcid": 
"$NVMF_PORT", 00:19:58.942 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:58.942 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:58.942 "hdgst": ${hdgst:-false}, 00:19:58.942 "ddgst": ${ddgst:-false} 00:19:58.942 }, 00:19:58.942 "method": "bdev_nvme_attach_controller" 00:19:58.942 } 00:19:58.942 EOF 00:19:58.942 )") 00:19:58.942 18:06:47 -- nvmf/common.sh@543 -- # cat 00:19:58.942 18:06:47 -- nvmf/common.sh@545 -- # jq . 00:19:58.942 18:06:47 -- nvmf/common.sh@546 -- # IFS=, 00:19:58.942 18:06:47 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:19:58.942 "params": { 00:19:58.942 "name": "Nvme1", 00:19:58.942 "trtype": "tcp", 00:19:58.942 "traddr": "10.0.0.2", 00:19:58.942 "adrfam": "ipv4", 00:19:58.942 "trsvcid": "4420", 00:19:58.942 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:58.942 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:58.942 "hdgst": false, 00:19:58.942 "ddgst": false 00:19:58.942 }, 00:19:58.942 "method": "bdev_nvme_attach_controller" 00:19:58.942 }' 00:19:58.942 [2024-04-15 18:06:47.780973] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:19:58.942 [2024-04-15 18:06:47.781056] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3339398 ] 00:19:58.942 EAL: No free 2048 kB hugepages reported on node 1 00:19:58.942 [2024-04-15 18:06:47.882888] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:59.201 [2024-04-15 18:06:47.981294] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:59.201 [2024-04-15 18:06:47.981349] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:59.201 [2024-04-15 18:06:47.981352] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:59.201 [2024-04-15 18:06:47.990250] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:19:59.459 I/O targets: 00:19:59.459 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:59.459 00:19:59.459 00:19:59.459 CUnit - A unit testing framework for C - Version 2.1-3 00:19:59.459 http://cunit.sourceforge.net/ 00:19:59.459 00:19:59.459 00:19:59.459 Suite: bdevio tests on: Nvme1n1 00:19:59.459 Test: blockdev write read block ...passed 00:19:59.459 Test: blockdev write zeroes read block ...passed 00:19:59.460 Test: blockdev write zeroes read no split ...passed 00:19:59.718 Test: blockdev write zeroes read split ...passed 00:19:59.718 Test: blockdev write zeroes read split partial ...passed 00:19:59.718 Test: blockdev reset ...[2024-04-15 18:06:48.500548] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:59.718 [2024-04-15 18:06:48.500682] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2282d00 (9): Bad file descriptor 00:19:59.718 [2024-04-15 18:06:48.512202] bdev_nvme.c:2050:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:59.718 passed 00:19:59.718 Test: blockdev write read 8 blocks ...passed 00:19:59.718 Test: blockdev write read size > 128k ...passed 00:19:59.718 Test: blockdev write read invalid size ...passed 00:19:59.718 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:59.718 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:59.718 Test: blockdev write read max offset ...passed 00:19:59.976 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:59.976 Test: blockdev writev readv 8 blocks ...passed 00:19:59.976 Test: blockdev writev readv 30 x 1block ...passed 00:19:59.976 Test: blockdev writev readv block ...passed 00:19:59.976 Test: blockdev writev readv size > 128k ...passed 00:19:59.976 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:59.976 Test: blockdev comparev and writev ...[2024-04-15 18:06:48.811256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:59.976 [2024-04-15 18:06:48.811299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:59.976 [2024-04-15 18:06:48.811336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:59.976 [2024-04-15 18:06:48.811359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:59.976 [2024-04-15 18:06:48.811897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:59.976 [2024-04-15 18:06:48.811924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:59.976 [2024-04-15 18:06:48.811952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:59.976 [2024-04-15 18:06:48.811975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:59.976 [2024-04-15 18:06:48.812553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:59.976 [2024-04-15 18:06:48.812582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:59.976 [2024-04-15 18:06:48.812607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:59.976 [2024-04-15 18:06:48.812626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:59.976 [2024-04-15 18:06:48.813106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:59.976 [2024-04-15 18:06:48.813135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:59.976 [2024-04-15 18:06:48.813161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:59.976 [2024-04-15 18:06:48.813179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:59.976 passed 00:19:59.976 Test: blockdev nvme passthru rw ...passed 00:19:59.976 Test: blockdev nvme passthru vendor specific ...[2024-04-15 18:06:48.897601] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:59.976 [2024-04-15 18:06:48.897635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:59.976 [2024-04-15 18:06:48.897942] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:59.976 [2024-04-15 18:06:48.897969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:59.976 [2024-04-15 18:06:48.898183] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:59.976 [2024-04-15 18:06:48.898211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:59.976 [2024-04-15 18:06:48.898421] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:59.976 [2024-04-15 18:06:48.898448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:59.976 passed 00:19:59.976 Test: blockdev nvme admin passthru ...passed 00:20:00.235 Test: blockdev copy ...passed 00:20:00.235 00:20:00.235 Run Summary: Type Total Ran Passed Failed Inactive 00:20:00.235 suites 1 1 n/a 0 0 00:20:00.235 tests 23 23 23 0 0 00:20:00.235 asserts 152 152 152 0 n/a 00:20:00.235 00:20:00.235 Elapsed time = 1.324 seconds 00:20:00.235 18:06:49 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:00.235 18:06:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:00.235 18:06:49 -- common/autotest_common.sh@10 -- # set +x 00:20:00.235 18:06:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:00.235 18:06:49 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:20:00.235 18:06:49 -- target/bdevio.sh@30 -- # nvmftestfini 00:20:00.235 18:06:49 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:00.235 18:06:49 -- nvmf/common.sh@117 -- # sync 00:20:00.235 18:06:49 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:00.235 18:06:49 -- nvmf/common.sh@120 -- # set +e 00:20:00.235 18:06:49 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:00.235 18:06:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:00.235 rmmod nvme_tcp 00:20:00.492 rmmod nvme_fabrics 00:20:00.492 rmmod nvme_keyring 00:20:00.492 18:06:49 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:00.493 18:06:49 -- nvmf/common.sh@124 -- # set -e 00:20:00.493 18:06:49 -- nvmf/common.sh@125 -- # return 0 00:20:00.493 18:06:49 -- nvmf/common.sh@478 -- # '[' -n 3339348 ']' 00:20:00.493 18:06:49 -- nvmf/common.sh@479 -- # killprocess 3339348 00:20:00.493 18:06:49 -- common/autotest_common.sh@936 -- # '[' -z 3339348 ']' 00:20:00.493 18:06:49 -- common/autotest_common.sh@940 -- # kill -0 3339348 00:20:00.493 18:06:49 -- common/autotest_common.sh@941 -- # uname 00:20:00.493 18:06:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:00.493 18:06:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3339348 00:20:00.493 18:06:49 -- 
common/autotest_common.sh@942 -- # process_name=reactor_3 00:20:00.493 18:06:49 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:20:00.493 18:06:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3339348' 00:20:00.493 killing process with pid 3339348 00:20:00.493 18:06:49 -- common/autotest_common.sh@955 -- # kill 3339348 00:20:00.493 18:06:49 -- common/autotest_common.sh@960 -- # wait 3339348 00:20:00.780 18:06:49 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:00.780 18:06:49 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:00.780 18:06:49 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:00.780 18:06:49 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:00.780 18:06:49 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:00.780 18:06:49 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:00.780 18:06:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:00.780 18:06:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:02.683 18:06:51 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:02.683 00:20:02.683 real 0m6.883s 00:20:02.683 user 0m11.600s 00:20:02.683 sys 0m2.422s 00:20:02.683 18:06:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:02.683 18:06:51 -- common/autotest_common.sh@10 -- # set +x 00:20:02.683 ************************************ 00:20:02.683 END TEST nvmf_bdevio 00:20:02.683 ************************************ 00:20:02.684 18:06:51 -- nvmf/nvmf.sh@58 -- # '[' tcp = tcp ']' 00:20:02.684 18:06:51 -- nvmf/nvmf.sh@59 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:20:02.684 18:06:51 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:20:02.684 18:06:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:02.684 18:06:51 -- common/autotest_common.sh@10 -- # set +x 00:20:02.942 ************************************ 00:20:02.942 START TEST nvmf_bdevio_no_huge 00:20:02.942 ************************************ 00:20:02.942 18:06:51 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:20:02.942 * Looking for test storage... 
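Everything the bdevio suite above exercised was stood up with five RPC calls before the tests ran: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, a subsystem, a namespace, and a listener; bdevio then attaches as a host through the generated JSON shown earlier. The sequence, condensed from the log (long script paths shortened to scripts/rpc.py):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0    # MALLOC_BDEV_SIZE x MALLOC_BLOCK_SIZE
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420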
00:20:02.942 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:02.942 18:06:51 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:02.942 18:06:51 -- nvmf/common.sh@7 -- # uname -s 00:20:02.942 18:06:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:02.942 18:06:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:02.942 18:06:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:02.942 18:06:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:02.942 18:06:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:02.942 18:06:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:02.942 18:06:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:02.942 18:06:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:02.942 18:06:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:02.942 18:06:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:02.942 18:06:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:02.942 18:06:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:20:02.942 18:06:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:02.942 18:06:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:02.942 18:06:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:02.942 18:06:51 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:02.942 18:06:51 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:02.942 18:06:51 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:02.942 18:06:51 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:02.942 18:06:51 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:02.942 18:06:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:02.942 18:06:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:02.942 18:06:51 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:02.942 18:06:51 -- paths/export.sh@5 -- # export PATH 00:20:02.942 18:06:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:02.942 18:06:51 -- nvmf/common.sh@47 -- # : 0 00:20:02.942 18:06:51 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:02.942 18:06:51 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:02.942 18:06:51 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:02.942 18:06:51 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:02.942 18:06:51 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:02.942 18:06:51 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:02.942 18:06:51 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:02.942 18:06:51 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:02.943 18:06:51 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:02.943 18:06:51 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:02.943 18:06:51 -- target/bdevio.sh@14 -- # nvmftestinit 00:20:02.943 18:06:51 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:02.943 18:06:51 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:02.943 18:06:51 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:02.943 18:06:51 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:02.943 18:06:51 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:02.943 18:06:51 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:02.943 18:06:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:02.943 18:06:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:02.943 18:06:51 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:20:02.943 18:06:51 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:20:02.943 18:06:51 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:02.943 18:06:51 -- common/autotest_common.sh@10 -- # set +x 00:20:05.474 18:06:54 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:05.474 18:06:54 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:05.474 18:06:54 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:05.474 18:06:54 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:05.474 18:06:54 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:05.474 18:06:54 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:05.474 18:06:54 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:05.474 18:06:54 -- nvmf/common.sh@295 -- # net_devs=() 00:20:05.474 18:06:54 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:05.474 18:06:54 -- nvmf/common.sh@296 
-- # e810=() 00:20:05.474 18:06:54 -- nvmf/common.sh@296 -- # local -ga e810 00:20:05.474 18:06:54 -- nvmf/common.sh@297 -- # x722=() 00:20:05.474 18:06:54 -- nvmf/common.sh@297 -- # local -ga x722 00:20:05.474 18:06:54 -- nvmf/common.sh@298 -- # mlx=() 00:20:05.474 18:06:54 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:05.474 18:06:54 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:05.474 18:06:54 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:05.474 18:06:54 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:05.474 18:06:54 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:05.474 18:06:54 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:05.474 18:06:54 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:05.474 18:06:54 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:05.474 18:06:54 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:05.474 18:06:54 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:05.474 18:06:54 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:05.474 18:06:54 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:05.474 18:06:54 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:05.474 18:06:54 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:05.474 18:06:54 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:05.474 18:06:54 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:05.474 18:06:54 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:05.474 18:06:54 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:05.474 18:06:54 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:05.474 18:06:54 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:20:05.474 Found 0000:84:00.0 (0x8086 - 0x159b) 00:20:05.474 18:06:54 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:05.474 18:06:54 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:05.474 18:06:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:05.474 18:06:54 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:05.474 18:06:54 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:05.474 18:06:54 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:05.474 18:06:54 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:20:05.474 Found 0000:84:00.1 (0x8086 - 0x159b) 00:20:05.474 18:06:54 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:05.474 18:06:54 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:05.474 18:06:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:05.474 18:06:54 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:05.474 18:06:54 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:05.474 18:06:54 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:05.474 18:06:54 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:05.474 18:06:54 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:05.474 18:06:54 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:05.474 18:06:54 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:05.474 18:06:54 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:05.474 18:06:54 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:05.474 18:06:54 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:20:05.474 Found 
net devices under 0000:84:00.0: cvl_0_0 00:20:05.474 18:06:54 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:05.474 18:06:54 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:05.474 18:06:54 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:05.474 18:06:54 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:05.474 18:06:54 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:05.474 18:06:54 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:20:05.474 Found net devices under 0000:84:00.1: cvl_0_1 00:20:05.474 18:06:54 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:05.474 18:06:54 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:20:05.474 18:06:54 -- nvmf/common.sh@403 -- # is_hw=yes 00:20:05.474 18:06:54 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:20:05.474 18:06:54 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:20:05.474 18:06:54 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:20:05.474 18:06:54 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:05.474 18:06:54 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:05.474 18:06:54 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:05.474 18:06:54 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:05.474 18:06:54 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:05.474 18:06:54 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:05.474 18:06:54 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:05.474 18:06:54 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:05.474 18:06:54 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:05.474 18:06:54 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:05.474 18:06:54 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:05.474 18:06:54 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:05.474 18:06:54 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:05.474 18:06:54 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:05.474 18:06:54 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:05.474 18:06:54 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:05.475 18:06:54 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:05.475 18:06:54 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:05.475 18:06:54 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:05.475 18:06:54 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:05.475 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:05.475 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.155 ms 00:20:05.475 00:20:05.475 --- 10.0.0.2 ping statistics --- 00:20:05.475 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:05.475 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:20:05.475 18:06:54 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:05.475 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:05.475 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms 00:20:05.475 00:20:05.475 --- 10.0.0.1 ping statistics --- 00:20:05.475 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:05.475 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:20:05.475 18:06:54 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:05.475 18:06:54 -- nvmf/common.sh@411 -- # return 0 00:20:05.475 18:06:54 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:05.475 18:06:54 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:05.475 18:06:54 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:05.475 18:06:54 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:05.475 18:06:54 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:05.475 18:06:54 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:05.475 18:06:54 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:05.475 18:06:54 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:20:05.475 18:06:54 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:05.475 18:06:54 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:05.475 18:06:54 -- common/autotest_common.sh@10 -- # set +x 00:20:05.475 18:06:54 -- nvmf/common.sh@470 -- # nvmfpid=3341604 00:20:05.475 18:06:54 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:20:05.475 18:06:54 -- nvmf/common.sh@471 -- # waitforlisten 3341604 00:20:05.475 18:06:54 -- common/autotest_common.sh@817 -- # '[' -z 3341604 ']' 00:20:05.475 18:06:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:05.475 18:06:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:05.475 18:06:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:05.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:05.475 18:06:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:05.475 18:06:54 -- common/autotest_common.sh@10 -- # set +x 00:20:05.475 [2024-04-15 18:06:54.286179] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:20:05.475 [2024-04-15 18:06:54.286273] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:20:05.475 [2024-04-15 18:06:54.378419] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:05.732 [2024-04-15 18:06:54.470719] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:05.732 [2024-04-15 18:06:54.470788] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:05.732 [2024-04-15 18:06:54.470805] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:05.732 [2024-04-15 18:06:54.470820] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:05.732 [2024-04-15 18:06:54.470833] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
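The trace above assembles the loopback test bed: one E810 port (cvl_0_0) is moved into a private network namespace to play the target, while its sibling port (cvl_0_1) stays in the root namespace as the initiator, and reachability is verified in both directions with a single ping. Condensed to the commands actually run (interface names and addresses taken from this log; they will differ on other hosts):

  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target side lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
  ping -c 1 10.0.0.2                                    # root ns -> namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # namespace -> root ns

Every target process afterwards is launched through "ip netns exec cvl_0_0_ns_spdk", which is what the NVMF_TARGET_NS_CMD prefix in the trace expands to.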
00:20:05.732 [2024-04-15 18:06:54.470920] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:20:05.732 [2024-04-15 18:06:54.470975] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:20:05.732 [2024-04-15 18:06:54.471025] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:20:05.732 [2024-04-15 18:06:54.471027] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:05.732 18:06:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:05.732 18:06:54 -- common/autotest_common.sh@850 -- # return 0 00:20:05.732 18:06:54 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:05.732 18:06:54 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:05.732 18:06:54 -- common/autotest_common.sh@10 -- # set +x 00:20:05.732 18:06:54 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:05.732 18:06:54 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:05.732 18:06:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:05.732 18:06:54 -- common/autotest_common.sh@10 -- # set +x 00:20:05.732 [2024-04-15 18:06:54.590797] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:05.732 18:06:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:05.733 18:06:54 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:05.733 18:06:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:05.733 18:06:54 -- common/autotest_common.sh@10 -- # set +x 00:20:05.733 Malloc0 00:20:05.733 18:06:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:05.733 18:06:54 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:05.733 18:06:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:05.733 18:06:54 -- common/autotest_common.sh@10 -- # set +x 00:20:05.733 18:06:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:05.733 18:06:54 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:05.733 18:06:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:05.733 18:06:54 -- common/autotest_common.sh@10 -- # set +x 00:20:05.733 18:06:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:05.733 18:06:54 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:05.733 18:06:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:05.733 18:06:54 -- common/autotest_common.sh@10 -- # set +x 00:20:05.733 [2024-04-15 18:06:54.629566] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:05.733 18:06:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:05.733 18:06:54 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:20:05.733 18:06:54 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:20:05.733 18:06:54 -- nvmf/common.sh@521 -- # config=() 00:20:05.733 18:06:54 -- nvmf/common.sh@521 -- # local subsystem config 00:20:05.733 18:06:54 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:05.733 18:06:54 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:05.733 { 00:20:05.733 "params": { 00:20:05.733 "name": "Nvme$subsystem", 00:20:05.733 "trtype": "$TEST_TRANSPORT", 00:20:05.733 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:05.733 "adrfam": "ipv4", 00:20:05.733 
"trsvcid": "$NVMF_PORT", 00:20:05.733 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:05.733 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:05.733 "hdgst": ${hdgst:-false}, 00:20:05.733 "ddgst": ${ddgst:-false} 00:20:05.733 }, 00:20:05.733 "method": "bdev_nvme_attach_controller" 00:20:05.733 } 00:20:05.733 EOF 00:20:05.733 )") 00:20:05.733 18:06:54 -- nvmf/common.sh@543 -- # cat 00:20:05.733 18:06:54 -- nvmf/common.sh@545 -- # jq . 00:20:05.733 18:06:54 -- nvmf/common.sh@546 -- # IFS=, 00:20:05.733 18:06:54 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:20:05.733 "params": { 00:20:05.733 "name": "Nvme1", 00:20:05.733 "trtype": "tcp", 00:20:05.733 "traddr": "10.0.0.2", 00:20:05.733 "adrfam": "ipv4", 00:20:05.733 "trsvcid": "4420", 00:20:05.733 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:05.733 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:05.733 "hdgst": false, 00:20:05.733 "ddgst": false 00:20:05.733 }, 00:20:05.733 "method": "bdev_nvme_attach_controller" 00:20:05.733 }' 00:20:05.733 [2024-04-15 18:06:54.680191] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:20:05.733 [2024-04-15 18:06:54.680287] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3341640 ] 00:20:05.990 [2024-04-15 18:06:54.759733] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:05.991 [2024-04-15 18:06:54.852030] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:05.991 [2024-04-15 18:06:54.852086] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:05.991 [2024-04-15 18:06:54.852090] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:05.991 [2024-04-15 18:06:54.860990] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:20:06.248 I/O targets: 00:20:06.248 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:20:06.248 00:20:06.248 00:20:06.248 CUnit - A unit testing framework for C - Version 2.1-3 00:20:06.248 http://cunit.sourceforge.net/ 00:20:06.248 00:20:06.248 00:20:06.248 Suite: bdevio tests on: Nvme1n1 00:20:06.248 Test: blockdev write read block ...passed 00:20:06.505 Test: blockdev write zeroes read block ...passed 00:20:06.505 Test: blockdev write zeroes read no split ...passed 00:20:06.505 Test: blockdev write zeroes read split ...passed 00:20:06.505 Test: blockdev write zeroes read split partial ...passed 00:20:06.505 Test: blockdev reset ...[2024-04-15 18:06:55.349785] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:06.505 [2024-04-15 18:06:55.349918] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1438570 (9): Bad file descriptor 00:20:06.505 [2024-04-15 18:06:55.404056] bdev_nvme.c:2050:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:20:06.505 passed 00:20:06.505 Test: blockdev write read 8 blocks ...passed 00:20:06.505 Test: blockdev write read size > 128k ...passed 00:20:06.505 Test: blockdev write read invalid size ...passed 00:20:06.762 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:06.762 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:06.762 Test: blockdev write read max offset ...passed 00:20:06.762 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:06.762 Test: blockdev writev readv 8 blocks ...passed 00:20:06.762 Test: blockdev writev readv 30 x 1block ...passed 00:20:06.762 Test: blockdev writev readv block ...passed 00:20:06.762 Test: blockdev writev readv size > 128k ...passed 00:20:06.762 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:06.762 Test: blockdev comparev and writev ...[2024-04-15 18:06:55.663596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:06.762 [2024-04-15 18:06:55.663640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:06.762 [2024-04-15 18:06:55.663669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:06.762 [2024-04-15 18:06:55.663689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:06.762 [2024-04-15 18:06:55.664167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:06.762 [2024-04-15 18:06:55.664203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:06.762 [2024-04-15 18:06:55.664231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:06.762 [2024-04-15 18:06:55.664251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:06.762 [2024-04-15 18:06:55.664678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:06.762 [2024-04-15 18:06:55.664707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:06.762 [2024-04-15 18:06:55.664732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:06.762 [2024-04-15 18:06:55.664751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:06.763 [2024-04-15 18:06:55.665206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:06.763 [2024-04-15 18:06:55.665234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:06.763 [2024-04-15 18:06:55.665260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:06.763 [2024-04-15 18:06:55.665278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:06.763 passed 00:20:07.020 Test: blockdev nvme passthru rw ...passed 00:20:07.020 Test: blockdev nvme passthru vendor specific ...[2024-04-15 18:06:55.747506] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:07.020 [2024-04-15 18:06:55.747540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:07.020 [2024-04-15 18:06:55.747866] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:07.020 [2024-04-15 18:06:55.747892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:07.020 [2024-04-15 18:06:55.748182] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:07.020 [2024-04-15 18:06:55.748211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:07.020 [2024-04-15 18:06:55.748499] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:07.020 [2024-04-15 18:06:55.748526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:07.020 passed 00:20:07.020 Test: blockdev nvme admin passthru ...passed 00:20:07.020 Test: blockdev copy ...passed 00:20:07.020 00:20:07.020 Run Summary: Type Total Ran Passed Failed Inactive 00:20:07.020 suites 1 1 n/a 0 0 00:20:07.020 tests 23 23 23 0 0 00:20:07.020 asserts 152 152 152 0 n/a 00:20:07.020 00:20:07.020 Elapsed time = 1.333 seconds 00:20:07.278 18:06:56 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:07.278 18:06:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:07.278 18:06:56 -- common/autotest_common.sh@10 -- # set +x 00:20:07.278 18:06:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:07.278 18:06:56 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:20:07.278 18:06:56 -- target/bdevio.sh@30 -- # nvmftestfini 00:20:07.278 18:06:56 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:07.278 18:06:56 -- nvmf/common.sh@117 -- # sync 00:20:07.278 18:06:56 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:07.278 18:06:56 -- nvmf/common.sh@120 -- # set +e 00:20:07.278 18:06:56 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:07.278 18:06:56 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:07.278 rmmod nvme_tcp 00:20:07.278 rmmod nvme_fabrics 00:20:07.278 rmmod nvme_keyring 00:20:07.278 18:06:56 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:07.278 18:06:56 -- nvmf/common.sh@124 -- # set -e 00:20:07.278 18:06:56 -- nvmf/common.sh@125 -- # return 0 00:20:07.278 18:06:56 -- nvmf/common.sh@478 -- # '[' -n 3341604 ']' 00:20:07.278 18:06:56 -- nvmf/common.sh@479 -- # killprocess 3341604 00:20:07.278 18:06:56 -- common/autotest_common.sh@936 -- # '[' -z 3341604 ']' 00:20:07.278 18:06:56 -- common/autotest_common.sh@940 -- # kill -0 3341604 00:20:07.278 18:06:56 -- common/autotest_common.sh@941 -- # uname 00:20:07.536 18:06:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:07.536 18:06:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3341604 00:20:07.536 18:06:56 -- 
common/autotest_common.sh@942 -- # process_name=reactor_3 00:20:07.536 18:06:56 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:20:07.536 18:06:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3341604' 00:20:07.536 killing process with pid 3341604 00:20:07.536 18:06:56 -- common/autotest_common.sh@955 -- # kill 3341604 00:20:07.536 18:06:56 -- common/autotest_common.sh@960 -- # wait 3341604 00:20:07.793 18:06:56 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:07.793 18:06:56 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:07.794 18:06:56 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:07.794 18:06:56 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:07.794 18:06:56 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:07.794 18:06:56 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:07.794 18:06:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:07.794 18:06:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:10.323 18:06:58 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:10.323 00:20:10.323 real 0m6.985s 00:20:10.323 user 0m11.859s 00:20:10.323 sys 0m2.871s 00:20:10.323 18:06:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:10.323 18:06:58 -- common/autotest_common.sh@10 -- # set +x 00:20:10.323 ************************************ 00:20:10.323 END TEST nvmf_bdevio_no_huge 00:20:10.323 ************************************ 00:20:10.323 18:06:58 -- nvmf/nvmf.sh@60 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:10.323 18:06:58 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:10.323 18:06:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:10.323 18:06:58 -- common/autotest_common.sh@10 -- # set +x 00:20:10.323 ************************************ 00:20:10.323 START TEST nvmf_tls 00:20:10.323 ************************************ 00:20:10.323 18:06:58 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:10.323 * Looking for test storage... 
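nvmftestfini, traced just above, unwinds the fixture in reverse: kill the target, unload the kernel initiator stack, then strip the namespace plumbing. A rough sketch of the effective steps (the netns removal is folded into _remove_spdk_ns, whose body is not shown in this log, so that line is an assumption):

  kill "$nvmfpid" && wait "$nvmfpid"   # killprocess: stop the nvmf_tgt app and reap it
  modprobe -v -r nvme-tcp              # rmmod nvme_tcp (plus nvme_fabrics / nvme_keyring deps)
  modprobe -v -r nvme-fabrics
  ip netns delete cvl_0_0_ns_spdk      # assumed effect of _remove_spdk_ns
  ip -4 addr flush cvl_0_1             # drop the initiator-side address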
00:20:10.323 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:10.323 18:06:58 -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:10.323 18:06:58 -- nvmf/common.sh@7 -- # uname -s 00:20:10.323 18:06:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:10.323 18:06:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:10.323 18:06:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:10.323 18:06:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:10.323 18:06:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:10.323 18:06:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:10.323 18:06:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:10.323 18:06:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:10.323 18:06:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:10.323 18:06:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:10.323 18:06:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:10.323 18:06:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:20:10.323 18:06:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:10.323 18:06:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:10.323 18:06:58 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:10.323 18:06:58 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:10.323 18:06:58 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:10.323 18:06:58 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:10.323 18:06:58 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:10.323 18:06:58 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:10.323 18:06:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:10.323 18:06:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:10.323 18:06:58 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:10.323 18:06:58 -- paths/export.sh@5 -- # export PATH 00:20:10.323 18:06:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:10.323 18:06:58 -- nvmf/common.sh@47 -- # : 0 00:20:10.323 18:06:58 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:10.323 18:06:58 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:10.323 18:06:58 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:10.323 18:06:58 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:10.323 18:06:58 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:10.324 18:06:58 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:10.324 18:06:58 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:10.324 18:06:58 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:10.324 18:06:58 -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:10.324 18:06:59 -- target/tls.sh@62 -- # nvmftestinit 00:20:10.324 18:06:59 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:10.324 18:06:59 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:10.324 18:06:59 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:10.324 18:06:59 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:10.324 18:06:59 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:10.324 18:06:59 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:10.324 18:06:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:10.324 18:06:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:10.324 18:06:59 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:20:10.324 18:06:59 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:20:10.324 18:06:59 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:10.324 18:06:59 -- common/autotest_common.sh@10 -- # set +x 00:20:12.851 18:07:01 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:12.851 18:07:01 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:12.851 18:07:01 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:12.851 18:07:01 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:12.851 18:07:01 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:12.851 18:07:01 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:12.851 18:07:01 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:12.851 18:07:01 -- nvmf/common.sh@295 -- # net_devs=() 00:20:12.851 18:07:01 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:12.851 18:07:01 -- nvmf/common.sh@296 -- # e810=() 00:20:12.851 
18:07:01 -- nvmf/common.sh@296 -- # local -ga e810 00:20:12.852 18:07:01 -- nvmf/common.sh@297 -- # x722=() 00:20:12.852 18:07:01 -- nvmf/common.sh@297 -- # local -ga x722 00:20:12.852 18:07:01 -- nvmf/common.sh@298 -- # mlx=() 00:20:12.852 18:07:01 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:12.852 18:07:01 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:12.852 18:07:01 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:12.852 18:07:01 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:12.852 18:07:01 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:12.852 18:07:01 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:12.852 18:07:01 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:12.852 18:07:01 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:12.852 18:07:01 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:12.852 18:07:01 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:12.852 18:07:01 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:12.852 18:07:01 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:12.852 18:07:01 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:12.852 18:07:01 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:12.852 18:07:01 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:12.852 18:07:01 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:12.852 18:07:01 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:12.852 18:07:01 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:12.852 18:07:01 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:12.852 18:07:01 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:20:12.852 Found 0000:84:00.0 (0x8086 - 0x159b) 00:20:12.852 18:07:01 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:12.852 18:07:01 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:12.852 18:07:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:12.852 18:07:01 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:12.852 18:07:01 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:12.852 18:07:01 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:12.852 18:07:01 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:20:12.852 Found 0000:84:00.1 (0x8086 - 0x159b) 00:20:12.852 18:07:01 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:12.852 18:07:01 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:12.852 18:07:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:12.852 18:07:01 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:12.852 18:07:01 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:12.852 18:07:01 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:12.852 18:07:01 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:12.852 18:07:01 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:12.852 18:07:01 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:12.852 18:07:01 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:12.852 18:07:01 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:12.852 18:07:01 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:12.852 18:07:01 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:20:12.852 Found net devices under 
0000:84:00.0: cvl_0_0 00:20:12.852 18:07:01 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:12.852 18:07:01 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:12.852 18:07:01 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:12.852 18:07:01 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:12.852 18:07:01 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:12.852 18:07:01 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:20:12.852 Found net devices under 0000:84:00.1: cvl_0_1 00:20:12.852 18:07:01 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:12.852 18:07:01 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:20:12.852 18:07:01 -- nvmf/common.sh@403 -- # is_hw=yes 00:20:12.852 18:07:01 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:20:12.852 18:07:01 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:20:12.852 18:07:01 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:20:12.852 18:07:01 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:12.852 18:07:01 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:12.852 18:07:01 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:12.852 18:07:01 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:12.852 18:07:01 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:12.852 18:07:01 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:12.852 18:07:01 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:12.852 18:07:01 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:12.852 18:07:01 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:12.852 18:07:01 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:12.852 18:07:01 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:12.852 18:07:01 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:12.852 18:07:01 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:12.852 18:07:01 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:12.852 18:07:01 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:12.852 18:07:01 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:12.852 18:07:01 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:12.852 18:07:01 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:12.852 18:07:01 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:12.852 18:07:01 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:12.852 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:12.852 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.139 ms 00:20:12.852 00:20:12.852 --- 10.0.0.2 ping statistics --- 00:20:12.852 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:12.852 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:20:12.852 18:07:01 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:12.852 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:12.852 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:20:12.852 00:20:12.852 --- 10.0.0.1 ping statistics --- 00:20:12.852 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:12.852 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:20:12.852 18:07:01 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:12.852 18:07:01 -- nvmf/common.sh@411 -- # return 0 00:20:12.852 18:07:01 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:12.852 18:07:01 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:12.852 18:07:01 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:12.852 18:07:01 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:12.852 18:07:01 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:12.852 18:07:01 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:12.852 18:07:01 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:12.852 18:07:01 -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:20:12.852 18:07:01 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:12.852 18:07:01 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:12.852 18:07:01 -- common/autotest_common.sh@10 -- # set +x 00:20:12.852 18:07:01 -- nvmf/common.sh@470 -- # nvmfpid=3343858 00:20:12.852 18:07:01 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:20:12.852 18:07:01 -- nvmf/common.sh@471 -- # waitforlisten 3343858 00:20:12.852 18:07:01 -- common/autotest_common.sh@817 -- # '[' -z 3343858 ']' 00:20:12.852 18:07:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:12.852 18:07:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:12.852 18:07:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:12.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:12.852 18:07:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:12.852 18:07:01 -- common/autotest_common.sh@10 -- # set +x 00:20:12.852 [2024-04-15 18:07:01.737082] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:20:12.852 [2024-04-15 18:07:01.737171] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:12.852 EAL: No free 2048 kB hugepages reported on node 1 00:20:13.110 [2024-04-15 18:07:01.816893] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:13.110 [2024-04-15 18:07:01.915324] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:13.110 [2024-04-15 18:07:01.915392] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:13.110 [2024-04-15 18:07:01.915410] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:13.110 [2024-04-15 18:07:01.915424] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:13.110 [2024-04-15 18:07:01.915437] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
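This time nvmfappstart passes --wait-for-rpc: the ssl socket implementation has to be configured before the transport starts creating sockets, so the app is held at the RPC-only stage while the test drives it. The trace that follows boils down to (paths and values as logged; the tls_version probe reads back 0 before anything is pinned):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc sock_set_default_impl -i ssl                          # route new sockets through the ssl impl
  $rpc sock_impl_get_options -i ssl | jq -r .tls_version     # 0 until set
  $rpc sock_impl_set_options -i ssl --tls-version 13         # pin TLS 1.3 (the test also exercises 7)
  $rpc sock_impl_set_options -i ssl --enable-ktls            # toggled on, verified, then disabled
  $rpc sock_impl_set_options -i ssl --disable-ktls
  $rpc framework_start_init                                  # leave the wait-for-rpc hold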
00:20:13.110 [2024-04-15 18:07:01.915482] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:13.110 18:07:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:13.110 18:07:01 -- common/autotest_common.sh@850 -- # return 0 00:20:13.110 18:07:01 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:13.110 18:07:01 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:13.110 18:07:01 -- common/autotest_common.sh@10 -- # set +x 00:20:13.110 18:07:02 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:13.110 18:07:02 -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:20:13.110 18:07:02 -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:20:13.674 true 00:20:13.674 18:07:02 -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:13.674 18:07:02 -- target/tls.sh@73 -- # jq -r .tls_version 00:20:14.239 18:07:02 -- target/tls.sh@73 -- # version=0 00:20:14.239 18:07:02 -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:20:14.239 18:07:02 -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:14.239 18:07:03 -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:14.239 18:07:03 -- target/tls.sh@81 -- # jq -r .tls_version 00:20:14.805 18:07:03 -- target/tls.sh@81 -- # version=13 00:20:14.805 18:07:03 -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:20:14.805 18:07:03 -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:20:15.063 18:07:03 -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:15.063 18:07:03 -- target/tls.sh@89 -- # jq -r .tls_version 00:20:15.332 18:07:04 -- target/tls.sh@89 -- # version=7 00:20:15.332 18:07:04 -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:20:15.332 18:07:04 -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:15.332 18:07:04 -- target/tls.sh@96 -- # jq -r .enable_ktls 00:20:15.627 18:07:04 -- target/tls.sh@96 -- # ktls=false 00:20:15.627 18:07:04 -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:20:15.627 18:07:04 -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:20:15.896 18:07:04 -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:15.896 18:07:04 -- target/tls.sh@104 -- # jq -r .enable_ktls 00:20:16.154 18:07:05 -- target/tls.sh@104 -- # ktls=true 00:20:16.154 18:07:05 -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:20:16.154 18:07:05 -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:20:16.411 18:07:05 -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:16.411 18:07:05 -- target/tls.sh@112 -- # jq -r .enable_ktls 00:20:16.979 18:07:05 -- target/tls.sh@112 -- # ktls=false 00:20:16.979 18:07:05 -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:20:16.979 18:07:05 -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 
00:20:16.979 18:07:05 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:20:16.979 18:07:05 -- nvmf/common.sh@691 -- # local prefix key digest 00:20:16.979 18:07:05 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:20:16.979 18:07:05 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:20:16.979 18:07:05 -- nvmf/common.sh@693 -- # digest=1 00:20:16.979 18:07:05 -- nvmf/common.sh@694 -- # python - 00:20:16.979 18:07:05 -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:16.979 18:07:05 -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:20:16.979 18:07:05 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:20:16.979 18:07:05 -- nvmf/common.sh@691 -- # local prefix key digest 00:20:16.979 18:07:05 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:20:16.979 18:07:05 -- nvmf/common.sh@693 -- # key=ffeeddccbbaa99887766554433221100 00:20:16.979 18:07:05 -- nvmf/common.sh@693 -- # digest=1 00:20:16.979 18:07:05 -- nvmf/common.sh@694 -- # python - 00:20:16.979 18:07:05 -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:16.979 18:07:05 -- target/tls.sh@121 -- # mktemp 00:20:16.979 18:07:05 -- target/tls.sh@121 -- # key_path=/tmp/tmp.XBG1NwBBl2 00:20:16.979 18:07:05 -- target/tls.sh@122 -- # mktemp 00:20:16.979 18:07:05 -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.Mbd291ZiPe 00:20:16.979 18:07:05 -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:16.979 18:07:05 -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:16.979 18:07:05 -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.XBG1NwBBl2 00:20:16.979 18:07:05 -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.Mbd291ZiPe 00:20:16.979 18:07:05 -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:17.300 18:07:06 -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:20:17.557 18:07:06 -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.XBG1NwBBl2 00:20:17.557 18:07:06 -- target/tls.sh@49 -- # local key=/tmp/tmp.XBG1NwBBl2 00:20:17.557 18:07:06 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:17.815 [2024-04-15 18:07:06.681148] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:17.815 18:07:06 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:18.073 18:07:06 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:18.330 [2024-04-15 18:07:07.246666] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:18.330 [2024-04-15 18:07:07.246940] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:18.330 18:07:07 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:18.589 malloc0 00:20:18.846 18:07:07 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:19.135 18:07:07 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.XBG1NwBBl2 00:20:19.393 [2024-04-15 18:07:08.101880] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:19.393 18:07:08 -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.XBG1NwBBl2 00:20:19.393 EAL: No free 2048 kB hugepages reported on node 1 00:20:29.362 Initializing NVMe Controllers 00:20:29.362 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:29.362 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:29.362 Initialization complete. Launching workers. 00:20:29.362 ======================================================== 00:20:29.362 Latency(us) 00:20:29.362 Device Information : IOPS MiB/s Average min max 00:20:29.362 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7341.27 28.68 8720.77 1253.69 13127.03 00:20:29.362 ======================================================== 00:20:29.362 Total : 7341.27 28.68 8720.77 1253.69 13127.03 00:20:29.362 00:20:29.362 18:07:18 -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.XBG1NwBBl2 00:20:29.362 18:07:18 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:29.362 18:07:18 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:29.362 18:07:18 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:29.362 18:07:18 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.XBG1NwBBl2' 00:20:29.362 18:07:18 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:29.362 18:07:18 -- target/tls.sh@28 -- # bdevperf_pid=3345751 00:20:29.362 18:07:18 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:29.362 18:07:18 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:29.362 18:07:18 -- target/tls.sh@31 -- # waitforlisten 3345751 /var/tmp/bdevperf.sock 00:20:29.362 18:07:18 -- common/autotest_common.sh@817 -- # '[' -z 3345751 ']' 00:20:29.362 18:07:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:29.362 18:07:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:29.362 18:07:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:29.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:29.362 18:07:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:29.362 18:07:18 -- common/autotest_common.sh@10 -- # set +x 00:20:29.362 [2024-04-15 18:07:18.273629] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
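format_interchange_psk, traced earlier, wraps raw key material into the interchange form seen throughout this run (NVMeTLSkey-1:01:<base64>:). A minimal stand-alone sketch, assuming the "01" field mirrors the digest argument and the trailing four bytes are a little-endian CRC-32 of the key bytes; only the final string is visible in the log, so the CRC byte order here is an assumption:

  key=00112233445566778899aabbccddeeff
  python3 -c 'import base64,sys,zlib; raw=sys.argv[1].encode(); crc=zlib.crc32(raw).to_bytes(4,"little"); print("NVMeTLSkey-1:01:"+base64.b64encode(raw+crc).decode()+":")' "$key"

The result lands in a chmod-0600 temp file (/tmp/tmp.XBG1NwBBl2 here), is registered on the target with nvmf_subsystem_add_host --psk, and is handed to the initiator side via --psk-path (spdk_nvme_perf) or --psk (bdev_nvme_attach_controller).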
00:20:29.362 [2024-04-15 18:07:18.273727] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3345751 ] 00:20:29.362 EAL: No free 2048 kB hugepages reported on node 1 00:20:29.621 [2024-04-15 18:07:18.346040] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:29.621 [2024-04-15 18:07:18.443215] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:29.879 18:07:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:29.879 18:07:18 -- common/autotest_common.sh@850 -- # return 0 00:20:29.879 18:07:18 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.XBG1NwBBl2 00:20:30.445 [2024-04-15 18:07:19.171357] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:30.445 [2024-04-15 18:07:19.171488] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:30.445 TLSTESTn1 00:20:30.445 18:07:19 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:30.445 Running I/O for 10 seconds... 00:20:42.655 00:20:42.655 Latency(us) 00:20:42.655 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:42.655 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:42.655 Verification LBA range: start 0x0 length 0x2000 00:20:42.655 TLSTESTn1 : 10.10 1518.64 5.93 0.00 0.00 83954.10 6165.24 135149.80 00:20:42.655 =================================================================================================================== 00:20:42.655 Total : 1518.64 5.93 0.00 0.00 83954.10 6165.24 135149.80 00:20:42.655 0 00:20:42.655 18:07:29 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:42.655 18:07:29 -- target/tls.sh@45 -- # killprocess 3345751 00:20:42.655 18:07:29 -- common/autotest_common.sh@936 -- # '[' -z 3345751 ']' 00:20:42.655 18:07:29 -- common/autotest_common.sh@940 -- # kill -0 3345751 00:20:42.655 18:07:29 -- common/autotest_common.sh@941 -- # uname 00:20:42.655 18:07:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:42.655 18:07:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3345751 00:20:42.655 18:07:29 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:20:42.655 18:07:29 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:20:42.655 18:07:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3345751' 00:20:42.655 killing process with pid 3345751 00:20:42.655 18:07:29 -- common/autotest_common.sh@955 -- # kill 3345751 00:20:42.655 Received shutdown signal, test time was about 10.000000 seconds 00:20:42.655 00:20:42.655 Latency(us) 00:20:42.655 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:42.655 =================================================================================================================== 00:20:42.655 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:42.655 [2024-04-15 18:07:29.561739] app.c: 937:log_deprecation_hits: *WARNING*: 
nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:42.655 18:07:29 -- common/autotest_common.sh@960 -- # wait 3345751 00:20:42.655 18:07:29 -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Mbd291ZiPe 00:20:42.655 18:07:29 -- common/autotest_common.sh@638 -- # local es=0 00:20:42.655 18:07:29 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Mbd291ZiPe 00:20:42.655 18:07:29 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:20:42.655 18:07:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:42.655 18:07:29 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:20:42.655 18:07:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:42.655 18:07:29 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Mbd291ZiPe 00:20:42.655 18:07:29 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:42.655 18:07:29 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:42.655 18:07:29 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:42.655 18:07:29 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.Mbd291ZiPe' 00:20:42.655 18:07:29 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:42.655 18:07:29 -- target/tls.sh@28 -- # bdevperf_pid=3347076 00:20:42.655 18:07:29 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:42.655 18:07:29 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:42.655 18:07:29 -- target/tls.sh@31 -- # waitforlisten 3347076 /var/tmp/bdevperf.sock 00:20:42.655 18:07:29 -- common/autotest_common.sh@817 -- # '[' -z 3347076 ']' 00:20:42.655 18:07:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:42.655 18:07:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:42.655 18:07:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:42.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:42.655 18:07:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:42.655 18:07:29 -- common/autotest_common.sh@10 -- # set +x 00:20:42.655 [2024-04-15 18:07:29.842001] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
00:20:42.655 [2024-04-15 18:07:29.842116] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3347076 ] 00:20:42.655 EAL: No free 2048 kB hugepages reported on node 1 00:20:42.655 [2024-04-15 18:07:29.912062] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:42.655 [2024-04-15 18:07:29.992026] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:42.655 18:07:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:42.655 18:07:30 -- common/autotest_common.sh@850 -- # return 0 00:20:42.655 18:07:30 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Mbd291ZiPe 00:20:42.655 [2024-04-15 18:07:30.447590] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:42.655 [2024-04-15 18:07:30.447721] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:42.655 [2024-04-15 18:07:30.456262] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:42.655 [2024-04-15 18:07:30.456814] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1113c30 (107): Transport endpoint is not connected 00:20:42.655 [2024-04-15 18:07:30.457794] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1113c30 (9): Bad file descriptor 00:20:42.655 [2024-04-15 18:07:30.458794] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:42.655 [2024-04-15 18:07:30.458815] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:42.655 [2024-04-15 18:07:30.458829] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:20:42.655 request: 00:20:42.655 { 00:20:42.655 "name": "TLSTEST", 00:20:42.655 "trtype": "tcp", 00:20:42.655 "traddr": "10.0.0.2", 00:20:42.655 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:42.655 "adrfam": "ipv4", 00:20:42.655 "trsvcid": "4420", 00:20:42.655 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:42.655 "psk": "/tmp/tmp.Mbd291ZiPe", 00:20:42.655 "method": "bdev_nvme_attach_controller", 00:20:42.655 "req_id": 1 00:20:42.655 } 00:20:42.655 Got JSON-RPC error response 00:20:42.655 response: 00:20:42.655 { 00:20:42.655 "code": -32602, 00:20:42.655 "message": "Invalid parameters" 00:20:42.655 } 00:20:42.655 18:07:30 -- target/tls.sh@36 -- # killprocess 3347076 00:20:42.655 18:07:30 -- common/autotest_common.sh@936 -- # '[' -z 3347076 ']' 00:20:42.655 18:07:30 -- common/autotest_common.sh@940 -- # kill -0 3347076 00:20:42.655 18:07:30 -- common/autotest_common.sh@941 -- # uname 00:20:42.655 18:07:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:42.655 18:07:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3347076 00:20:42.655 18:07:30 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:20:42.655 18:07:30 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:20:42.655 18:07:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3347076' 00:20:42.655 killing process with pid 3347076 00:20:42.655 18:07:30 -- common/autotest_common.sh@955 -- # kill 3347076 00:20:42.655 Received shutdown signal, test time was about 10.000000 seconds 00:20:42.655 00:20:42.655 Latency(us) 00:20:42.655 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:42.655 =================================================================================================================== 00:20:42.655 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:42.656 [2024-04-15 18:07:30.525311] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:42.656 18:07:30 -- common/autotest_common.sh@960 -- # wait 3347076 00:20:42.656 18:07:30 -- target/tls.sh@37 -- # return 1 00:20:42.656 18:07:30 -- common/autotest_common.sh@641 -- # es=1 00:20:42.656 18:07:30 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:42.656 18:07:30 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:20:42.656 18:07:30 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:42.656 18:07:30 -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.XBG1NwBBl2 00:20:42.656 18:07:30 -- common/autotest_common.sh@638 -- # local es=0 00:20:42.656 18:07:30 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.XBG1NwBBl2 00:20:42.656 18:07:30 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:20:42.656 18:07:30 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:42.656 18:07:30 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:20:42.656 18:07:30 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:42.656 18:07:30 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.XBG1NwBBl2 00:20:42.656 18:07:30 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:42.656 18:07:30 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:42.656 18:07:30 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 
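
The request/response pair just above is the raw JSON-RPC exchange that scripts/rpc.py drives over the bdevperf UNIX socket. Below is a minimal sketch of that exchange, assuming the plain JSON-object framing SPDK's RPC server uses; the spdk_rpc helper name and the read-until-parse loop are illustrative, not part of the suite.

    import json
    import socket

    def spdk_rpc(sock_path, method, params, req_id=1):
        # Send one JSON-RPC 2.0 object, then read until the reply parses;
        # the response is a single JSON object with no length/newline framing.
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
            s.connect(sock_path)
            s.sendall(json.dumps({"jsonrpc": "2.0", "id": req_id,
                                  "method": method, "params": params}).encode())
            buf = b""
            while True:
                chunk = s.recv(4096)
                if not chunk:
                    raise ConnectionError("socket closed before a full reply")
                buf += chunk
                try:
                    return json.loads(buf.decode())
                except ValueError:
                    continue  # reply not complete yet, keep reading

    # Mirrors the failing attach above: a PSK the target does not recognize
    # yields error code -32602 ("Invalid parameters").
    resp = spdk_rpc("/var/tmp/bdevperf.sock", "bdev_nvme_attach_controller", {
        "name": "TLSTEST", "trtype": "tcp", "traddr": "10.0.0.2",
        "adrfam": "ipv4", "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "psk": "/tmp/tmp.Mbd291ZiPe",
    })
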
00:20:42.656 18:07:30 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.XBG1NwBBl2' 00:20:42.656 18:07:30 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:42.656 18:07:30 -- target/tls.sh@28 -- # bdevperf_pid=3347216 00:20:42.656 18:07:30 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:42.656 18:07:30 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:42.656 18:07:30 -- target/tls.sh@31 -- # waitforlisten 3347216 /var/tmp/bdevperf.sock 00:20:42.656 18:07:30 -- common/autotest_common.sh@817 -- # '[' -z 3347216 ']' 00:20:42.656 18:07:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:42.656 18:07:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:42.656 18:07:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:42.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:42.656 18:07:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:42.656 18:07:30 -- common/autotest_common.sh@10 -- # set +x 00:20:42.656 [2024-04-15 18:07:30.826545] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:20:42.656 [2024-04-15 18:07:30.826724] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3347216 ] 00:20:42.656 EAL: No free 2048 kB hugepages reported on node 1 00:20:42.656 [2024-04-15 18:07:30.934054] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:42.656 [2024-04-15 18:07:31.028396] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:42.656 18:07:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:42.656 18:07:31 -- common/autotest_common.sh@850 -- # return 0 00:20:42.656 18:07:31 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.XBG1NwBBl2 00:20:42.915 [2024-04-15 18:07:31.647402] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:42.915 [2024-04-15 18:07:31.647519] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:42.915 [2024-04-15 18:07:31.659123] tcp.c: 878:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:42.915 [2024-04-15 18:07:31.659158] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:42.915 [2024-04-15 18:07:31.659201] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:42.915 [2024-04-15 18:07:31.659392] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2250c30 (107): Transport endpoint is not connected 00:20:42.915 [2024-04-15 18:07:31.660382] 
nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2250c30 (9): Bad file descriptor 00:20:42.915 [2024-04-15 18:07:31.661382] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:42.915 [2024-04-15 18:07:31.661403] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:42.916 [2024-04-15 18:07:31.661417] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:42.916 request: 00:20:42.916 { 00:20:42.916 "name": "TLSTEST", 00:20:42.916 "trtype": "tcp", 00:20:42.916 "traddr": "10.0.0.2", 00:20:42.916 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:42.916 "adrfam": "ipv4", 00:20:42.916 "trsvcid": "4420", 00:20:42.916 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:42.916 "psk": "/tmp/tmp.XBG1NwBBl2", 00:20:42.916 "method": "bdev_nvme_attach_controller", 00:20:42.916 "req_id": 1 00:20:42.916 } 00:20:42.916 Got JSON-RPC error response 00:20:42.916 response: 00:20:42.916 { 00:20:42.916 "code": -32602, 00:20:42.916 "message": "Invalid parameters" 00:20:42.916 } 00:20:42.916 18:07:31 -- target/tls.sh@36 -- # killprocess 3347216 00:20:42.916 18:07:31 -- common/autotest_common.sh@936 -- # '[' -z 3347216 ']' 00:20:42.916 18:07:31 -- common/autotest_common.sh@940 -- # kill -0 3347216 00:20:42.916 18:07:31 -- common/autotest_common.sh@941 -- # uname 00:20:42.916 18:07:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:42.916 18:07:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3347216 00:20:42.916 18:07:31 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:20:42.916 18:07:31 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:20:42.916 18:07:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3347216' 00:20:42.916 killing process with pid 3347216 00:20:42.916 18:07:31 -- common/autotest_common.sh@955 -- # kill 3347216 00:20:42.916 Received shutdown signal, test time was about 10.000000 seconds 00:20:42.916 00:20:42.916 Latency(us) 00:20:42.916 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:42.916 =================================================================================================================== 00:20:42.916 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:42.916 [2024-04-15 18:07:31.712839] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:42.916 18:07:31 -- common/autotest_common.sh@960 -- # wait 3347216 00:20:43.176 18:07:31 -- target/tls.sh@37 -- # return 1 00:20:43.176 18:07:31 -- common/autotest_common.sh@641 -- # es=1 00:20:43.177 18:07:31 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:43.177 18:07:31 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:20:43.177 18:07:31 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:43.177 18:07:31 -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.XBG1NwBBl2 00:20:43.177 18:07:31 -- common/autotest_common.sh@638 -- # local es=0 00:20:43.177 18:07:31 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.XBG1NwBBl2 00:20:43.177 18:07:31 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:20:43.177 18:07:31 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:43.177 18:07:31 -- 
common/autotest_common.sh@630 -- # type -t run_bdevperf 00:20:43.177 18:07:31 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:43.177 18:07:31 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.XBG1NwBBl2 00:20:43.177 18:07:31 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:43.177 18:07:31 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:20:43.177 18:07:31 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:43.177 18:07:31 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.XBG1NwBBl2' 00:20:43.177 18:07:31 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:43.177 18:07:31 -- target/tls.sh@28 -- # bdevperf_pid=3347361 00:20:43.177 18:07:31 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:43.177 18:07:31 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:43.177 18:07:31 -- target/tls.sh@31 -- # waitforlisten 3347361 /var/tmp/bdevperf.sock 00:20:43.177 18:07:31 -- common/autotest_common.sh@817 -- # '[' -z 3347361 ']' 00:20:43.177 18:07:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:43.177 18:07:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:43.177 18:07:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:43.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:43.177 18:07:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:43.177 18:07:31 -- common/autotest_common.sh@10 -- # set +x 00:20:43.177 [2024-04-15 18:07:32.019102] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
00:20:43.177 [2024-04-15 18:07:32.019279] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3347361 ] 00:20:43.177 EAL: No free 2048 kB hugepages reported on node 1 00:20:43.177 [2024-04-15 18:07:32.125454] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:43.444 [2024-04-15 18:07:32.221505] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:43.703 18:07:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:43.703 18:07:32 -- common/autotest_common.sh@850 -- # return 0 00:20:43.703 18:07:32 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.XBG1NwBBl2 00:20:44.274 [2024-04-15 18:07:33.012979] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:44.274 [2024-04-15 18:07:33.013145] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:44.274 [2024-04-15 18:07:33.022784] tcp.c: 878:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:44.274 [2024-04-15 18:07:33.022825] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:44.274 [2024-04-15 18:07:33.022874] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:44.274 [2024-04-15 18:07:33.023586] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bf9c30 (107): Transport endpoint is not connected 00:20:44.274 [2024-04-15 18:07:33.024576] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bf9c30 (9): Bad file descriptor 00:20:44.274 [2024-04-15 18:07:33.025575] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:20:44.274 [2024-04-15 18:07:33.025600] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:44.274 [2024-04-15 18:07:33.025618] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:20:44.274 request: 00:20:44.274 { 00:20:44.274 "name": "TLSTEST", 00:20:44.274 "trtype": "tcp", 00:20:44.274 "traddr": "10.0.0.2", 00:20:44.274 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:44.274 "adrfam": "ipv4", 00:20:44.274 "trsvcid": "4420", 00:20:44.274 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:44.274 "psk": "/tmp/tmp.XBG1NwBBl2", 00:20:44.274 "method": "bdev_nvme_attach_controller", 00:20:44.274 "req_id": 1 00:20:44.274 } 00:20:44.274 Got JSON-RPC error response 00:20:44.274 response: 00:20:44.274 { 00:20:44.274 "code": -32602, 00:20:44.274 "message": "Invalid parameters" 00:20:44.274 } 00:20:44.274 18:07:33 -- target/tls.sh@36 -- # killprocess 3347361 00:20:44.274 18:07:33 -- common/autotest_common.sh@936 -- # '[' -z 3347361 ']' 00:20:44.274 18:07:33 -- common/autotest_common.sh@940 -- # kill -0 3347361 00:20:44.274 18:07:33 -- common/autotest_common.sh@941 -- # uname 00:20:44.274 18:07:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:44.274 18:07:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3347361 00:20:44.274 18:07:33 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:20:44.274 18:07:33 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:20:44.274 18:07:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3347361' 00:20:44.274 killing process with pid 3347361 00:20:44.274 18:07:33 -- common/autotest_common.sh@955 -- # kill 3347361 00:20:44.274 Received shutdown signal, test time was about 10.000000 seconds 00:20:44.274 00:20:44.274 Latency(us) 00:20:44.274 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:44.274 =================================================================================================================== 00:20:44.274 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:44.274 [2024-04-15 18:07:33.079617] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:44.274 18:07:33 -- common/autotest_common.sh@960 -- # wait 3347361 00:20:44.532 18:07:33 -- target/tls.sh@37 -- # return 1 00:20:44.532 18:07:33 -- common/autotest_common.sh@641 -- # es=1 00:20:44.532 18:07:33 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:44.532 18:07:33 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:20:44.532 18:07:33 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:44.532 18:07:33 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:44.532 18:07:33 -- common/autotest_common.sh@638 -- # local es=0 00:20:44.532 18:07:33 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:44.532 18:07:33 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:20:44.532 18:07:33 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:44.532 18:07:33 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:20:44.532 18:07:33 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:44.532 18:07:33 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:44.532 18:07:33 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:44.532 18:07:33 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:44.532 18:07:33 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:44.532 18:07:33 -- target/tls.sh@23 -- # psk= 
00:20:44.532 18:07:33 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:44.532 18:07:33 -- target/tls.sh@28 -- # bdevperf_pid=3347497 00:20:44.532 18:07:33 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:44.532 18:07:33 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:44.532 18:07:33 -- target/tls.sh@31 -- # waitforlisten 3347497 /var/tmp/bdevperf.sock 00:20:44.532 18:07:33 -- common/autotest_common.sh@817 -- # '[' -z 3347497 ']' 00:20:44.532 18:07:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:44.532 18:07:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:44.533 18:07:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:44.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:44.533 18:07:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:44.533 18:07:33 -- common/autotest_common.sh@10 -- # set +x 00:20:44.533 [2024-04-15 18:07:33.366886] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:20:44.533 [2024-04-15 18:07:33.366976] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3347497 ] 00:20:44.533 EAL: No free 2048 kB hugepages reported on node 1 00:20:44.533 [2024-04-15 18:07:33.436093] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:44.791 [2024-04-15 18:07:33.524536] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:44.791 18:07:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:44.791 18:07:33 -- common/autotest_common.sh@850 -- # return 0 00:20:44.791 18:07:33 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:45.049 [2024-04-15 18:07:33.955202] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:45.049 [2024-04-15 18:07:33.957253] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x247e2c0 (9): Bad file descriptor 00:20:45.049 [2024-04-15 18:07:33.958247] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:45.049 [2024-04-15 18:07:33.958275] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:45.049 [2024-04-15 18:07:33.958292] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:20:45.049 request: 00:20:45.049 { 00:20:45.049 "name": "TLSTEST", 00:20:45.049 "trtype": "tcp", 00:20:45.049 "traddr": "10.0.0.2", 00:20:45.049 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:45.049 "adrfam": "ipv4", 00:20:45.049 "trsvcid": "4420", 00:20:45.049 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:45.049 "method": "bdev_nvme_attach_controller", 00:20:45.049 "req_id": 1 00:20:45.049 } 00:20:45.049 Got JSON-RPC error response 00:20:45.049 response: 00:20:45.049 { 00:20:45.049 "code": -32602, 00:20:45.049 "message": "Invalid parameters" 00:20:45.049 } 00:20:45.049 18:07:33 -- target/tls.sh@36 -- # killprocess 3347497 00:20:45.049 18:07:33 -- common/autotest_common.sh@936 -- # '[' -z 3347497 ']' 00:20:45.049 18:07:33 -- common/autotest_common.sh@940 -- # kill -0 3347497 00:20:45.049 18:07:33 -- common/autotest_common.sh@941 -- # uname 00:20:45.049 18:07:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:45.049 18:07:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3347497 00:20:45.313 18:07:34 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:20:45.313 18:07:34 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:20:45.313 18:07:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3347497' 00:20:45.313 killing process with pid 3347497 00:20:45.313 18:07:34 -- common/autotest_common.sh@955 -- # kill 3347497 00:20:45.313 Received shutdown signal, test time was about 10.000000 seconds 00:20:45.313 00:20:45.313 Latency(us) 00:20:45.313 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:45.313 =================================================================================================================== 00:20:45.313 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:45.313 18:07:34 -- common/autotest_common.sh@960 -- # wait 3347497 00:20:45.313 18:07:34 -- target/tls.sh@37 -- # return 1 00:20:45.313 18:07:34 -- common/autotest_common.sh@641 -- # es=1 00:20:45.313 18:07:34 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:45.313 18:07:34 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:20:45.313 18:07:34 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:45.313 18:07:34 -- target/tls.sh@158 -- # killprocess 3343858 00:20:45.313 18:07:34 -- common/autotest_common.sh@936 -- # '[' -z 3343858 ']' 00:20:45.313 18:07:34 -- common/autotest_common.sh@940 -- # kill -0 3343858 00:20:45.313 18:07:34 -- common/autotest_common.sh@941 -- # uname 00:20:45.313 18:07:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:45.313 18:07:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3343858 00:20:45.313 18:07:34 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:45.313 18:07:34 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:45.313 18:07:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3343858' 00:20:45.313 killing process with pid 3343858 00:20:45.313 18:07:34 -- common/autotest_common.sh@955 -- # kill 3343858 00:20:45.313 [2024-04-15 18:07:34.234334] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:45.313 18:07:34 -- common/autotest_common.sh@960 -- # wait 3343858 00:20:45.624 18:07:34 -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:20:45.624 18:07:34 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 
00112233445566778899aabbccddeeff0011223344556677 2 00:20:45.624 18:07:34 -- nvmf/common.sh@691 -- # local prefix key digest 00:20:45.624 18:07:34 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:20:45.624 18:07:34 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:20:45.624 18:07:34 -- nvmf/common.sh@693 -- # digest=2 00:20:45.624 18:07:34 -- nvmf/common.sh@694 -- # python - 00:20:45.624 18:07:34 -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:45.882 18:07:34 -- target/tls.sh@160 -- # mktemp 00:20:45.882 18:07:34 -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.m2FgGTfm5f 00:20:45.882 18:07:34 -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:45.882 18:07:34 -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.m2FgGTfm5f 00:20:45.882 18:07:34 -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:20:45.882 18:07:34 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:45.882 18:07:34 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:45.882 18:07:34 -- common/autotest_common.sh@10 -- # set +x 00:20:45.882 18:07:34 -- nvmf/common.sh@470 -- # nvmfpid=3347651 00:20:45.882 18:07:34 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:45.883 18:07:34 -- nvmf/common.sh@471 -- # waitforlisten 3347651 00:20:45.883 18:07:34 -- common/autotest_common.sh@817 -- # '[' -z 3347651 ']' 00:20:45.883 18:07:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:45.883 18:07:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:45.883 18:07:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:45.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:45.883 18:07:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:45.883 18:07:34 -- common/autotest_common.sh@10 -- # set +x 00:20:45.883 [2024-04-15 18:07:34.617367] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:20:45.883 [2024-04-15 18:07:34.617461] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:45.883 EAL: No free 2048 kB hugepages reported on node 1 00:20:45.883 [2024-04-15 18:07:34.694002] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:45.883 [2024-04-15 18:07:34.791917] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:45.883 [2024-04-15 18:07:34.791987] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:45.883 [2024-04-15 18:07:34.792004] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:45.883 [2024-04-15 18:07:34.792019] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:45.883 [2024-04-15 18:07:34.792031] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
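
The format_interchange_psk step above turns a raw hex key and a digest id into the NVMeTLSkey-1:02:...: string via an inline python heredoc. The standalone sketch below reproduces what that heredoc appears to compute, namely base64 over the key text plus a 4-byte CRC32 trailer (assumed little-endian); the function body is an inference from the trace, not a copy of nvmf/common.sh.

    import base64
    import zlib

    def format_interchange_psk(key_text: str, digest: int) -> str:
        # Assumed layout: "NVMeTLSkey-1:<digest, two hex digits>:
        #                  <base64(key-text || crc32_le(key-text))>:"
        data = key_text.encode()
        crc = zlib.crc32(data).to_bytes(4, "little")
        return "NVMeTLSkey-1:{:02x}:{}:".format(
            digest, base64.b64encode(data + crc).decode())

    key_long = format_interchange_psk(
        "00112233445566778899aabbccddeeff0011223344556677", 2)
    # The base64 body begins with the encoded key text itself
    # ("MDAxMTIy..."), consistent with the key_long value logged above.
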
00:20:45.883 [2024-04-15 18:07:34.792077] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:46.141 18:07:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:46.141 18:07:34 -- common/autotest_common.sh@850 -- # return 0 00:20:46.141 18:07:34 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:46.141 18:07:34 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:46.141 18:07:34 -- common/autotest_common.sh@10 -- # set +x 00:20:46.141 18:07:34 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:46.141 18:07:34 -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.m2FgGTfm5f 00:20:46.141 18:07:34 -- target/tls.sh@49 -- # local key=/tmp/tmp.m2FgGTfm5f 00:20:46.141 18:07:34 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:46.400 [2024-04-15 18:07:35.205134] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:46.400 18:07:35 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:46.969 18:07:35 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:47.536 [2024-04-15 18:07:36.183811] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:47.536 [2024-04-15 18:07:36.184108] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:47.536 18:07:36 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:47.796 malloc0 00:20:47.796 18:07:36 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:48.364 18:07:37 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.m2FgGTfm5f 00:20:48.622 [2024-04-15 18:07:37.495501] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:48.622 18:07:37 -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.m2FgGTfm5f 00:20:48.622 18:07:37 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:48.622 18:07:37 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:48.622 18:07:37 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:48.622 18:07:37 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.m2FgGTfm5f' 00:20:48.622 18:07:37 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:48.622 18:07:37 -- target/tls.sh@28 -- # bdevperf_pid=3348068 00:20:48.622 18:07:37 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:48.622 18:07:37 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:48.622 18:07:37 -- target/tls.sh@31 -- # waitforlisten 3348068 /var/tmp/bdevperf.sock 00:20:48.622 18:07:37 -- common/autotest_common.sh@817 -- # '[' -z 3348068 ']' 00:20:48.622 18:07:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:48.622 18:07:37 -- 
common/autotest_common.sh@822 -- # local max_retries=100 00:20:48.622 18:07:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:48.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:48.622 18:07:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:48.622 18:07:37 -- common/autotest_common.sh@10 -- # set +x 00:20:48.622 [2024-04-15 18:07:37.560435] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:20:48.622 [2024-04-15 18:07:37.560519] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3348068 ] 00:20:48.880 EAL: No free 2048 kB hugepages reported on node 1 00:20:48.880 [2024-04-15 18:07:37.629136] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:48.880 [2024-04-15 18:07:37.721584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:48.880 18:07:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:48.880 18:07:37 -- common/autotest_common.sh@850 -- # return 0 00:20:48.880 18:07:37 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.m2FgGTfm5f 00:20:49.446 [2024-04-15 18:07:38.198156] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:49.446 [2024-04-15 18:07:38.198287] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:49.446 TLSTESTn1 00:20:49.446 18:07:38 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:49.446 Running I/O for 10 seconds... 
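
For reference, the sequence that produced the working TLSTESTn1 run above (TCP transport, TLS-enabled listener via -k, malloc namespace, PSK-keyed host, then the initiator attach through bdevperf's RPC socket) condenses to the sketch below. The rpc() wrapper and the /tmp/psk.key path are placeholders; the suite drives scripts/rpc.py directly and uses a mktemp'd, 0600-mode key file.

    import subprocess

    RPC = "spdk/scripts/rpc.py"  # placeholder for the full workspace path

    def rpc(*args, sock=None):
        # Thin wrapper over rpc.py; -s selects a non-default RPC socket.
        cmd = [RPC] + (["-s", sock] if sock else []) + list(args)
        subprocess.run(cmd, check=True)

    # Target side (default /var/tmp/spdk.sock): listener with TLS required.
    rpc("nvmf_create_transport", "-t", "tcp", "-o")
    rpc("nvmf_create_subsystem", "nqn.2016-06.io.spdk:cnode1",
        "-s", "SPDK00000000000001", "-m", "10")
    rpc("nvmf_subsystem_add_listener", "nqn.2016-06.io.spdk:cnode1",
        "-t", "tcp", "-a", "10.0.0.2", "-s", "4420", "-k")
    rpc("bdev_malloc_create", "32", "4096", "-b", "malloc0")
    rpc("nvmf_subsystem_add_ns", "nqn.2016-06.io.spdk:cnode1", "malloc0", "-n", "1")
    rpc("nvmf_subsystem_add_host", "nqn.2016-06.io.spdk:cnode1",
        "nqn.2016-06.io.spdk:host1", "--psk", "/tmp/psk.key")

    # Initiator side: attach through bdevperf's RPC socket with the same key;
    # a matching PSK on both ends is what lets TLSTESTn1 come up here.
    rpc("bdev_nvme_attach_controller", "-b", "TLSTEST", "-t", "tcp",
        "-a", "10.0.0.2", "-s", "4420", "-f", "ipv4",
        "-n", "nqn.2016-06.io.spdk:cnode1", "-q", "nqn.2016-06.io.spdk:host1",
        "--psk", "/tmp/psk.key", sock="/var/tmp/bdevperf.sock")
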
00:21:01.644 00:21:01.644 Latency(us) 00:21:01.644 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:01.644 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:01.644 Verification LBA range: start 0x0 length 0x2000 00:21:01.644 TLSTESTn1 : 10.05 2100.17 8.20 0.00 0.00 60798.83 8592.50 89711.50 00:21:01.644 =================================================================================================================== 00:21:01.644 Total : 2100.17 8.20 0.00 0.00 60798.83 8592.50 89711.50 00:21:01.644 0 00:21:01.644 18:07:48 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:01.644 18:07:48 -- target/tls.sh@45 -- # killprocess 3348068 00:21:01.644 18:07:48 -- common/autotest_common.sh@936 -- # '[' -z 3348068 ']' 00:21:01.644 18:07:48 -- common/autotest_common.sh@940 -- # kill -0 3348068 00:21:01.644 18:07:48 -- common/autotest_common.sh@941 -- # uname 00:21:01.644 18:07:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:01.644 18:07:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3348068 00:21:01.644 18:07:48 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:21:01.644 18:07:48 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:21:01.644 18:07:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3348068' 00:21:01.644 killing process with pid 3348068 00:21:01.644 18:07:48 -- common/autotest_common.sh@955 -- # kill 3348068 00:21:01.644 Received shutdown signal, test time was about 10.000000 seconds 00:21:01.644 00:21:01.644 Latency(us) 00:21:01.644 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:01.644 =================================================================================================================== 00:21:01.644 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:01.644 [2024-04-15 18:07:48.503329] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:01.644 18:07:48 -- common/autotest_common.sh@960 -- # wait 3348068 00:21:01.644 18:07:48 -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.m2FgGTfm5f 00:21:01.644 18:07:48 -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.m2FgGTfm5f 00:21:01.644 18:07:48 -- common/autotest_common.sh@638 -- # local es=0 00:21:01.644 18:07:48 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.m2FgGTfm5f 00:21:01.644 18:07:48 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:21:01.644 18:07:48 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:01.644 18:07:48 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:21:01.644 18:07:48 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:01.644 18:07:48 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.m2FgGTfm5f 00:21:01.644 18:07:48 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:01.644 18:07:48 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:01.644 18:07:48 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:01.644 18:07:48 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.m2FgGTfm5f' 00:21:01.644 18:07:48 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:01.644 18:07:48 -- target/tls.sh@28 -- # 
bdevperf_pid=3349384 00:21:01.644 18:07:48 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:01.644 18:07:48 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:01.644 18:07:48 -- target/tls.sh@31 -- # waitforlisten 3349384 /var/tmp/bdevperf.sock 00:21:01.644 18:07:48 -- common/autotest_common.sh@817 -- # '[' -z 3349384 ']' 00:21:01.644 18:07:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:01.644 18:07:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:01.644 18:07:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:01.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:01.644 18:07:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:01.644 18:07:48 -- common/autotest_common.sh@10 -- # set +x 00:21:01.644 [2024-04-15 18:07:48.809710] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:21:01.644 [2024-04-15 18:07:48.809896] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3349384 ] 00:21:01.644 EAL: No free 2048 kB hugepages reported on node 1 00:21:01.644 [2024-04-15 18:07:48.912816] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:01.644 [2024-04-15 18:07:49.005374] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:01.644 18:07:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:01.644 18:07:49 -- common/autotest_common.sh@850 -- # return 0 00:21:01.644 18:07:49 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.m2FgGTfm5f 00:21:01.644 [2024-04-15 18:07:49.720619] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:01.644 [2024-04-15 18:07:49.720692] bdev_nvme.c:6046:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:21:01.644 [2024-04-15 18:07:49.720708] bdev_nvme.c:6155:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.m2FgGTfm5f 00:21:01.644 request: 00:21:01.644 { 00:21:01.644 "name": "TLSTEST", 00:21:01.644 "trtype": "tcp", 00:21:01.644 "traddr": "10.0.0.2", 00:21:01.644 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:01.644 "adrfam": "ipv4", 00:21:01.644 "trsvcid": "4420", 00:21:01.644 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:01.644 "psk": "/tmp/tmp.m2FgGTfm5f", 00:21:01.644 "method": "bdev_nvme_attach_controller", 00:21:01.644 "req_id": 1 00:21:01.644 } 00:21:01.644 Got JSON-RPC error response 00:21:01.644 response: 00:21:01.644 { 00:21:01.644 "code": -1, 00:21:01.644 "message": "Operation not permitted" 00:21:01.644 } 00:21:01.644 18:07:49 -- target/tls.sh@36 -- # killprocess 3349384 00:21:01.644 18:07:49 -- common/autotest_common.sh@936 -- # '[' -z 3349384 ']' 00:21:01.644 18:07:49 -- common/autotest_common.sh@940 -- # kill -0 3349384 00:21:01.644 18:07:49 -- common/autotest_common.sh@941 -- # uname 00:21:01.644 18:07:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:01.644 
18:07:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3349384 00:21:01.644 18:07:49 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:21:01.644 18:07:49 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:21:01.644 18:07:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3349384' 00:21:01.644 killing process with pid 3349384 00:21:01.644 18:07:49 -- common/autotest_common.sh@955 -- # kill 3349384 00:21:01.644 Received shutdown signal, test time was about 10.000000 seconds 00:21:01.644 00:21:01.644 Latency(us) 00:21:01.644 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:01.644 =================================================================================================================== 00:21:01.644 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:01.644 18:07:49 -- common/autotest_common.sh@960 -- # wait 3349384 00:21:01.644 18:07:49 -- target/tls.sh@37 -- # return 1 00:21:01.644 18:07:49 -- common/autotest_common.sh@641 -- # es=1 00:21:01.644 18:07:49 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:21:01.644 18:07:49 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:21:01.644 18:07:49 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:21:01.644 18:07:49 -- target/tls.sh@174 -- # killprocess 3347651 00:21:01.644 18:07:49 -- common/autotest_common.sh@936 -- # '[' -z 3347651 ']' 00:21:01.644 18:07:49 -- common/autotest_common.sh@940 -- # kill -0 3347651 00:21:01.644 18:07:49 -- common/autotest_common.sh@941 -- # uname 00:21:01.644 18:07:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:01.644 18:07:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3347651 00:21:01.644 18:07:49 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:01.644 18:07:49 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:01.644 18:07:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3347651' 00:21:01.644 killing process with pid 3347651 00:21:01.644 18:07:49 -- common/autotest_common.sh@955 -- # kill 3347651 00:21:01.645 [2024-04-15 18:07:49.992406] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:01.645 18:07:49 -- common/autotest_common.sh@960 -- # wait 3347651 00:21:01.645 18:07:50 -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:21:01.645 18:07:50 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:01.645 18:07:50 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:01.645 18:07:50 -- common/autotest_common.sh@10 -- # set +x 00:21:01.645 18:07:50 -- nvmf/common.sh@470 -- # nvmfpid=3349543 00:21:01.645 18:07:50 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:01.645 18:07:50 -- nvmf/common.sh@471 -- # waitforlisten 3349543 00:21:01.645 18:07:50 -- common/autotest_common.sh@817 -- # '[' -z 3349543 ']' 00:21:01.645 18:07:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:01.645 18:07:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:01.645 18:07:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:01.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
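
The two failures surrounding this point share one cause: the key file was chmod'd to 0666. The initiator refuses to load it ("Incorrect permissions for PSK file", JSON-RPC code -1 above), and below the target's nvmf_subsystem_add_host trips the same check in tcp_load_psk (code -32603, "Internal error"). A guard of the kind sketched here keeps the file owner-only before either side sees it; secure_psk is a hypothetical helper and the 0o077 mask is inferred from the errors, not taken from SPDK source.

    import os
    import stat

    def secure_psk(path: str) -> None:
        # Inferred requirement: SPDK rejects PSK files with any group/other
        # access bits set, so clamp the mode to owner read/write only.
        mode = stat.S_IMODE(os.stat(path).st_mode)
        if mode & 0o077:
            os.chmod(path, 0o600)

    secure_psk("/tmp/tmp.m2FgGTfm5f")  # key path taken from the trace above
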
00:21:01.645 18:07:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:01.645 18:07:50 -- common/autotest_common.sh@10 -- # set +x 00:21:01.645 [2024-04-15 18:07:50.281527] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:21:01.645 [2024-04-15 18:07:50.281618] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:01.645 EAL: No free 2048 kB hugepages reported on node 1 00:21:01.645 [2024-04-15 18:07:50.357676] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:01.645 [2024-04-15 18:07:50.455109] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:01.645 [2024-04-15 18:07:50.455188] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:01.645 [2024-04-15 18:07:50.455212] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:01.645 [2024-04-15 18:07:50.455227] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:01.645 [2024-04-15 18:07:50.455240] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:01.645 [2024-04-15 18:07:50.455290] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:01.645 18:07:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:01.645 18:07:50 -- common/autotest_common.sh@850 -- # return 0 00:21:01.645 18:07:50 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:01.645 18:07:50 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:01.645 18:07:50 -- common/autotest_common.sh@10 -- # set +x 00:21:01.904 18:07:50 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:01.904 18:07:50 -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.m2FgGTfm5f 00:21:01.904 18:07:50 -- common/autotest_common.sh@638 -- # local es=0 00:21:01.904 18:07:50 -- common/autotest_common.sh@640 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.m2FgGTfm5f 00:21:01.904 18:07:50 -- common/autotest_common.sh@626 -- # local arg=setup_nvmf_tgt 00:21:01.904 18:07:50 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:01.904 18:07:50 -- common/autotest_common.sh@630 -- # type -t setup_nvmf_tgt 00:21:01.904 18:07:50 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:01.904 18:07:50 -- common/autotest_common.sh@641 -- # setup_nvmf_tgt /tmp/tmp.m2FgGTfm5f 00:21:01.904 18:07:50 -- target/tls.sh@49 -- # local key=/tmp/tmp.m2FgGTfm5f 00:21:01.904 18:07:50 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:02.163 [2024-04-15 18:07:50.919413] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:02.163 18:07:50 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:02.422 18:07:51 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:02.987 [2024-04-15 18:07:51.857969] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:02.987 [2024-04-15 18:07:51.858275] tcp.c: 
964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:02.988 18:07:51 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:03.556 malloc0 00:21:03.556 18:07:52 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:04.124 18:07:53 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.m2FgGTfm5f 00:21:04.691 [2024-04-15 18:07:53.342290] tcp.c:3562:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:21:04.691 [2024-04-15 18:07:53.342335] tcp.c:3648:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:21:04.691 [2024-04-15 18:07:53.342366] subsystem.c: 967:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:21:04.691 request: 00:21:04.691 { 00:21:04.691 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:04.691 "host": "nqn.2016-06.io.spdk:host1", 00:21:04.691 "psk": "/tmp/tmp.m2FgGTfm5f", 00:21:04.691 "method": "nvmf_subsystem_add_host", 00:21:04.691 "req_id": 1 00:21:04.691 } 00:21:04.691 Got JSON-RPC error response 00:21:04.691 response: 00:21:04.691 { 00:21:04.691 "code": -32603, 00:21:04.691 "message": "Internal error" 00:21:04.691 } 00:21:04.691 18:07:53 -- common/autotest_common.sh@641 -- # es=1 00:21:04.691 18:07:53 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:21:04.691 18:07:53 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:21:04.691 18:07:53 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:21:04.691 18:07:53 -- target/tls.sh@180 -- # killprocess 3349543 00:21:04.691 18:07:53 -- common/autotest_common.sh@936 -- # '[' -z 3349543 ']' 00:21:04.691 18:07:53 -- common/autotest_common.sh@940 -- # kill -0 3349543 00:21:04.691 18:07:53 -- common/autotest_common.sh@941 -- # uname 00:21:04.691 18:07:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:04.691 18:07:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3349543 00:21:04.691 18:07:53 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:04.691 18:07:53 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:04.691 18:07:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3349543' 00:21:04.691 killing process with pid 3349543 00:21:04.691 18:07:53 -- common/autotest_common.sh@955 -- # kill 3349543 00:21:04.691 18:07:53 -- common/autotest_common.sh@960 -- # wait 3349543 00:21:04.691 18:07:53 -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.m2FgGTfm5f 00:21:04.691 18:07:53 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:21:04.691 18:07:53 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:04.691 18:07:53 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:04.691 18:07:53 -- common/autotest_common.sh@10 -- # set +x 00:21:04.691 18:07:53 -- nvmf/common.sh@470 -- # nvmfpid=3349968 00:21:04.691 18:07:53 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:04.691 18:07:53 -- nvmf/common.sh@471 -- # waitforlisten 3349968 00:21:04.691 18:07:53 -- common/autotest_common.sh@817 -- # '[' -z 3349968 ']' 00:21:04.691 18:07:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:04.691 18:07:53 -- 
common/autotest_common.sh@822 -- # local max_retries=100 00:21:04.691 18:07:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:04.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:04.691 18:07:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:04.691 18:07:53 -- common/autotest_common.sh@10 -- # set +x 00:21:04.950 [2024-04-15 18:07:53.679381] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:21:04.950 [2024-04-15 18:07:53.679472] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:04.950 EAL: No free 2048 kB hugepages reported on node 1 00:21:04.950 [2024-04-15 18:07:53.756600] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:04.950 [2024-04-15 18:07:53.848855] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:04.950 [2024-04-15 18:07:53.848924] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:04.950 [2024-04-15 18:07:53.848948] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:04.950 [2024-04-15 18:07:53.848963] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:04.950 [2024-04-15 18:07:53.848976] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:04.950 [2024-04-15 18:07:53.849016] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:05.210 18:07:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:05.210 18:07:53 -- common/autotest_common.sh@850 -- # return 0 00:21:05.210 18:07:53 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:05.210 18:07:53 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:05.210 18:07:53 -- common/autotest_common.sh@10 -- # set +x 00:21:05.210 18:07:53 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:05.210 18:07:53 -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.m2FgGTfm5f 00:21:05.210 18:07:53 -- target/tls.sh@49 -- # local key=/tmp/tmp.m2FgGTfm5f 00:21:05.210 18:07:53 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:05.789 [2024-04-15 18:07:54.516405] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:05.789 18:07:54 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:06.364 18:07:55 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:06.932 [2024-04-15 18:07:55.627413] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:06.932 [2024-04-15 18:07:55.627697] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:06.932 18:07:55 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:07.530 malloc0 00:21:07.530 18:07:56 -- target/tls.sh@56 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:08.095 18:07:56 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.m2FgGTfm5f 00:21:08.352 [2024-04-15 18:07:57.136199] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:08.352 18:07:57 -- target/tls.sh@188 -- # bdevperf_pid=3350394 00:21:08.352 18:07:57 -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:08.352 18:07:57 -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:08.352 18:07:57 -- target/tls.sh@191 -- # waitforlisten 3350394 /var/tmp/bdevperf.sock 00:21:08.352 18:07:57 -- common/autotest_common.sh@817 -- # '[' -z 3350394 ']' 00:21:08.352 18:07:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:08.352 18:07:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:08.353 18:07:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:08.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:08.353 18:07:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:08.353 18:07:57 -- common/autotest_common.sh@10 -- # set +x 00:21:08.353 [2024-04-15 18:07:57.199418] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:21:08.353 [2024-04-15 18:07:57.199509] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3350394 ] 00:21:08.353 EAL: No free 2048 kB hugepages reported on node 1 00:21:08.353 [2024-04-15 18:07:57.269890] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:08.610 [2024-04-15 18:07:57.363409] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:08.610 18:07:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:08.610 18:07:57 -- common/autotest_common.sh@850 -- # return 0 00:21:08.610 18:07:57 -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.m2FgGTfm5f 00:21:09.176 [2024-04-15 18:07:57.855815] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:09.176 [2024-04-15 18:07:57.855954] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:09.176 TLSTESTn1 00:21:09.176 18:07:57 -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:21:09.743 18:07:58 -- target/tls.sh@196 -- # tgtconf='{ 00:21:09.743 "subsystems": [ 00:21:09.743 { 00:21:09.743 "subsystem": "keyring", 00:21:09.743 "config": [] 00:21:09.743 }, 00:21:09.743 { 00:21:09.743 "subsystem": "iobuf", 00:21:09.743 "config": [ 00:21:09.743 { 00:21:09.743 "method": "iobuf_set_options", 00:21:09.743 "params": { 00:21:09.743 
"small_pool_count": 8192, 00:21:09.743 "large_pool_count": 1024, 00:21:09.743 "small_bufsize": 8192, 00:21:09.743 "large_bufsize": 135168 00:21:09.743 } 00:21:09.743 } 00:21:09.743 ] 00:21:09.743 }, 00:21:09.743 { 00:21:09.743 "subsystem": "sock", 00:21:09.743 "config": [ 00:21:09.743 { 00:21:09.743 "method": "sock_impl_set_options", 00:21:09.743 "params": { 00:21:09.743 "impl_name": "posix", 00:21:09.743 "recv_buf_size": 2097152, 00:21:09.743 "send_buf_size": 2097152, 00:21:09.743 "enable_recv_pipe": true, 00:21:09.743 "enable_quickack": false, 00:21:09.743 "enable_placement_id": 0, 00:21:09.743 "enable_zerocopy_send_server": true, 00:21:09.743 "enable_zerocopy_send_client": false, 00:21:09.743 "zerocopy_threshold": 0, 00:21:09.743 "tls_version": 0, 00:21:09.743 "enable_ktls": false 00:21:09.743 } 00:21:09.743 }, 00:21:09.743 { 00:21:09.743 "method": "sock_impl_set_options", 00:21:09.743 "params": { 00:21:09.743 "impl_name": "ssl", 00:21:09.743 "recv_buf_size": 4096, 00:21:09.743 "send_buf_size": 4096, 00:21:09.743 "enable_recv_pipe": true, 00:21:09.743 "enable_quickack": false, 00:21:09.743 "enable_placement_id": 0, 00:21:09.743 "enable_zerocopy_send_server": true, 00:21:09.743 "enable_zerocopy_send_client": false, 00:21:09.743 "zerocopy_threshold": 0, 00:21:09.743 "tls_version": 0, 00:21:09.743 "enable_ktls": false 00:21:09.743 } 00:21:09.743 } 00:21:09.743 ] 00:21:09.743 }, 00:21:09.743 { 00:21:09.743 "subsystem": "vmd", 00:21:09.743 "config": [] 00:21:09.743 }, 00:21:09.743 { 00:21:09.743 "subsystem": "accel", 00:21:09.743 "config": [ 00:21:09.743 { 00:21:09.743 "method": "accel_set_options", 00:21:09.743 "params": { 00:21:09.743 "small_cache_size": 128, 00:21:09.743 "large_cache_size": 16, 00:21:09.743 "task_count": 2048, 00:21:09.743 "sequence_count": 2048, 00:21:09.743 "buf_count": 2048 00:21:09.743 } 00:21:09.743 } 00:21:09.743 ] 00:21:09.743 }, 00:21:09.743 { 00:21:09.743 "subsystem": "bdev", 00:21:09.743 "config": [ 00:21:09.743 { 00:21:09.743 "method": "bdev_set_options", 00:21:09.743 "params": { 00:21:09.743 "bdev_io_pool_size": 65535, 00:21:09.743 "bdev_io_cache_size": 256, 00:21:09.743 "bdev_auto_examine": true, 00:21:09.743 "iobuf_small_cache_size": 128, 00:21:09.743 "iobuf_large_cache_size": 16 00:21:09.743 } 00:21:09.743 }, 00:21:09.743 { 00:21:09.743 "method": "bdev_raid_set_options", 00:21:09.743 "params": { 00:21:09.743 "process_window_size_kb": 1024 00:21:09.743 } 00:21:09.743 }, 00:21:09.743 { 00:21:09.743 "method": "bdev_iscsi_set_options", 00:21:09.743 "params": { 00:21:09.743 "timeout_sec": 30 00:21:09.743 } 00:21:09.743 }, 00:21:09.743 { 00:21:09.743 "method": "bdev_nvme_set_options", 00:21:09.743 "params": { 00:21:09.743 "action_on_timeout": "none", 00:21:09.743 "timeout_us": 0, 00:21:09.743 "timeout_admin_us": 0, 00:21:09.743 "keep_alive_timeout_ms": 10000, 00:21:09.743 "arbitration_burst": 0, 00:21:09.743 "low_priority_weight": 0, 00:21:09.743 "medium_priority_weight": 0, 00:21:09.743 "high_priority_weight": 0, 00:21:09.743 "nvme_adminq_poll_period_us": 10000, 00:21:09.743 "nvme_ioq_poll_period_us": 0, 00:21:09.743 "io_queue_requests": 0, 00:21:09.743 "delay_cmd_submit": true, 00:21:09.743 "transport_retry_count": 4, 00:21:09.743 "bdev_retry_count": 3, 00:21:09.743 "transport_ack_timeout": 0, 00:21:09.743 "ctrlr_loss_timeout_sec": 0, 00:21:09.743 "reconnect_delay_sec": 0, 00:21:09.743 "fast_io_fail_timeout_sec": 0, 00:21:09.743 "disable_auto_failback": false, 00:21:09.743 "generate_uuids": false, 00:21:09.743 "transport_tos": 0, 00:21:09.743 "nvme_error_stat": 
false, 00:21:09.743 "rdma_srq_size": 0, 00:21:09.743 "io_path_stat": false, 00:21:09.743 "allow_accel_sequence": false, 00:21:09.743 "rdma_max_cq_size": 0, 00:21:09.743 "rdma_cm_event_timeout_ms": 0, 00:21:09.743 "dhchap_digests": [ 00:21:09.743 "sha256", 00:21:09.743 "sha384", 00:21:09.743 "sha512" 00:21:09.743 ], 00:21:09.743 "dhchap_dhgroups": [ 00:21:09.743 "null", 00:21:09.743 "ffdhe2048", 00:21:09.743 "ffdhe3072", 00:21:09.743 "ffdhe4096", 00:21:09.743 "ffdhe6144", 00:21:09.743 "ffdhe8192" 00:21:09.743 ] 00:21:09.743 } 00:21:09.743 }, 00:21:09.743 { 00:21:09.743 "method": "bdev_nvme_set_hotplug", 00:21:09.743 "params": { 00:21:09.743 "period_us": 100000, 00:21:09.743 "enable": false 00:21:09.743 } 00:21:09.743 }, 00:21:09.743 { 00:21:09.743 "method": "bdev_malloc_create", 00:21:09.743 "params": { 00:21:09.743 "name": "malloc0", 00:21:09.743 "num_blocks": 8192, 00:21:09.743 "block_size": 4096, 00:21:09.743 "physical_block_size": 4096, 00:21:09.743 "uuid": "659f3c5b-f328-4fe1-9616-5ad0485a5499", 00:21:09.743 "optimal_io_boundary": 0 00:21:09.743 } 00:21:09.743 }, 00:21:09.743 { 00:21:09.743 "method": "bdev_wait_for_examine" 00:21:09.743 } 00:21:09.743 ] 00:21:09.743 }, 00:21:09.743 { 00:21:09.743 "subsystem": "nbd", 00:21:09.743 "config": [] 00:21:09.743 }, 00:21:09.743 { 00:21:09.743 "subsystem": "scheduler", 00:21:09.743 "config": [ 00:21:09.743 { 00:21:09.743 "method": "framework_set_scheduler", 00:21:09.743 "params": { 00:21:09.743 "name": "static" 00:21:09.743 } 00:21:09.743 } 00:21:09.743 ] 00:21:09.743 }, 00:21:09.743 { 00:21:09.743 "subsystem": "nvmf", 00:21:09.743 "config": [ 00:21:09.743 { 00:21:09.743 "method": "nvmf_set_config", 00:21:09.743 "params": { 00:21:09.743 "discovery_filter": "match_any", 00:21:09.743 "admin_cmd_passthru": { 00:21:09.743 "identify_ctrlr": false 00:21:09.743 } 00:21:09.743 } 00:21:09.743 }, 00:21:09.743 { 00:21:09.743 "method": "nvmf_set_max_subsystems", 00:21:09.743 "params": { 00:21:09.743 "max_subsystems": 1024 00:21:09.743 } 00:21:09.743 }, 00:21:09.743 { 00:21:09.743 "method": "nvmf_set_crdt", 00:21:09.743 "params": { 00:21:09.743 "crdt1": 0, 00:21:09.743 "crdt2": 0, 00:21:09.743 "crdt3": 0 00:21:09.743 } 00:21:09.743 }, 00:21:09.743 { 00:21:09.743 "method": "nvmf_create_transport", 00:21:09.743 "params": { 00:21:09.743 "trtype": "TCP", 00:21:09.743 "max_queue_depth": 128, 00:21:09.743 "max_io_qpairs_per_ctrlr": 127, 00:21:09.743 "in_capsule_data_size": 4096, 00:21:09.743 "max_io_size": 131072, 00:21:09.743 "io_unit_size": 131072, 00:21:09.743 "max_aq_depth": 128, 00:21:09.743 "num_shared_buffers": 511, 00:21:09.743 "buf_cache_size": 4294967295, 00:21:09.743 "dif_insert_or_strip": false, 00:21:09.743 "zcopy": false, 00:21:09.743 "c2h_success": false, 00:21:09.743 "sock_priority": 0, 00:21:09.743 "abort_timeout_sec": 1, 00:21:09.743 "ack_timeout": 0 00:21:09.743 } 00:21:09.743 }, 00:21:09.743 { 00:21:09.743 "method": "nvmf_create_subsystem", 00:21:09.743 "params": { 00:21:09.743 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:09.743 "allow_any_host": false, 00:21:09.743 "serial_number": "SPDK00000000000001", 00:21:09.743 "model_number": "SPDK bdev Controller", 00:21:09.743 "max_namespaces": 10, 00:21:09.743 "min_cntlid": 1, 00:21:09.743 "max_cntlid": 65519, 00:21:09.743 "ana_reporting": false 00:21:09.743 } 00:21:09.743 }, 00:21:09.743 { 00:21:09.743 "method": "nvmf_subsystem_add_host", 00:21:09.743 "params": { 00:21:09.743 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:09.743 "host": "nqn.2016-06.io.spdk:host1", 00:21:09.743 "psk": 
"/tmp/tmp.m2FgGTfm5f" 00:21:09.743 } 00:21:09.743 }, 00:21:09.743 { 00:21:09.743 "method": "nvmf_subsystem_add_ns", 00:21:09.743 "params": { 00:21:09.743 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:09.743 "namespace": { 00:21:09.743 "nsid": 1, 00:21:09.743 "bdev_name": "malloc0", 00:21:09.743 "nguid": "659F3C5BF3284FE196165AD0485A5499", 00:21:09.743 "uuid": "659f3c5b-f328-4fe1-9616-5ad0485a5499", 00:21:09.743 "no_auto_visible": false 00:21:09.743 } 00:21:09.743 } 00:21:09.743 }, 00:21:09.743 { 00:21:09.743 "method": "nvmf_subsystem_add_listener", 00:21:09.743 "params": { 00:21:09.743 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:09.743 "listen_address": { 00:21:09.743 "trtype": "TCP", 00:21:09.743 "adrfam": "IPv4", 00:21:09.743 "traddr": "10.0.0.2", 00:21:09.743 "trsvcid": "4420" 00:21:09.743 }, 00:21:09.743 "secure_channel": true 00:21:09.743 } 00:21:09.743 } 00:21:09.743 ] 00:21:09.743 } 00:21:09.743 ] 00:21:09.743 }' 00:21:09.743 18:07:58 -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:10.002 18:07:58 -- target/tls.sh@197 -- # bdevperfconf='{ 00:21:10.002 "subsystems": [ 00:21:10.002 { 00:21:10.002 "subsystem": "keyring", 00:21:10.002 "config": [] 00:21:10.002 }, 00:21:10.002 { 00:21:10.002 "subsystem": "iobuf", 00:21:10.002 "config": [ 00:21:10.002 { 00:21:10.002 "method": "iobuf_set_options", 00:21:10.002 "params": { 00:21:10.002 "small_pool_count": 8192, 00:21:10.002 "large_pool_count": 1024, 00:21:10.002 "small_bufsize": 8192, 00:21:10.002 "large_bufsize": 135168 00:21:10.002 } 00:21:10.002 } 00:21:10.002 ] 00:21:10.002 }, 00:21:10.002 { 00:21:10.002 "subsystem": "sock", 00:21:10.002 "config": [ 00:21:10.002 { 00:21:10.002 "method": "sock_impl_set_options", 00:21:10.002 "params": { 00:21:10.002 "impl_name": "posix", 00:21:10.002 "recv_buf_size": 2097152, 00:21:10.002 "send_buf_size": 2097152, 00:21:10.002 "enable_recv_pipe": true, 00:21:10.002 "enable_quickack": false, 00:21:10.002 "enable_placement_id": 0, 00:21:10.002 "enable_zerocopy_send_server": true, 00:21:10.002 "enable_zerocopy_send_client": false, 00:21:10.002 "zerocopy_threshold": 0, 00:21:10.002 "tls_version": 0, 00:21:10.002 "enable_ktls": false 00:21:10.002 } 00:21:10.002 }, 00:21:10.002 { 00:21:10.002 "method": "sock_impl_set_options", 00:21:10.002 "params": { 00:21:10.002 "impl_name": "ssl", 00:21:10.002 "recv_buf_size": 4096, 00:21:10.002 "send_buf_size": 4096, 00:21:10.002 "enable_recv_pipe": true, 00:21:10.002 "enable_quickack": false, 00:21:10.002 "enable_placement_id": 0, 00:21:10.002 "enable_zerocopy_send_server": true, 00:21:10.002 "enable_zerocopy_send_client": false, 00:21:10.002 "zerocopy_threshold": 0, 00:21:10.002 "tls_version": 0, 00:21:10.002 "enable_ktls": false 00:21:10.002 } 00:21:10.002 } 00:21:10.002 ] 00:21:10.002 }, 00:21:10.002 { 00:21:10.002 "subsystem": "vmd", 00:21:10.002 "config": [] 00:21:10.002 }, 00:21:10.002 { 00:21:10.002 "subsystem": "accel", 00:21:10.002 "config": [ 00:21:10.002 { 00:21:10.002 "method": "accel_set_options", 00:21:10.002 "params": { 00:21:10.002 "small_cache_size": 128, 00:21:10.002 "large_cache_size": 16, 00:21:10.002 "task_count": 2048, 00:21:10.002 "sequence_count": 2048, 00:21:10.002 "buf_count": 2048 00:21:10.002 } 00:21:10.002 } 00:21:10.002 ] 00:21:10.002 }, 00:21:10.002 { 00:21:10.002 "subsystem": "bdev", 00:21:10.002 "config": [ 00:21:10.002 { 00:21:10.002 "method": "bdev_set_options", 00:21:10.002 "params": { 00:21:10.002 "bdev_io_pool_size": 65535, 00:21:10.002 
"bdev_io_cache_size": 256, 00:21:10.002 "bdev_auto_examine": true, 00:21:10.002 "iobuf_small_cache_size": 128, 00:21:10.002 "iobuf_large_cache_size": 16 00:21:10.002 } 00:21:10.002 }, 00:21:10.002 { 00:21:10.002 "method": "bdev_raid_set_options", 00:21:10.002 "params": { 00:21:10.002 "process_window_size_kb": 1024 00:21:10.002 } 00:21:10.002 }, 00:21:10.002 { 00:21:10.002 "method": "bdev_iscsi_set_options", 00:21:10.002 "params": { 00:21:10.002 "timeout_sec": 30 00:21:10.002 } 00:21:10.002 }, 00:21:10.002 { 00:21:10.002 "method": "bdev_nvme_set_options", 00:21:10.002 "params": { 00:21:10.002 "action_on_timeout": "none", 00:21:10.002 "timeout_us": 0, 00:21:10.002 "timeout_admin_us": 0, 00:21:10.002 "keep_alive_timeout_ms": 10000, 00:21:10.002 "arbitration_burst": 0, 00:21:10.002 "low_priority_weight": 0, 00:21:10.002 "medium_priority_weight": 0, 00:21:10.002 "high_priority_weight": 0, 00:21:10.002 "nvme_adminq_poll_period_us": 10000, 00:21:10.002 "nvme_ioq_poll_period_us": 0, 00:21:10.002 "io_queue_requests": 512, 00:21:10.002 "delay_cmd_submit": true, 00:21:10.002 "transport_retry_count": 4, 00:21:10.002 "bdev_retry_count": 3, 00:21:10.002 "transport_ack_timeout": 0, 00:21:10.002 "ctrlr_loss_timeout_sec": 0, 00:21:10.002 "reconnect_delay_sec": 0, 00:21:10.002 "fast_io_fail_timeout_sec": 0, 00:21:10.003 "disable_auto_failback": false, 00:21:10.003 "generate_uuids": false, 00:21:10.003 "transport_tos": 0, 00:21:10.003 "nvme_error_stat": false, 00:21:10.003 "rdma_srq_size": 0, 00:21:10.003 "io_path_stat": false, 00:21:10.003 "allow_accel_sequence": false, 00:21:10.003 "rdma_max_cq_size": 0, 00:21:10.003 "rdma_cm_event_timeout_ms": 0, 00:21:10.003 "dhchap_digests": [ 00:21:10.003 "sha256", 00:21:10.003 "sha384", 00:21:10.003 "sha512" 00:21:10.003 ], 00:21:10.003 "dhchap_dhgroups": [ 00:21:10.003 "null", 00:21:10.003 "ffdhe2048", 00:21:10.003 "ffdhe3072", 00:21:10.003 "ffdhe4096", 00:21:10.003 "ffdhe6144", 00:21:10.003 "ffdhe8192" 00:21:10.003 ] 00:21:10.003 } 00:21:10.003 }, 00:21:10.003 { 00:21:10.003 "method": "bdev_nvme_attach_controller", 00:21:10.003 "params": { 00:21:10.003 "name": "TLSTEST", 00:21:10.003 "trtype": "TCP", 00:21:10.003 "adrfam": "IPv4", 00:21:10.003 "traddr": "10.0.0.2", 00:21:10.003 "trsvcid": "4420", 00:21:10.003 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:10.003 "prchk_reftag": false, 00:21:10.003 "prchk_guard": false, 00:21:10.003 "ctrlr_loss_timeout_sec": 0, 00:21:10.003 "reconnect_delay_sec": 0, 00:21:10.003 "fast_io_fail_timeout_sec": 0, 00:21:10.003 "psk": "/tmp/tmp.m2FgGTfm5f", 00:21:10.003 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:10.003 "hdgst": false, 00:21:10.003 "ddgst": false 00:21:10.003 } 00:21:10.003 }, 00:21:10.003 { 00:21:10.003 "method": "bdev_nvme_set_hotplug", 00:21:10.003 "params": { 00:21:10.003 "period_us": 100000, 00:21:10.003 "enable": false 00:21:10.003 } 00:21:10.003 }, 00:21:10.003 { 00:21:10.003 "method": "bdev_wait_for_examine" 00:21:10.003 } 00:21:10.003 ] 00:21:10.003 }, 00:21:10.003 { 00:21:10.003 "subsystem": "nbd", 00:21:10.003 "config": [] 00:21:10.003 } 00:21:10.003 ] 00:21:10.003 }' 00:21:10.003 18:07:58 -- target/tls.sh@199 -- # killprocess 3350394 00:21:10.003 18:07:58 -- common/autotest_common.sh@936 -- # '[' -z 3350394 ']' 00:21:10.003 18:07:58 -- common/autotest_common.sh@940 -- # kill -0 3350394 00:21:10.003 18:07:58 -- common/autotest_common.sh@941 -- # uname 00:21:10.003 18:07:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:10.003 18:07:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o 
comm= 3350394 00:21:10.261 18:07:58 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:21:10.261 18:07:58 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:21:10.261 18:07:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3350394' 00:21:10.261 killing process with pid 3350394 00:21:10.261 18:07:58 -- common/autotest_common.sh@955 -- # kill 3350394 00:21:10.261 Received shutdown signal, test time was about 10.000000 seconds 00:21:10.261 00:21:10.261 Latency(us) 00:21:10.261 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:10.261 =================================================================================================================== 00:21:10.261 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:10.261 [2024-04-15 18:07:58.957192] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:10.261 18:07:58 -- common/autotest_common.sh@960 -- # wait 3350394 00:21:10.261 18:07:59 -- target/tls.sh@200 -- # killprocess 3349968 00:21:10.261 18:07:59 -- common/autotest_common.sh@936 -- # '[' -z 3349968 ']' 00:21:10.261 18:07:59 -- common/autotest_common.sh@940 -- # kill -0 3349968 00:21:10.261 18:07:59 -- common/autotest_common.sh@941 -- # uname 00:21:10.261 18:07:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:10.261 18:07:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3349968 00:21:10.261 18:07:59 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:10.261 18:07:59 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:10.261 18:07:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3349968' 00:21:10.261 killing process with pid 3349968 00:21:10.261 18:07:59 -- common/autotest_common.sh@955 -- # kill 3349968 00:21:10.261 [2024-04-15 18:07:59.209027] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:10.261 18:07:59 -- common/autotest_common.sh@960 -- # wait 3349968 00:21:10.520 18:07:59 -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:21:10.520 18:07:59 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:10.520 18:07:59 -- target/tls.sh@203 -- # echo '{ 00:21:10.520 "subsystems": [ 00:21:10.520 { 00:21:10.520 "subsystem": "keyring", 00:21:10.520 "config": [] 00:21:10.520 }, 00:21:10.520 { 00:21:10.520 "subsystem": "iobuf", 00:21:10.520 "config": [ 00:21:10.520 { 00:21:10.520 "method": "iobuf_set_options", 00:21:10.520 "params": { 00:21:10.520 "small_pool_count": 8192, 00:21:10.520 "large_pool_count": 1024, 00:21:10.520 "small_bufsize": 8192, 00:21:10.520 "large_bufsize": 135168 00:21:10.520 } 00:21:10.520 } 00:21:10.520 ] 00:21:10.520 }, 00:21:10.520 { 00:21:10.520 "subsystem": "sock", 00:21:10.520 "config": [ 00:21:10.520 { 00:21:10.520 "method": "sock_impl_set_options", 00:21:10.520 "params": { 00:21:10.520 "impl_name": "posix", 00:21:10.520 "recv_buf_size": 2097152, 00:21:10.520 "send_buf_size": 2097152, 00:21:10.520 "enable_recv_pipe": true, 00:21:10.520 "enable_quickack": false, 00:21:10.520 "enable_placement_id": 0, 00:21:10.520 "enable_zerocopy_send_server": true, 00:21:10.520 "enable_zerocopy_send_client": false, 00:21:10.520 "zerocopy_threshold": 0, 00:21:10.520 "tls_version": 0, 00:21:10.520 "enable_ktls": false 00:21:10.520 } 00:21:10.520 }, 00:21:10.520 { 00:21:10.520 "method": 
"sock_impl_set_options", 00:21:10.520 "params": { 00:21:10.520 "impl_name": "ssl", 00:21:10.520 "recv_buf_size": 4096, 00:21:10.520 "send_buf_size": 4096, 00:21:10.520 "enable_recv_pipe": true, 00:21:10.520 "enable_quickack": false, 00:21:10.520 "enable_placement_id": 0, 00:21:10.520 "enable_zerocopy_send_server": true, 00:21:10.520 "enable_zerocopy_send_client": false, 00:21:10.520 "zerocopy_threshold": 0, 00:21:10.520 "tls_version": 0, 00:21:10.520 "enable_ktls": false 00:21:10.520 } 00:21:10.520 } 00:21:10.520 ] 00:21:10.520 }, 00:21:10.520 { 00:21:10.520 "subsystem": "vmd", 00:21:10.520 "config": [] 00:21:10.520 }, 00:21:10.520 { 00:21:10.520 "subsystem": "accel", 00:21:10.520 "config": [ 00:21:10.520 { 00:21:10.520 "method": "accel_set_options", 00:21:10.520 "params": { 00:21:10.520 "small_cache_size": 128, 00:21:10.520 "large_cache_size": 16, 00:21:10.520 "task_count": 2048, 00:21:10.520 "sequence_count": 2048, 00:21:10.520 "buf_count": 2048 00:21:10.520 } 00:21:10.520 } 00:21:10.520 ] 00:21:10.520 }, 00:21:10.520 { 00:21:10.520 "subsystem": "bdev", 00:21:10.520 "config": [ 00:21:10.520 { 00:21:10.520 "method": "bdev_set_options", 00:21:10.520 "params": { 00:21:10.520 "bdev_io_pool_size": 65535, 00:21:10.520 "bdev_io_cache_size": 256, 00:21:10.520 "bdev_auto_examine": true, 00:21:10.520 "iobuf_small_cache_size": 128, 00:21:10.520 "iobuf_large_cache_size": 16 00:21:10.520 } 00:21:10.520 }, 00:21:10.520 { 00:21:10.520 "method": "bdev_raid_set_options", 00:21:10.520 "params": { 00:21:10.520 "process_window_size_kb": 1024 00:21:10.520 } 00:21:10.520 }, 00:21:10.520 { 00:21:10.520 "method": "bdev_iscsi_set_options", 00:21:10.520 "params": { 00:21:10.520 "timeout_sec": 30 00:21:10.520 } 00:21:10.520 }, 00:21:10.520 { 00:21:10.520 "method": "bdev_nvme_set_options", 00:21:10.520 "params": { 00:21:10.520 "action_on_timeout": "none", 00:21:10.520 "timeout_us": 0, 00:21:10.520 "timeout_admin_us": 0, 00:21:10.520 "keep_alive_timeout_ms": 10000, 00:21:10.520 "arbitration_burst": 0, 00:21:10.520 "low_priority_weight": 0, 00:21:10.520 "medium_priority_weight": 0, 00:21:10.520 "high_priority_weight": 0, 00:21:10.520 "nvme_adminq_poll_period_us": 10000, 00:21:10.520 "nvme_ioq_poll_period_us": 0, 00:21:10.520 "io_queue_requests": 0, 00:21:10.520 "delay_cmd_submit": true, 00:21:10.520 "transport_retry_count": 4, 00:21:10.520 "bdev_retry_count": 3, 00:21:10.520 "transport_ack_timeout": 0, 00:21:10.520 "ctrlr_loss_timeout_sec": 0, 00:21:10.520 "reconnect_delay_sec": 0, 00:21:10.520 "fast_io_fail_timeout_sec": 0, 00:21:10.520 "disable_auto_failback": false, 00:21:10.520 "generate_uuids": false, 00:21:10.520 "transport_tos": 0, 00:21:10.520 "nvme_error_stat": false, 00:21:10.520 "rdma_srq_size": 0, 00:21:10.520 "io_path_stat": false, 00:21:10.520 "allow_accel_sequence": false, 00:21:10.520 "rdma_max_cq_size": 0, 00:21:10.520 "rdma_cm_event_timeout_ms": 0, 00:21:10.520 "dhchap_digests": [ 00:21:10.520 "sha256", 00:21:10.520 "sha384", 00:21:10.520 "sha512" 00:21:10.520 ], 00:21:10.520 "dhchap_dhgroups": [ 00:21:10.520 "null", 00:21:10.520 "ffdhe2048", 00:21:10.520 "ffdhe3072", 00:21:10.520 "ffdhe4096", 00:21:10.521 "ffdhe6144", 00:21:10.521 "ffdhe8192" 00:21:10.521 ] 00:21:10.521 } 00:21:10.521 }, 00:21:10.521 { 00:21:10.521 "method": "bdev_nvme_set_hotplug", 00:21:10.521 "params": { 00:21:10.521 "period_us": 100000, 00:21:10.521 "enable": false 00:21:10.521 } 00:21:10.521 }, 00:21:10.521 { 00:21:10.521 "method": "bdev_malloc_create", 00:21:10.521 "params": { 00:21:10.521 "name": "malloc0", 00:21:10.521 
"num_blocks": 8192, 00:21:10.521 "block_size": 4096, 00:21:10.521 "physical_block_size": 4096, 00:21:10.521 "uuid": "659f3c5b-f328-4fe1-9616-5ad0485a5499", 00:21:10.521 "optimal_io_boundary": 0 00:21:10.521 } 00:21:10.521 }, 00:21:10.521 { 00:21:10.521 "method": "bdev_wait_for_examine" 00:21:10.521 } 00:21:10.521 ] 00:21:10.521 }, 00:21:10.521 { 00:21:10.521 "subsystem": "nbd", 00:21:10.521 "config": [] 00:21:10.521 }, 00:21:10.521 { 00:21:10.521 "subsystem": "scheduler", 00:21:10.521 "config": [ 00:21:10.521 { 00:21:10.521 "method": "framework_set_scheduler", 00:21:10.521 "params": { 00:21:10.521 "name": "static" 00:21:10.521 } 00:21:10.521 } 00:21:10.521 ] 00:21:10.521 }, 00:21:10.521 { 00:21:10.521 "subsystem": "nvmf", 00:21:10.521 "config": [ 00:21:10.521 { 00:21:10.521 "method": "nvmf_set_config", 00:21:10.521 "params": { 00:21:10.521 "discovery_filter": "match_any", 00:21:10.521 "admin_cmd_passthru": { 00:21:10.521 "identify_ctrlr": false 00:21:10.521 } 00:21:10.521 } 00:21:10.521 }, 00:21:10.521 { 00:21:10.521 "method": "nvmf_set_max_subsystems", 00:21:10.521 "params": { 00:21:10.521 "max_subsystems": 1024 00:21:10.521 } 00:21:10.521 }, 00:21:10.521 { 00:21:10.521 "method": "nvmf_set_crdt", 00:21:10.521 "params": { 00:21:10.521 "crdt1": 0, 00:21:10.521 "crdt2": 0, 00:21:10.521 "crdt3": 0 00:21:10.521 } 00:21:10.521 }, 00:21:10.521 { 00:21:10.521 "method": "nvmf_create_transport", 00:21:10.521 "params": { 00:21:10.521 "trtype": "TCP", 00:21:10.521 "max_queue_depth": 128, 00:21:10.521 "max_io_qpairs_per_ctrlr": 127, 00:21:10.521 "in_capsule_data_size": 4096, 00:21:10.521 "max_io_size": 131072, 00:21:10.521 "io_unit_size": 131072, 00:21:10.521 "max_aq_depth": 128, 00:21:10.521 "num_shared_buffers": 511, 00:21:10.521 "buf_cache_size": 4294967295, 00:21:10.521 "dif_insert_or_strip": false, 00:21:10.521 "zcopy": false, 00:21:10.521 "c2h_success": false, 00:21:10.521 "sock_priority": 0, 00:21:10.521 "abort_timeout_sec": 1, 00:21:10.521 "ack_timeout": 0 00:21:10.521 } 00:21:10.521 }, 00:21:10.521 { 00:21:10.521 "method": "nvmf_create_subsystem", 00:21:10.521 "params": { 00:21:10.521 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:10.521 "allow_any_host": false, 00:21:10.521 "serial_number": "SPDK00000000000001", 00:21:10.521 "model_number": "SPDK bdev Controller", 00:21:10.521 "max_namespaces": 10, 00:21:10.521 "min_cntlid": 1, 00:21:10.521 "max_cntlid": 65519, 00:21:10.521 "ana_reporting": false 00:21:10.521 } 00:21:10.521 }, 00:21:10.521 { 00:21:10.521 "method": "nvmf_subsystem_add_host", 00:21:10.521 "params": { 00:21:10.521 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:10.521 "host": "nqn.2016-06.io.spdk:host1", 00:21:10.521 "psk": "/tmp/tmp.m2FgGTfm5f" 00:21:10.521 } 00:21:10.521 }, 00:21:10.521 { 00:21:10.521 "method": "nvmf_subsystem_add_ns", 00:21:10.521 "params": { 00:21:10.521 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:10.521 "namespace": { 00:21:10.521 "nsid": 1, 00:21:10.521 "bdev_name": "malloc0", 00:21:10.521 "nguid": "659F3C5BF3284FE196165AD0485A5499", 00:21:10.521 "uuid": "659f3c5b-f328-4fe1-9616-5ad0485a5499", 00:21:10.521 "no_auto_visible": false 00:21:10.521 } 00:21:10.521 } 00:21:10.521 }, 00:21:10.521 { 00:21:10.521 "method": "nvmf_subsystem_add_listener", 00:21:10.521 "params": { 00:21:10.521 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:10.521 "listen_address": { 00:21:10.521 "trtype": "TCP", 00:21:10.521 "adrfam": "IPv4", 00:21:10.521 "traddr": "10.0.0.2", 00:21:10.521 "trsvcid": "4420" 00:21:10.521 }, 00:21:10.521 "secure_channel": true 00:21:10.521 } 00:21:10.521 } 
00:21:10.521 ] 00:21:10.521 } 00:21:10.521 ] 00:21:10.521 }' 00:21:10.521 18:07:59 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:10.521 18:07:59 -- common/autotest_common.sh@10 -- # set +x 00:21:10.521 18:07:59 -- nvmf/common.sh@470 -- # nvmfpid=3350672 00:21:10.521 18:07:59 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:21:10.521 18:07:59 -- nvmf/common.sh@471 -- # waitforlisten 3350672 00:21:10.521 18:07:59 -- common/autotest_common.sh@817 -- # '[' -z 3350672 ']' 00:21:10.521 18:07:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:10.521 18:07:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:10.521 18:07:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:10.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:10.521 18:07:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:10.521 18:07:59 -- common/autotest_common.sh@10 -- # set +x 00:21:10.781 [2024-04-15 18:07:59.502288] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:21:10.781 [2024-04-15 18:07:59.502400] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:10.781 EAL: No free 2048 kB hugepages reported on node 1 00:21:10.781 [2024-04-15 18:07:59.585797] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:10.781 [2024-04-15 18:07:59.681461] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:10.781 [2024-04-15 18:07:59.681534] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:10.781 [2024-04-15 18:07:59.681552] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:10.781 [2024-04-15 18:07:59.681567] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:10.781 [2024-04-15 18:07:59.681580] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
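The relaunch above passes -c /dev/fd/62: rather than replaying RPCs one at a time, the JSON captured earlier by save_config is fed back to the new target through a file descriptor at startup. A minimal sketch of the same pattern, assuming a configured target and rpc.py on the path:

    # Capture the live configuration as JSON, then boot a fresh target from it.
    # Process substitution <(...) appears as /dev/fd/NN inside the child, which
    # is the -c /dev/fd/62 argument visible in the trace above.
    tgtconf=$(scripts/rpc.py save_config)
    build/bin/nvmf_tgt -m 0x2 -c <(echo "$tgtconf")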
00:21:10.781 [2024-04-15 18:07:59.681673] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:11.040 [2024-04-15 18:07:59.912724] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:11.040 [2024-04-15 18:07:59.928700] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:11.040 [2024-04-15 18:07:59.944731] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:11.040 [2024-04-15 18:07:59.955284] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:11.607 18:08:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:11.607 18:08:00 -- common/autotest_common.sh@850 -- # return 0 00:21:11.607 18:08:00 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:11.607 18:08:00 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:11.607 18:08:00 -- common/autotest_common.sh@10 -- # set +x 00:21:11.607 18:08:00 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:11.607 18:08:00 -- target/tls.sh@207 -- # bdevperf_pid=3350825 00:21:11.607 18:08:00 -- target/tls.sh@208 -- # waitforlisten 3350825 /var/tmp/bdevperf.sock 00:21:11.607 18:08:00 -- common/autotest_common.sh@817 -- # '[' -z 3350825 ']' 00:21:11.607 18:08:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:11.607 18:08:00 -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:21:11.607 18:08:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:11.607 18:08:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:11.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
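The "Waiting for process..." message comes from the waitforlisten helper, which simply polls the RPC socket until the freshly forked bdevperf answers. A rough shell equivalent, using the max_retries=100 budget the xtrace shows (wait_for_rpc is an illustrative name, not the real helper):

    # Poll a UNIX-domain RPC socket until the SPDK app is ready to serve RPCs.
    wait_for_rpc() {
        local sock=$1 retries=100
        while (( retries-- > 0 )); do
            scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null && return 0
            sleep 0.1
        done
        return 1   # the process never came up
    }
    wait_for_rpc /var/tmp/bdevperf.sock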
00:21:11.607 18:08:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:11.607 18:08:00 -- common/autotest_common.sh@10 -- # set +x 00:21:11.607 18:08:00 -- target/tls.sh@204 -- # echo '{ 00:21:11.607 "subsystems": [ 00:21:11.607 { 00:21:11.607 "subsystem": "keyring", 00:21:11.607 "config": [] 00:21:11.607 }, 00:21:11.607 { 00:21:11.607 "subsystem": "iobuf", 00:21:11.607 "config": [ 00:21:11.607 { 00:21:11.607 "method": "iobuf_set_options", 00:21:11.607 "params": { 00:21:11.607 "small_pool_count": 8192, 00:21:11.607 "large_pool_count": 1024, 00:21:11.607 "small_bufsize": 8192, 00:21:11.607 "large_bufsize": 135168 00:21:11.607 } 00:21:11.607 } 00:21:11.607 ] 00:21:11.607 }, 00:21:11.607 { 00:21:11.607 "subsystem": "sock", 00:21:11.607 "config": [ 00:21:11.607 { 00:21:11.607 "method": "sock_impl_set_options", 00:21:11.607 "params": { 00:21:11.607 "impl_name": "posix", 00:21:11.607 "recv_buf_size": 2097152, 00:21:11.607 "send_buf_size": 2097152, 00:21:11.607 "enable_recv_pipe": true, 00:21:11.607 "enable_quickack": false, 00:21:11.607 "enable_placement_id": 0, 00:21:11.607 "enable_zerocopy_send_server": true, 00:21:11.607 "enable_zerocopy_send_client": false, 00:21:11.607 "zerocopy_threshold": 0, 00:21:11.607 "tls_version": 0, 00:21:11.607 "enable_ktls": false 00:21:11.607 } 00:21:11.607 }, 00:21:11.607 { 00:21:11.607 "method": "sock_impl_set_options", 00:21:11.607 "params": { 00:21:11.607 "impl_name": "ssl", 00:21:11.607 "recv_buf_size": 4096, 00:21:11.607 "send_buf_size": 4096, 00:21:11.607 "enable_recv_pipe": true, 00:21:11.607 "enable_quickack": false, 00:21:11.607 "enable_placement_id": 0, 00:21:11.607 "enable_zerocopy_send_server": true, 00:21:11.607 "enable_zerocopy_send_client": false, 00:21:11.607 "zerocopy_threshold": 0, 00:21:11.608 "tls_version": 0, 00:21:11.608 "enable_ktls": false 00:21:11.608 } 00:21:11.608 } 00:21:11.608 ] 00:21:11.608 }, 00:21:11.608 { 00:21:11.608 "subsystem": "vmd", 00:21:11.608 "config": [] 00:21:11.608 }, 00:21:11.608 { 00:21:11.608 "subsystem": "accel", 00:21:11.608 "config": [ 00:21:11.608 { 00:21:11.608 "method": "accel_set_options", 00:21:11.608 "params": { 00:21:11.608 "small_cache_size": 128, 00:21:11.608 "large_cache_size": 16, 00:21:11.608 "task_count": 2048, 00:21:11.608 "sequence_count": 2048, 00:21:11.608 "buf_count": 2048 00:21:11.608 } 00:21:11.608 } 00:21:11.608 ] 00:21:11.608 }, 00:21:11.608 { 00:21:11.608 "subsystem": "bdev", 00:21:11.608 "config": [ 00:21:11.608 { 00:21:11.608 "method": "bdev_set_options", 00:21:11.608 "params": { 00:21:11.608 "bdev_io_pool_size": 65535, 00:21:11.608 "bdev_io_cache_size": 256, 00:21:11.608 "bdev_auto_examine": true, 00:21:11.608 "iobuf_small_cache_size": 128, 00:21:11.608 "iobuf_large_cache_size": 16 00:21:11.608 } 00:21:11.608 }, 00:21:11.608 { 00:21:11.608 "method": "bdev_raid_set_options", 00:21:11.608 "params": { 00:21:11.608 "process_window_size_kb": 1024 00:21:11.608 } 00:21:11.608 }, 00:21:11.608 { 00:21:11.608 "method": "bdev_iscsi_set_options", 00:21:11.608 "params": { 00:21:11.608 "timeout_sec": 30 00:21:11.608 } 00:21:11.608 }, 00:21:11.608 { 00:21:11.608 "method": "bdev_nvme_set_options", 00:21:11.608 "params": { 00:21:11.608 "action_on_timeout": "none", 00:21:11.608 "timeout_us": 0, 00:21:11.608 "timeout_admin_us": 0, 00:21:11.608 "keep_alive_timeout_ms": 10000, 00:21:11.608 "arbitration_burst": 0, 00:21:11.608 "low_priority_weight": 0, 00:21:11.608 "medium_priority_weight": 0, 00:21:11.608 "high_priority_weight": 0, 00:21:11.608 "nvme_adminq_poll_period_us": 10000, 00:21:11.608 
"nvme_ioq_poll_period_us": 0, 00:21:11.608 "io_queue_requests": 512, 00:21:11.608 "delay_cmd_submit": true, 00:21:11.608 "transport_retry_count": 4, 00:21:11.608 "bdev_retry_count": 3, 00:21:11.608 "transport_ack_timeout": 0, 00:21:11.608 "ctrlr_loss_timeout_sec": 0, 00:21:11.608 "reconnect_delay_sec": 0, 00:21:11.608 "fast_io_fail_timeout_sec": 0, 00:21:11.608 "disable_auto_failback": false, 00:21:11.608 "generate_uuids": false, 00:21:11.608 "transport_tos": 0, 00:21:11.608 "nvme_error_stat": false, 00:21:11.608 "rdma_srq_size": 0, 00:21:11.608 "io_path_stat": false, 00:21:11.608 "allow_accel_sequence": false, 00:21:11.608 "rdma_max_cq_size": 0, 00:21:11.608 "rdma_cm_event_timeout_ms": 0, 00:21:11.608 "dhchap_digests": [ 00:21:11.608 "sha256", 00:21:11.608 "sha384", 00:21:11.608 "sha512" 00:21:11.608 ], 00:21:11.608 "dhchap_dhgroups": [ 00:21:11.608 "null", 00:21:11.608 "ffdhe2048", 00:21:11.608 "ffdhe3072", 00:21:11.608 "ffdhe4096", 00:21:11.608 "ffdhe6144", 00:21:11.608 "ffdhe8192" 00:21:11.608 ] 00:21:11.608 } 00:21:11.608 }, 00:21:11.608 { 00:21:11.608 "method": "bdev_nvme_attach_controller", 00:21:11.608 "params": { 00:21:11.608 "name": "TLSTEST", 00:21:11.608 "trtype": "TCP", 00:21:11.608 "adrfam": "IPv4", 00:21:11.608 "traddr": "10.0.0.2", 00:21:11.608 "trsvcid": "4420", 00:21:11.608 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:11.608 "prchk_reftag": false, 00:21:11.608 "prchk_guard": false, 00:21:11.608 "ctrlr_loss_timeout_sec": 0, 00:21:11.608 "reconnect_delay_sec": 0, 00:21:11.608 "fast_io_fail_timeout_sec": 0, 00:21:11.608 "psk": "/tmp/tmp.m2FgGTfm5f", 00:21:11.608 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:11.608 "hdgst": false, 00:21:11.608 "ddgst": false 00:21:11.608 } 00:21:11.608 }, 00:21:11.608 { 00:21:11.608 "method": "bdev_nvme_set_hotplug", 00:21:11.608 "params": { 00:21:11.608 "period_us": 100000, 00:21:11.608 "enable": false 00:21:11.608 } 00:21:11.608 }, 00:21:11.608 { 00:21:11.608 "method": "bdev_wait_for_examine" 00:21:11.608 } 00:21:11.608 ] 00:21:11.608 }, 00:21:11.608 { 00:21:11.608 "subsystem": "nbd", 00:21:11.608 "config": [] 00:21:11.608 } 00:21:11.608 ] 00:21:11.608 }' 00:21:11.866 [2024-04-15 18:08:00.600117] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:21:11.866 [2024-04-15 18:08:00.600205] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3350825 ] 00:21:11.866 EAL: No free 2048 kB hugepages reported on node 1 00:21:11.866 [2024-04-15 18:08:00.668805] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:11.866 [2024-04-15 18:08:00.761710] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:12.123 [2024-04-15 18:08:00.919378] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:12.123 [2024-04-15 18:08:00.919519] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:13.056 18:08:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:13.056 18:08:01 -- common/autotest_common.sh@850 -- # return 0 00:21:13.056 18:08:01 -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:13.056 Running I/O for 10 seconds... 
00:21:25.253 00:21:25.253 Latency(us) 00:21:25.253 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:25.253 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:25.253 Verification LBA range: start 0x0 length 0x2000 00:21:25.253 TLSTESTn1 : 10.05 2472.68 9.66 0.00 0.00 51638.98 7330.32 85827.89 00:21:25.253 =================================================================================================================== 00:21:25.253 Total : 2472.68 9.66 0.00 0.00 51638.98 7330.32 85827.89 00:21:25.253 0 00:21:25.253 18:08:12 -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:25.253 18:08:12 -- target/tls.sh@214 -- # killprocess 3350825 00:21:25.253 18:08:12 -- common/autotest_common.sh@936 -- # '[' -z 3350825 ']' 00:21:25.253 18:08:12 -- common/autotest_common.sh@940 -- # kill -0 3350825 00:21:25.253 18:08:12 -- common/autotest_common.sh@941 -- # uname 00:21:25.253 18:08:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:25.253 18:08:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3350825 00:21:25.253 18:08:12 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:21:25.253 18:08:12 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:21:25.253 18:08:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3350825' 00:21:25.253 killing process with pid 3350825 00:21:25.253 18:08:12 -- common/autotest_common.sh@955 -- # kill 3350825 00:21:25.253 Received shutdown signal, test time was about 10.000000 seconds 00:21:25.253 00:21:25.253 Latency(us) 00:21:25.253 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:25.253 =================================================================================================================== 00:21:25.253 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:25.253 [2024-04-15 18:08:12.083246] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:25.253 18:08:12 -- common/autotest_common.sh@960 -- # wait 3350825 00:21:25.253 18:08:12 -- target/tls.sh@215 -- # killprocess 3350672 00:21:25.253 18:08:12 -- common/autotest_common.sh@936 -- # '[' -z 3350672 ']' 00:21:25.253 18:08:12 -- common/autotest_common.sh@940 -- # kill -0 3350672 00:21:25.253 18:08:12 -- common/autotest_common.sh@941 -- # uname 00:21:25.253 18:08:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:25.253 18:08:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3350672 00:21:25.253 18:08:12 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:25.253 18:08:12 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:25.253 18:08:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3350672' 00:21:25.253 killing process with pid 3350672 00:21:25.253 18:08:12 -- common/autotest_common.sh@955 -- # kill 3350672 00:21:25.253 [2024-04-15 18:08:12.339632] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:25.253 18:08:12 -- common/autotest_common.sh@960 -- # wait 3350672 00:21:25.253 18:08:12 -- target/tls.sh@218 -- # nvmfappstart 00:21:25.253 18:08:12 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:25.253 18:08:12 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:25.253 18:08:12 -- common/autotest_common.sh@10 -- # set +x 00:21:25.253 18:08:12 -- 
nvmf/common.sh@470 -- # nvmfpid=3352154 00:21:25.253 18:08:12 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:25.253 18:08:12 -- nvmf/common.sh@471 -- # waitforlisten 3352154 00:21:25.253 18:08:12 -- common/autotest_common.sh@817 -- # '[' -z 3352154 ']' 00:21:25.253 18:08:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:25.253 18:08:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:25.253 18:08:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:25.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:25.253 18:08:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:25.253 18:08:12 -- common/autotest_common.sh@10 -- # set +x 00:21:25.253 [2024-04-15 18:08:12.659020] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:21:25.253 [2024-04-15 18:08:12.659144] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:25.253 EAL: No free 2048 kB hugepages reported on node 1 00:21:25.253 [2024-04-15 18:08:12.736371] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:25.253 [2024-04-15 18:08:12.833703] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:25.253 [2024-04-15 18:08:12.833767] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:25.253 [2024-04-15 18:08:12.833783] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:25.253 [2024-04-15 18:08:12.833799] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:25.253 [2024-04-15 18:08:12.833812] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
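The two killprocess teardowns a few lines up (pids 3350825 and 3350672) walk the same checks every time. Reconstructed from the xtrace fragments as a readable approximation (the verbatim helper lives in autotest_common.sh):

    # Sketch of the teardown pattern: validate the pid, refuse to signal a sudo
    # wrapper, then SIGTERM and reap so sockets and hugepages are released.
    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1                    # the '[' -z ... ']' guard
        kill -0 "$pid" || return 1                   # still running?
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        [ "$process_name" = sudo ] && return 1       # never kill a sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }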
00:21:25.254 [2024-04-15 18:08:12.833847] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:25.254 18:08:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:25.254 18:08:13 -- common/autotest_common.sh@850 -- # return 0 00:21:25.254 18:08:13 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:25.254 18:08:13 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:25.254 18:08:13 -- common/autotest_common.sh@10 -- # set +x 00:21:25.254 18:08:13 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:25.254 18:08:13 -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.m2FgGTfm5f 00:21:25.254 18:08:13 -- target/tls.sh@49 -- # local key=/tmp/tmp.m2FgGTfm5f 00:21:25.254 18:08:13 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:25.254 [2024-04-15 18:08:13.328422] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:25.254 18:08:13 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:25.254 18:08:13 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:25.513 [2024-04-15 18:08:14.206793] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:25.513 [2024-04-15 18:08:14.207074] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:25.513 18:08:14 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:26.080 malloc0 00:21:26.080 18:08:14 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:26.338 18:08:15 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.m2FgGTfm5f 00:21:26.597 [2024-04-15 18:08:15.313207] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:26.597 18:08:15 -- target/tls.sh@222 -- # bdevperf_pid=3352566 00:21:26.597 18:08:15 -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:26.597 18:08:15 -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:26.597 18:08:15 -- target/tls.sh@225 -- # waitforlisten 3352566 /var/tmp/bdevperf.sock 00:21:26.597 18:08:15 -- common/autotest_common.sh@817 -- # '[' -z 3352566 ']' 00:21:26.597 18:08:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:26.597 18:08:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:26.597 18:08:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:26.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
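Every pass re-provisions the target through the same setup_nvmf_tgt helper, and each RPC in it is visible verbatim in the trace above. Condensed, with the PSK file this run generated (/tmp/tmp.m2FgGTfm5f) standing in for a real key:

    RPC=scripts/rpc.py
    KEY=/tmp/tmp.m2FgGTfm5f                                   # test-generated PSK file
    $RPC nvmf_create_transport -t tcp -o                      # TCP transport
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
         -s SPDK00000000000001 -m 10                          # serial, up to 10 namespaces
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
         -t tcp -a 10.0.0.2 -s 4420 -k                        # -k marks the listener as TLS
    $RPC bdev_malloc_create 32 4096 -b malloc0                # 32 MiB RAM bdev, 4 KiB blocks
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
         nqn.2016-06.io.spdk:host1 --psk "$KEY"               # raw-path form, deprecated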
00:21:26.598 18:08:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:26.598 18:08:15 -- common/autotest_common.sh@10 -- # set +x 00:21:26.598 [2024-04-15 18:08:15.376320] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:21:26.598 [2024-04-15 18:08:15.376405] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3352566 ] 00:21:26.598 EAL: No free 2048 kB hugepages reported on node 1 00:21:26.598 [2024-04-15 18:08:15.447736] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:26.598 [2024-04-15 18:08:15.545019] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:26.856 18:08:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:26.856 18:08:15 -- common/autotest_common.sh@850 -- # return 0 00:21:26.856 18:08:15 -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.m2FgGTfm5f 00:21:27.115 18:08:15 -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:27.685 [2024-04-15 18:08:16.415507] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:27.685 nvme0n1 00:21:27.685 18:08:16 -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:27.944 Running I/O for 1 seconds... 
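This round replaces the deprecated --psk /path argument on the initiator side with the keyring: the key file is first registered under a name (key0), and bdev_nvme_attach_controller then references the key by that name, so no raw path is passed at attach time. Both calls appear verbatim in the trace:

    # Register the PSK with the initiator's keyring, then attach by key name.
    scripts/rpc.py -s /var/tmp/bdevperf.sock \
        keyring_file_add_key key0 /tmp/tmp.m2FgGTfm5f
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1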
00:21:28.878 00:21:28.878 Latency(us) 00:21:28.878 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:28.878 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:28.878 Verification LBA range: start 0x0 length 0x2000 00:21:28.878 nvme0n1 : 1.04 2654.36 10.37 0.00 0.00 47324.95 7184.69 67963.26 00:21:28.878 =================================================================================================================== 00:21:28.878 Total : 2654.36 10.37 0.00 0.00 47324.95 7184.69 67963.26 00:21:28.878 0 00:21:28.878 18:08:17 -- target/tls.sh@234 -- # killprocess 3352566 00:21:28.878 18:08:17 -- common/autotest_common.sh@936 -- # '[' -z 3352566 ']' 00:21:28.878 18:08:17 -- common/autotest_common.sh@940 -- # kill -0 3352566 00:21:28.879 18:08:17 -- common/autotest_common.sh@941 -- # uname 00:21:28.879 18:08:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:28.879 18:08:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3352566 00:21:29.172 18:08:17 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:29.172 18:08:17 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:29.172 18:08:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3352566' 00:21:29.172 killing process with pid 3352566 00:21:29.172 18:08:17 -- common/autotest_common.sh@955 -- # kill 3352566 00:21:29.172 Received shutdown signal, test time was about 1.000000 seconds 00:21:29.172 00:21:29.172 Latency(us) 00:21:29.172 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:29.172 =================================================================================================================== 00:21:29.172 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:29.172 18:08:17 -- common/autotest_common.sh@960 -- # wait 3352566 00:21:29.172 18:08:18 -- target/tls.sh@235 -- # killprocess 3352154 00:21:29.172 18:08:18 -- common/autotest_common.sh@936 -- # '[' -z 3352154 ']' 00:21:29.172 18:08:18 -- common/autotest_common.sh@940 -- # kill -0 3352154 00:21:29.172 18:08:18 -- common/autotest_common.sh@941 -- # uname 00:21:29.172 18:08:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:29.172 18:08:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3352154 00:21:29.172 18:08:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:29.172 18:08:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:29.172 18:08:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3352154' 00:21:29.172 killing process with pid 3352154 00:21:29.172 18:08:18 -- common/autotest_common.sh@955 -- # kill 3352154 00:21:29.172 [2024-04-15 18:08:18.100827] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:29.172 18:08:18 -- common/autotest_common.sh@960 -- # wait 3352154 00:21:29.432 18:08:18 -- target/tls.sh@238 -- # nvmfappstart 00:21:29.432 18:08:18 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:29.432 18:08:18 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:29.432 18:08:18 -- common/autotest_common.sh@10 -- # set +x 00:21:29.432 18:08:18 -- nvmf/common.sh@470 -- # nvmfpid=3352860 00:21:29.432 18:08:18 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:29.432 18:08:18 -- nvmf/common.sh@471 -- # waitforlisten 3352860 
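One way to read the verification summary a few lines up: the columns are runtime in seconds, IOPS, MiB/s, failed and timed-out I/O per second, then average/min/max latency in microseconds. Throughput should reproduce as IOPS times the 4 KiB I/O size, which checks out for the nvme0n1 row:

    # 2654.36 IOPS x 4096 B per I/O, converted to MiB/s
    awk 'BEGIN { printf "%.2f MiB/s\n", 2654.36 * 4096 / 1048576 }'   # -> 10.37 MiB/s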
00:21:29.432 18:08:18 -- common/autotest_common.sh@817 -- # '[' -z 3352860 ']' 00:21:29.432 18:08:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:29.432 18:08:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:29.432 18:08:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:29.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:29.432 18:08:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:29.432 18:08:18 -- common/autotest_common.sh@10 -- # set +x 00:21:29.692 [2024-04-15 18:08:18.419032] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:21:29.692 [2024-04-15 18:08:18.419146] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:29.692 EAL: No free 2048 kB hugepages reported on node 1 00:21:29.692 [2024-04-15 18:08:18.507422] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:29.692 [2024-04-15 18:08:18.601913] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:29.692 [2024-04-15 18:08:18.601985] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:29.692 [2024-04-15 18:08:18.602004] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:29.692 [2024-04-15 18:08:18.602018] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:29.692 [2024-04-15 18:08:18.602030] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
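The startup banner repeated above also documents the tracing hooks: the target runs with -e 0xFFFF, so every tracepoint group is armed and events can be inspected while the test is live or recovered after it exits, exactly as the notices suggest:

    # Live snapshot of the nvmf target's tracepoints (shm instance id 0)
    spdk_trace -s nvmf -i 0
    # Or preserve the shared-memory trace file for offline analysis
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0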
00:21:29.692 [2024-04-15 18:08:18.602075] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:29.951 18:08:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:29.951 18:08:18 -- common/autotest_common.sh@850 -- # return 0 00:21:29.951 18:08:18 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:29.951 18:08:18 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:29.951 18:08:18 -- common/autotest_common.sh@10 -- # set +x 00:21:29.951 18:08:18 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:29.951 18:08:18 -- target/tls.sh@239 -- # rpc_cmd 00:21:29.951 18:08:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:29.951 18:08:18 -- common/autotest_common.sh@10 -- # set +x 00:21:29.951 [2024-04-15 18:08:18.880469] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:29.951 malloc0 00:21:30.212 [2024-04-15 18:08:18.913319] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:30.212 [2024-04-15 18:08:18.913593] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:30.212 18:08:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:30.212 18:08:18 -- target/tls.sh@252 -- # bdevperf_pid=3352995 00:21:30.212 18:08:18 -- target/tls.sh@254 -- # waitforlisten 3352995 /var/tmp/bdevperf.sock 00:21:30.212 18:08:18 -- common/autotest_common.sh@817 -- # '[' -z 3352995 ']' 00:21:30.212 18:08:18 -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:30.212 18:08:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:30.212 18:08:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:30.212 18:08:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:30.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:30.212 18:08:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:30.212 18:08:18 -- common/autotest_common.sh@10 -- # set +x 00:21:30.212 [2024-04-15 18:08:18.987313] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
00:21:30.212 [2024-04-15 18:08:18.987398] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3352995 ] 00:21:30.212 EAL: No free 2048 kB hugepages reported on node 1 00:21:30.212 [2024-04-15 18:08:19.057067] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:30.212 [2024-04-15 18:08:19.152934] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:30.473 18:08:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:30.473 18:08:19 -- common/autotest_common.sh@850 -- # return 0 00:21:30.473 18:08:19 -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.m2FgGTfm5f 00:21:31.040 18:08:19 -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:31.298 [2024-04-15 18:08:20.137240] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:31.298 nvme0n1 00:21:31.298 18:08:20 -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:31.557 Running I/O for 1 seconds... 00:21:32.492 00:21:32.492 Latency(us) 00:21:32.492 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:32.492 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:32.492 Verification LBA range: start 0x0 length 0x2000 00:21:32.492 nvme0n1 : 1.05 2434.60 9.51 0.00 0.00 51384.26 6602.15 80002.47 00:21:32.492 =================================================================================================================== 00:21:32.492 Total : 2434.60 9.51 0.00 0.00 51384.26 6602.15 80002.47 00:21:32.492 0 00:21:32.492 18:08:21 -- target/tls.sh@263 -- # rpc_cmd save_config 00:21:32.492 18:08:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:32.492 18:08:21 -- common/autotest_common.sh@10 -- # set +x 00:21:32.749 18:08:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:32.749 18:08:21 -- target/tls.sh@263 -- # tgtcfg='{ 00:21:32.749 "subsystems": [ 00:21:32.749 { 00:21:32.749 "subsystem": "keyring", 00:21:32.749 "config": [ 00:21:32.750 { 00:21:32.750 "method": "keyring_file_add_key", 00:21:32.750 "params": { 00:21:32.750 "name": "key0", 00:21:32.750 "path": "/tmp/tmp.m2FgGTfm5f" 00:21:32.750 } 00:21:32.750 } 00:21:32.750 ] 00:21:32.750 }, 00:21:32.750 { 00:21:32.750 "subsystem": "iobuf", 00:21:32.750 "config": [ 00:21:32.750 { 00:21:32.750 "method": "iobuf_set_options", 00:21:32.750 "params": { 00:21:32.750 "small_pool_count": 8192, 00:21:32.750 "large_pool_count": 1024, 00:21:32.750 "small_bufsize": 8192, 00:21:32.750 "large_bufsize": 135168 00:21:32.750 } 00:21:32.750 } 00:21:32.750 ] 00:21:32.750 }, 00:21:32.750 { 00:21:32.750 "subsystem": "sock", 00:21:32.750 "config": [ 00:21:32.750 { 00:21:32.750 "method": "sock_impl_set_options", 00:21:32.750 "params": { 00:21:32.750 "impl_name": "posix", 00:21:32.750 "recv_buf_size": 2097152, 00:21:32.750 "send_buf_size": 2097152, 00:21:32.750 "enable_recv_pipe": true, 00:21:32.750 "enable_quickack": false, 00:21:32.750 "enable_placement_id": 0, 00:21:32.750 
"enable_zerocopy_send_server": true, 00:21:32.750 "enable_zerocopy_send_client": false, 00:21:32.750 "zerocopy_threshold": 0, 00:21:32.750 "tls_version": 0, 00:21:32.750 "enable_ktls": false 00:21:32.750 } 00:21:32.750 }, 00:21:32.750 { 00:21:32.750 "method": "sock_impl_set_options", 00:21:32.750 "params": { 00:21:32.750 "impl_name": "ssl", 00:21:32.750 "recv_buf_size": 4096, 00:21:32.750 "send_buf_size": 4096, 00:21:32.750 "enable_recv_pipe": true, 00:21:32.750 "enable_quickack": false, 00:21:32.750 "enable_placement_id": 0, 00:21:32.750 "enable_zerocopy_send_server": true, 00:21:32.750 "enable_zerocopy_send_client": false, 00:21:32.750 "zerocopy_threshold": 0, 00:21:32.750 "tls_version": 0, 00:21:32.750 "enable_ktls": false 00:21:32.750 } 00:21:32.750 } 00:21:32.750 ] 00:21:32.750 }, 00:21:32.750 { 00:21:32.750 "subsystem": "vmd", 00:21:32.750 "config": [] 00:21:32.750 }, 00:21:32.750 { 00:21:32.750 "subsystem": "accel", 00:21:32.750 "config": [ 00:21:32.750 { 00:21:32.750 "method": "accel_set_options", 00:21:32.750 "params": { 00:21:32.750 "small_cache_size": 128, 00:21:32.750 "large_cache_size": 16, 00:21:32.750 "task_count": 2048, 00:21:32.750 "sequence_count": 2048, 00:21:32.750 "buf_count": 2048 00:21:32.750 } 00:21:32.750 } 00:21:32.750 ] 00:21:32.750 }, 00:21:32.750 { 00:21:32.750 "subsystem": "bdev", 00:21:32.750 "config": [ 00:21:32.750 { 00:21:32.750 "method": "bdev_set_options", 00:21:32.750 "params": { 00:21:32.750 "bdev_io_pool_size": 65535, 00:21:32.750 "bdev_io_cache_size": 256, 00:21:32.750 "bdev_auto_examine": true, 00:21:32.750 "iobuf_small_cache_size": 128, 00:21:32.750 "iobuf_large_cache_size": 16 00:21:32.750 } 00:21:32.750 }, 00:21:32.750 { 00:21:32.750 "method": "bdev_raid_set_options", 00:21:32.750 "params": { 00:21:32.750 "process_window_size_kb": 1024 00:21:32.750 } 00:21:32.750 }, 00:21:32.750 { 00:21:32.750 "method": "bdev_iscsi_set_options", 00:21:32.750 "params": { 00:21:32.750 "timeout_sec": 30 00:21:32.750 } 00:21:32.750 }, 00:21:32.750 { 00:21:32.750 "method": "bdev_nvme_set_options", 00:21:32.750 "params": { 00:21:32.750 "action_on_timeout": "none", 00:21:32.750 "timeout_us": 0, 00:21:32.750 "timeout_admin_us": 0, 00:21:32.750 "keep_alive_timeout_ms": 10000, 00:21:32.750 "arbitration_burst": 0, 00:21:32.750 "low_priority_weight": 0, 00:21:32.750 "medium_priority_weight": 0, 00:21:32.750 "high_priority_weight": 0, 00:21:32.750 "nvme_adminq_poll_period_us": 10000, 00:21:32.750 "nvme_ioq_poll_period_us": 0, 00:21:32.750 "io_queue_requests": 0, 00:21:32.750 "delay_cmd_submit": true, 00:21:32.750 "transport_retry_count": 4, 00:21:32.750 "bdev_retry_count": 3, 00:21:32.750 "transport_ack_timeout": 0, 00:21:32.750 "ctrlr_loss_timeout_sec": 0, 00:21:32.750 "reconnect_delay_sec": 0, 00:21:32.750 "fast_io_fail_timeout_sec": 0, 00:21:32.750 "disable_auto_failback": false, 00:21:32.750 "generate_uuids": false, 00:21:32.750 "transport_tos": 0, 00:21:32.750 "nvme_error_stat": false, 00:21:32.750 "rdma_srq_size": 0, 00:21:32.750 "io_path_stat": false, 00:21:32.750 "allow_accel_sequence": false, 00:21:32.750 "rdma_max_cq_size": 0, 00:21:32.750 "rdma_cm_event_timeout_ms": 0, 00:21:32.750 "dhchap_digests": [ 00:21:32.750 "sha256", 00:21:32.750 "sha384", 00:21:32.750 "sha512" 00:21:32.750 ], 00:21:32.750 "dhchap_dhgroups": [ 00:21:32.750 "null", 00:21:32.750 "ffdhe2048", 00:21:32.750 "ffdhe3072", 00:21:32.750 "ffdhe4096", 00:21:32.750 "ffdhe6144", 00:21:32.750 "ffdhe8192" 00:21:32.750 ] 00:21:32.750 } 00:21:32.750 }, 00:21:32.750 { 00:21:32.750 "method": 
"bdev_nvme_set_hotplug", 00:21:32.750 "params": { 00:21:32.750 "period_us": 100000, 00:21:32.750 "enable": false 00:21:32.750 } 00:21:32.750 }, 00:21:32.750 { 00:21:32.750 "method": "bdev_malloc_create", 00:21:32.750 "params": { 00:21:32.750 "name": "malloc0", 00:21:32.750 "num_blocks": 8192, 00:21:32.750 "block_size": 4096, 00:21:32.750 "physical_block_size": 4096, 00:21:32.750 "uuid": "8a3c7c4e-d6cd-49c4-9a7a-6a9ef1a74f41", 00:21:32.750 "optimal_io_boundary": 0 00:21:32.750 } 00:21:32.750 }, 00:21:32.750 { 00:21:32.750 "method": "bdev_wait_for_examine" 00:21:32.750 } 00:21:32.750 ] 00:21:32.750 }, 00:21:32.750 { 00:21:32.750 "subsystem": "nbd", 00:21:32.750 "config": [] 00:21:32.750 }, 00:21:32.750 { 00:21:32.750 "subsystem": "scheduler", 00:21:32.750 "config": [ 00:21:32.750 { 00:21:32.750 "method": "framework_set_scheduler", 00:21:32.750 "params": { 00:21:32.750 "name": "static" 00:21:32.750 } 00:21:32.750 } 00:21:32.750 ] 00:21:32.750 }, 00:21:32.750 { 00:21:32.750 "subsystem": "nvmf", 00:21:32.750 "config": [ 00:21:32.750 { 00:21:32.750 "method": "nvmf_set_config", 00:21:32.750 "params": { 00:21:32.750 "discovery_filter": "match_any", 00:21:32.750 "admin_cmd_passthru": { 00:21:32.750 "identify_ctrlr": false 00:21:32.750 } 00:21:32.750 } 00:21:32.750 }, 00:21:32.750 { 00:21:32.750 "method": "nvmf_set_max_subsystems", 00:21:32.750 "params": { 00:21:32.750 "max_subsystems": 1024 00:21:32.750 } 00:21:32.750 }, 00:21:32.750 { 00:21:32.750 "method": "nvmf_set_crdt", 00:21:32.750 "params": { 00:21:32.750 "crdt1": 0, 00:21:32.750 "crdt2": 0, 00:21:32.750 "crdt3": 0 00:21:32.750 } 00:21:32.750 }, 00:21:32.750 { 00:21:32.750 "method": "nvmf_create_transport", 00:21:32.750 "params": { 00:21:32.750 "trtype": "TCP", 00:21:32.750 "max_queue_depth": 128, 00:21:32.750 "max_io_qpairs_per_ctrlr": 127, 00:21:32.750 "in_capsule_data_size": 4096, 00:21:32.750 "max_io_size": 131072, 00:21:32.750 "io_unit_size": 131072, 00:21:32.750 "max_aq_depth": 128, 00:21:32.750 "num_shared_buffers": 511, 00:21:32.750 "buf_cache_size": 4294967295, 00:21:32.750 "dif_insert_or_strip": false, 00:21:32.750 "zcopy": false, 00:21:32.750 "c2h_success": false, 00:21:32.750 "sock_priority": 0, 00:21:32.750 "abort_timeout_sec": 1, 00:21:32.750 "ack_timeout": 0 00:21:32.750 } 00:21:32.750 }, 00:21:32.750 { 00:21:32.750 "method": "nvmf_create_subsystem", 00:21:32.750 "params": { 00:21:32.750 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:32.750 "allow_any_host": false, 00:21:32.750 "serial_number": "00000000000000000000", 00:21:32.750 "model_number": "SPDK bdev Controller", 00:21:32.750 "max_namespaces": 32, 00:21:32.750 "min_cntlid": 1, 00:21:32.750 "max_cntlid": 65519, 00:21:32.750 "ana_reporting": false 00:21:32.750 } 00:21:32.750 }, 00:21:32.750 { 00:21:32.750 "method": "nvmf_subsystem_add_host", 00:21:32.750 "params": { 00:21:32.750 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:32.750 "host": "nqn.2016-06.io.spdk:host1", 00:21:32.750 "psk": "key0" 00:21:32.750 } 00:21:32.750 }, 00:21:32.750 { 00:21:32.750 "method": "nvmf_subsystem_add_ns", 00:21:32.750 "params": { 00:21:32.751 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:32.751 "namespace": { 00:21:32.751 "nsid": 1, 00:21:32.751 "bdev_name": "malloc0", 00:21:32.751 "nguid": "8A3C7C4ED6CD49C49A7A6A9EF1A74F41", 00:21:32.751 "uuid": "8a3c7c4e-d6cd-49c4-9a7a-6a9ef1a74f41", 00:21:32.751 "no_auto_visible": false 00:21:32.751 } 00:21:32.751 } 00:21:32.751 }, 00:21:32.751 { 00:21:32.751 "method": "nvmf_subsystem_add_listener", 00:21:32.751 "params": { 00:21:32.751 "nqn": 
"nqn.2016-06.io.spdk:cnode1", 00:21:32.751 "listen_address": { 00:21:32.751 "trtype": "TCP", 00:21:32.751 "adrfam": "IPv4", 00:21:32.751 "traddr": "10.0.0.2", 00:21:32.751 "trsvcid": "4420" 00:21:32.751 }, 00:21:32.751 "secure_channel": true 00:21:32.751 } 00:21:32.751 } 00:21:32.751 ] 00:21:32.751 } 00:21:32.751 ] 00:21:32.751 }' 00:21:32.751 18:08:21 -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:33.009 18:08:21 -- target/tls.sh@264 -- # bperfcfg='{ 00:21:33.009 "subsystems": [ 00:21:33.009 { 00:21:33.009 "subsystem": "keyring", 00:21:33.009 "config": [ 00:21:33.009 { 00:21:33.009 "method": "keyring_file_add_key", 00:21:33.009 "params": { 00:21:33.009 "name": "key0", 00:21:33.009 "path": "/tmp/tmp.m2FgGTfm5f" 00:21:33.009 } 00:21:33.009 } 00:21:33.009 ] 00:21:33.009 }, 00:21:33.009 { 00:21:33.009 "subsystem": "iobuf", 00:21:33.009 "config": [ 00:21:33.009 { 00:21:33.009 "method": "iobuf_set_options", 00:21:33.009 "params": { 00:21:33.009 "small_pool_count": 8192, 00:21:33.009 "large_pool_count": 1024, 00:21:33.009 "small_bufsize": 8192, 00:21:33.009 "large_bufsize": 135168 00:21:33.009 } 00:21:33.009 } 00:21:33.009 ] 00:21:33.009 }, 00:21:33.009 { 00:21:33.009 "subsystem": "sock", 00:21:33.009 "config": [ 00:21:33.009 { 00:21:33.009 "method": "sock_impl_set_options", 00:21:33.009 "params": { 00:21:33.009 "impl_name": "posix", 00:21:33.009 "recv_buf_size": 2097152, 00:21:33.009 "send_buf_size": 2097152, 00:21:33.009 "enable_recv_pipe": true, 00:21:33.009 "enable_quickack": false, 00:21:33.009 "enable_placement_id": 0, 00:21:33.009 "enable_zerocopy_send_server": true, 00:21:33.009 "enable_zerocopy_send_client": false, 00:21:33.009 "zerocopy_threshold": 0, 00:21:33.009 "tls_version": 0, 00:21:33.009 "enable_ktls": false 00:21:33.009 } 00:21:33.009 }, 00:21:33.009 { 00:21:33.009 "method": "sock_impl_set_options", 00:21:33.009 "params": { 00:21:33.009 "impl_name": "ssl", 00:21:33.009 "recv_buf_size": 4096, 00:21:33.009 "send_buf_size": 4096, 00:21:33.009 "enable_recv_pipe": true, 00:21:33.009 "enable_quickack": false, 00:21:33.009 "enable_placement_id": 0, 00:21:33.009 "enable_zerocopy_send_server": true, 00:21:33.009 "enable_zerocopy_send_client": false, 00:21:33.009 "zerocopy_threshold": 0, 00:21:33.009 "tls_version": 0, 00:21:33.009 "enable_ktls": false 00:21:33.009 } 00:21:33.009 } 00:21:33.009 ] 00:21:33.009 }, 00:21:33.009 { 00:21:33.009 "subsystem": "vmd", 00:21:33.009 "config": [] 00:21:33.009 }, 00:21:33.009 { 00:21:33.009 "subsystem": "accel", 00:21:33.009 "config": [ 00:21:33.009 { 00:21:33.009 "method": "accel_set_options", 00:21:33.009 "params": { 00:21:33.009 "small_cache_size": 128, 00:21:33.009 "large_cache_size": 16, 00:21:33.009 "task_count": 2048, 00:21:33.009 "sequence_count": 2048, 00:21:33.009 "buf_count": 2048 00:21:33.009 } 00:21:33.009 } 00:21:33.009 ] 00:21:33.009 }, 00:21:33.009 { 00:21:33.009 "subsystem": "bdev", 00:21:33.009 "config": [ 00:21:33.009 { 00:21:33.009 "method": "bdev_set_options", 00:21:33.009 "params": { 00:21:33.009 "bdev_io_pool_size": 65535, 00:21:33.009 "bdev_io_cache_size": 256, 00:21:33.009 "bdev_auto_examine": true, 00:21:33.009 "iobuf_small_cache_size": 128, 00:21:33.009 "iobuf_large_cache_size": 16 00:21:33.009 } 00:21:33.009 }, 00:21:33.009 { 00:21:33.009 "method": "bdev_raid_set_options", 00:21:33.009 "params": { 00:21:33.009 "process_window_size_kb": 1024 00:21:33.009 } 00:21:33.009 }, 00:21:33.009 { 00:21:33.009 "method": "bdev_iscsi_set_options", 
00:21:33.009 "params": { 00:21:33.009 "timeout_sec": 30 00:21:33.009 } 00:21:33.009 }, 00:21:33.009 { 00:21:33.009 "method": "bdev_nvme_set_options", 00:21:33.009 "params": { 00:21:33.009 "action_on_timeout": "none", 00:21:33.009 "timeout_us": 0, 00:21:33.009 "timeout_admin_us": 0, 00:21:33.009 "keep_alive_timeout_ms": 10000, 00:21:33.009 "arbitration_burst": 0, 00:21:33.009 "low_priority_weight": 0, 00:21:33.009 "medium_priority_weight": 0, 00:21:33.009 "high_priority_weight": 0, 00:21:33.009 "nvme_adminq_poll_period_us": 10000, 00:21:33.009 "nvme_ioq_poll_period_us": 0, 00:21:33.009 "io_queue_requests": 512, 00:21:33.009 "delay_cmd_submit": true, 00:21:33.009 "transport_retry_count": 4, 00:21:33.009 "bdev_retry_count": 3, 00:21:33.009 "transport_ack_timeout": 0, 00:21:33.009 "ctrlr_loss_timeout_sec": 0, 00:21:33.009 "reconnect_delay_sec": 0, 00:21:33.009 "fast_io_fail_timeout_sec": 0, 00:21:33.009 "disable_auto_failback": false, 00:21:33.009 "generate_uuids": false, 00:21:33.009 "transport_tos": 0, 00:21:33.009 "nvme_error_stat": false, 00:21:33.009 "rdma_srq_size": 0, 00:21:33.009 "io_path_stat": false, 00:21:33.009 "allow_accel_sequence": false, 00:21:33.009 "rdma_max_cq_size": 0, 00:21:33.009 "rdma_cm_event_timeout_ms": 0, 00:21:33.009 "dhchap_digests": [ 00:21:33.009 "sha256", 00:21:33.009 "sha384", 00:21:33.009 "sha512" 00:21:33.009 ], 00:21:33.009 "dhchap_dhgroups": [ 00:21:33.009 "null", 00:21:33.009 "ffdhe2048", 00:21:33.009 "ffdhe3072", 00:21:33.009 "ffdhe4096", 00:21:33.009 "ffdhe6144", 00:21:33.009 "ffdhe8192" 00:21:33.009 ] 00:21:33.009 } 00:21:33.009 }, 00:21:33.009 { 00:21:33.009 "method": "bdev_nvme_attach_controller", 00:21:33.009 "params": { 00:21:33.009 "name": "nvme0", 00:21:33.009 "trtype": "TCP", 00:21:33.009 "adrfam": "IPv4", 00:21:33.009 "traddr": "10.0.0.2", 00:21:33.009 "trsvcid": "4420", 00:21:33.009 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:33.009 "prchk_reftag": false, 00:21:33.009 "prchk_guard": false, 00:21:33.009 "ctrlr_loss_timeout_sec": 0, 00:21:33.009 "reconnect_delay_sec": 0, 00:21:33.009 "fast_io_fail_timeout_sec": 0, 00:21:33.009 "psk": "key0", 00:21:33.009 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:33.009 "hdgst": false, 00:21:33.009 "ddgst": false 00:21:33.009 } 00:21:33.009 }, 00:21:33.009 { 00:21:33.009 "method": "bdev_nvme_set_hotplug", 00:21:33.009 "params": { 00:21:33.009 "period_us": 100000, 00:21:33.009 "enable": false 00:21:33.009 } 00:21:33.009 }, 00:21:33.009 { 00:21:33.009 "method": "bdev_enable_histogram", 00:21:33.009 "params": { 00:21:33.009 "name": "nvme0n1", 00:21:33.009 "enable": true 00:21:33.010 } 00:21:33.010 }, 00:21:33.010 { 00:21:33.010 "method": "bdev_wait_for_examine" 00:21:33.010 } 00:21:33.010 ] 00:21:33.010 }, 00:21:33.010 { 00:21:33.010 "subsystem": "nbd", 00:21:33.010 "config": [] 00:21:33.010 } 00:21:33.010 ] 00:21:33.010 }' 00:21:33.010 18:08:21 -- target/tls.sh@266 -- # killprocess 3352995 00:21:33.010 18:08:21 -- common/autotest_common.sh@936 -- # '[' -z 3352995 ']' 00:21:33.010 18:08:21 -- common/autotest_common.sh@940 -- # kill -0 3352995 00:21:33.010 18:08:21 -- common/autotest_common.sh@941 -- # uname 00:21:33.010 18:08:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:33.010 18:08:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3352995 00:21:33.270 18:08:21 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:33.270 18:08:21 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:33.270 18:08:21 -- common/autotest_common.sh@954 -- # echo 
'killing process with pid 3352995' 00:21:33.270 killing process with pid 3352995 00:21:33.270 18:08:21 -- common/autotest_common.sh@955 -- # kill 3352995 00:21:33.270 Received shutdown signal, test time was about 1.000000 seconds 00:21:33.270 00:21:33.270 Latency(us) 00:21:33.270 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:33.270 =================================================================================================================== 00:21:33.270 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:33.270 18:08:21 -- common/autotest_common.sh@960 -- # wait 3352995 00:21:33.530 18:08:22 -- target/tls.sh@267 -- # killprocess 3352860 00:21:33.530 18:08:22 -- common/autotest_common.sh@936 -- # '[' -z 3352860 ']' 00:21:33.530 18:08:22 -- common/autotest_common.sh@940 -- # kill -0 3352860 00:21:33.530 18:08:22 -- common/autotest_common.sh@941 -- # uname 00:21:33.530 18:08:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:33.530 18:08:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3352860 00:21:33.530 18:08:22 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:33.530 18:08:22 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:33.530 18:08:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3352860' 00:21:33.530 killing process with pid 3352860 00:21:33.530 18:08:22 -- common/autotest_common.sh@955 -- # kill 3352860 00:21:33.530 18:08:22 -- common/autotest_common.sh@960 -- # wait 3352860 00:21:33.790 18:08:22 -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:21:33.790 18:08:22 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:33.790 18:08:22 -- target/tls.sh@269 -- # echo '{ 00:21:33.790 "subsystems": [ 00:21:33.790 { 00:21:33.790 "subsystem": "keyring", 00:21:33.790 "config": [ 00:21:33.790 { 00:21:33.790 "method": "keyring_file_add_key", 00:21:33.790 "params": { 00:21:33.790 "name": "key0", 00:21:33.790 "path": "/tmp/tmp.m2FgGTfm5f" 00:21:33.790 } 00:21:33.790 } 00:21:33.790 ] 00:21:33.790 }, 00:21:33.790 { 00:21:33.790 "subsystem": "iobuf", 00:21:33.790 "config": [ 00:21:33.790 { 00:21:33.790 "method": "iobuf_set_options", 00:21:33.790 "params": { 00:21:33.790 "small_pool_count": 8192, 00:21:33.790 "large_pool_count": 1024, 00:21:33.790 "small_bufsize": 8192, 00:21:33.790 "large_bufsize": 135168 00:21:33.790 } 00:21:33.790 } 00:21:33.790 ] 00:21:33.790 }, 00:21:33.790 { 00:21:33.790 "subsystem": "sock", 00:21:33.790 "config": [ 00:21:33.790 { 00:21:33.790 "method": "sock_impl_set_options", 00:21:33.790 "params": { 00:21:33.790 "impl_name": "posix", 00:21:33.790 "recv_buf_size": 2097152, 00:21:33.790 "send_buf_size": 2097152, 00:21:33.790 "enable_recv_pipe": true, 00:21:33.790 "enable_quickack": false, 00:21:33.790 "enable_placement_id": 0, 00:21:33.790 "enable_zerocopy_send_server": true, 00:21:33.790 "enable_zerocopy_send_client": false, 00:21:33.790 "zerocopy_threshold": 0, 00:21:33.790 "tls_version": 0, 00:21:33.790 "enable_ktls": false 00:21:33.790 } 00:21:33.790 }, 00:21:33.790 { 00:21:33.790 "method": "sock_impl_set_options", 00:21:33.790 "params": { 00:21:33.790 "impl_name": "ssl", 00:21:33.790 "recv_buf_size": 4096, 00:21:33.790 "send_buf_size": 4096, 00:21:33.790 "enable_recv_pipe": true, 00:21:33.790 "enable_quickack": false, 00:21:33.790 "enable_placement_id": 0, 00:21:33.790 "enable_zerocopy_send_server": true, 00:21:33.790 "enable_zerocopy_send_client": false, 00:21:33.790 "zerocopy_threshold": 0, 00:21:33.790 "tls_version": 0, 
00:21:33.790 "enable_ktls": false 00:21:33.790 } 00:21:33.790 } 00:21:33.790 ] 00:21:33.790 }, 00:21:33.790 { 00:21:33.790 "subsystem": "vmd", 00:21:33.790 "config": [] 00:21:33.790 }, 00:21:33.790 { 00:21:33.790 "subsystem": "accel", 00:21:33.790 "config": [ 00:21:33.790 { 00:21:33.790 "method": "accel_set_options", 00:21:33.790 "params": { 00:21:33.790 "small_cache_size": 128, 00:21:33.790 "large_cache_size": 16, 00:21:33.790 "task_count": 2048, 00:21:33.790 "sequence_count": 2048, 00:21:33.790 "buf_count": 2048 00:21:33.790 } 00:21:33.790 } 00:21:33.790 ] 00:21:33.790 }, 00:21:33.790 { 00:21:33.790 "subsystem": "bdev", 00:21:33.790 "config": [ 00:21:33.790 { 00:21:33.790 "method": "bdev_set_options", 00:21:33.790 "params": { 00:21:33.790 "bdev_io_pool_size": 65535, 00:21:33.790 "bdev_io_cache_size": 256, 00:21:33.790 "bdev_auto_examine": true, 00:21:33.790 "iobuf_small_cache_size": 128, 00:21:33.790 "iobuf_large_cache_size": 16 00:21:33.790 } 00:21:33.790 }, 00:21:33.790 { 00:21:33.790 "method": "bdev_raid_set_options", 00:21:33.790 "params": { 00:21:33.790 "process_window_size_kb": 1024 00:21:33.790 } 00:21:33.790 }, 00:21:33.790 { 00:21:33.790 "method": "bdev_iscsi_set_options", 00:21:33.790 "params": { 00:21:33.790 "timeout_sec": 30 00:21:33.790 } 00:21:33.790 }, 00:21:33.790 { 00:21:33.791 "method": "bdev_nvme_set_options", 00:21:33.791 "params": { 00:21:33.791 "action_on_timeout": "none", 00:21:33.791 "timeout_us": 0, 00:21:33.791 "timeout_admin_us": 0, 00:21:33.791 "keep_alive_timeout_ms": 10000, 00:21:33.791 "arbitration_burst": 0, 00:21:33.791 "low_priority_weight": 0, 00:21:33.791 "medium_priority_weight": 0, 00:21:33.791 "high_priority_weight": 0, 00:21:33.791 "nvme_adminq_poll_period_us": 10000, 00:21:33.791 "nvme_ioq_poll_period_us": 0, 00:21:33.791 "io_queue_requests": 0, 00:21:33.791 "delay_cmd_submit": true, 00:21:33.791 "transport_retry_count": 4, 00:21:33.791 "bdev_retry_count": 3, 00:21:33.791 "transport_ack_timeout": 0, 00:21:33.791 "ctrlr_loss_timeout_sec": 0, 00:21:33.791 "reconnect_delay_sec": 0, 00:21:33.791 "fast_io_fail_timeout_sec": 0, 00:21:33.791 "disable_auto_failback": false, 00:21:33.791 "generate_uuids": false, 00:21:33.791 "transport_tos": 0, 00:21:33.791 "nvme_error_stat": false, 00:21:33.791 "rdma_srq_size": 0, 00:21:33.791 "io_path_stat": false, 00:21:33.791 "allow_accel_sequence": false, 00:21:33.791 "rdma_max_cq_size": 0, 00:21:33.791 "rdma_cm_event_timeout_ms": 0, 00:21:33.791 "dhchap_digests": [ 00:21:33.791 "sha256", 00:21:33.791 "sha384", 00:21:33.791 "sha512" 00:21:33.791 ], 00:21:33.791 "dhchap_dhgroups": [ 00:21:33.791 "null", 00:21:33.791 "ffdhe2048", 00:21:33.791 "ffdhe3072", 00:21:33.791 "ffdhe4096", 00:21:33.791 "ffdhe6144", 00:21:33.791 "ffdhe8192" 00:21:33.791 ] 00:21:33.791 } 00:21:33.791 }, 00:21:33.791 { 00:21:33.791 "method": "bdev_nvme_set_hotplug", 00:21:33.791 "params": { 00:21:33.791 "period_us": 100000, 00:21:33.791 "enable": false 00:21:33.791 } 00:21:33.791 }, 00:21:33.791 { 00:21:33.791 "method": "bdev_malloc_create", 00:21:33.791 "params": { 00:21:33.791 "name": "malloc0", 00:21:33.791 "num_blocks": 8192, 00:21:33.791 "block_size": 4096, 00:21:33.791 "physical_block_size": 4096, 00:21:33.791 "uuid": "8a3c7c4e-d6cd-49c4-9a7a-6a9ef1a74f41", 00:21:33.791 "optimal_io_boundary": 0 00:21:33.791 } 00:21:33.791 }, 00:21:33.791 { 00:21:33.791 "method": "bdev_wait_for_examine" 00:21:33.791 } 00:21:33.791 ] 00:21:33.791 }, 00:21:33.791 { 00:21:33.791 "subsystem": "nbd", 00:21:33.791 "config": [] 00:21:33.791 }, 00:21:33.791 { 
00:21:33.791 "subsystem": "scheduler", 00:21:33.791 "config": [ 00:21:33.791 { 00:21:33.791 "method": "framework_set_scheduler", 00:21:33.791 "params": { 00:21:33.791 "name": "static" 00:21:33.791 } 00:21:33.791 } 00:21:33.791 ] 00:21:33.791 }, 00:21:33.791 { 00:21:33.791 "subsystem": "nvmf", 00:21:33.791 "config": [ 00:21:33.791 { 00:21:33.791 "method": "nvmf_set_config", 00:21:33.791 "params": { 00:21:33.791 "discovery_filter": "match_any", 00:21:33.791 "admin_cmd_passthru": { 00:21:33.791 "identify_ctrlr": false 00:21:33.791 } 00:21:33.791 } 00:21:33.791 }, 00:21:33.791 { 00:21:33.791 "method": "nvmf_set_max_subsystems", 00:21:33.791 "params": { 00:21:33.791 "max_subsystems": 1024 00:21:33.791 } 00:21:33.791 }, 00:21:33.791 { 00:21:33.791 "method": "nvmf_set_crdt", 00:21:33.791 "params": { 00:21:33.791 "crdt1": 0, 00:21:33.791 "crdt2": 0, 00:21:33.791 "crdt3": 0 00:21:33.791 } 00:21:33.791 }, 00:21:33.791 { 00:21:33.791 "method": "nvmf_create_transport", 00:21:33.791 "params": { 00:21:33.791 "trtype": "TCP", 00:21:33.791 "max_queue_depth": 128, 00:21:33.791 "max_io_qpairs_per_ctrlr": 127, 00:21:33.791 "in_capsule_data_size": 4096, 00:21:33.791 "max_io_size": 131072, 00:21:33.791 "io_unit_size": 131072, 00:21:33.791 "max_aq_depth": 128, 00:21:33.791 "num_shared_buffers": 511, 00:21:33.791 "buf_cache_size": 4294967295, 00:21:33.791 "dif_insert_or_strip": false, 00:21:33.791 "zcopy": false, 00:21:33.791 "c2h_success": false, 00:21:33.791 "sock_priority": 0, 00:21:33.791 "abort_timeout_sec": 1, 00:21:33.791 "ack_timeout": 0 00:21:33.791 } 00:21:33.791 }, 00:21:33.791 { 00:21:33.791 "method": "nvmf_create_subsystem", 00:21:33.791 "params": { 00:21:33.791 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:33.791 "allow_any_host": false, 00:21:33.791 "serial_number": "00000000000000000000", 00:21:33.791 "model_number": "SPDK bdev Controller", 00:21:33.791 "max_namespaces": 32, 00:21:33.791 "min_cntlid": 1, 00:21:33.791 "max_cntlid": 65519, 00:21:33.791 "ana_reporting": false 00:21:33.791 } 00:21:33.791 }, 00:21:33.791 { 00:21:33.791 "method": "nvmf_subsystem_add_host", 00:21:33.791 "params": { 00:21:33.791 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:33.791 "host": "nqn.2016-06.io.spdk:host1", 00:21:33.791 "psk": "key0" 00:21:33.791 } 00:21:33.791 }, 00:21:33.791 { 00:21:33.791 "method": "nvmf_subsystem_add_ns", 00:21:33.791 "params": { 00:21:33.791 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:33.791 "namespace": { 00:21:33.791 "nsid": 1, 00:21:33.791 "bdev_name": "malloc0", 00:21:33.791 "nguid": "8A3C7C4ED6CD49C49A7A6A9EF1A74F41", 00:21:33.791 "uuid": "8a3c7c4e-d6cd-49c4-9a7a-6a9ef1a74f41", 00:21:33.791 "no_auto_visible": false 00:21:33.791 } 00:21:33.791 } 00:21:33.791 }, 00:21:33.791 { 00:21:33.791 "method": "nvmf_subsystem_add_listener", 00:21:33.791 "params": { 00:21:33.791 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:33.791 "listen_address": { 00:21:33.791 "trtype": "TCP", 00:21:33.791 "adrfam": "IPv4", 00:21:33.791 "traddr": "10.0.0.2", 00:21:33.791 "trsvcid": "4420" 00:21:33.791 }, 00:21:33.791 "secure_channel": true 00:21:33.791 } 00:21:33.791 } 00:21:33.791 ] 00:21:33.791 } 00:21:33.791 ] 00:21:33.791 }' 00:21:33.791 18:08:22 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:33.791 18:08:22 -- common/autotest_common.sh@10 -- # set +x 00:21:33.791 18:08:22 -- nvmf/common.sh@470 -- # nvmfpid=3353413 00:21:33.791 18:08:22 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:21:33.791 
18:08:22 -- nvmf/common.sh@471 -- # waitforlisten 3353413 00:21:33.791 18:08:22 -- common/autotest_common.sh@817 -- # '[' -z 3353413 ']' 00:21:33.791 18:08:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:33.791 18:08:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:33.791 18:08:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:33.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:33.791 18:08:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:33.791 18:08:22 -- common/autotest_common.sh@10 -- # set +x 00:21:33.791 [2024-04-15 18:08:22.608359] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:21:33.791 [2024-04-15 18:08:22.608463] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:33.791 EAL: No free 2048 kB hugepages reported on node 1 00:21:33.791 [2024-04-15 18:08:22.684703] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:34.051 [2024-04-15 18:08:22.778793] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:34.051 [2024-04-15 18:08:22.778857] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:34.051 [2024-04-15 18:08:22.778874] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:34.051 [2024-04-15 18:08:22.778889] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:34.051 [2024-04-15 18:08:22.778903] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
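The notices above name /dev/shm/nvmf_trace.0 as the tracepoint buffer for shm id 0. The cleanup stage further down archives it with tar before unloading the modules; a sketch of that step, with $output_dir standing in as a placeholder for the job's output directory:

    # Collect any shm trace files left by the app (shm id 0 -> suffix .0)
    # and archive them for offline analysis with 'spdk_trace -s nvmf -i 0'.
    shm_files=$(find /dev/shm -name '*.0' -printf '%f\n')
    [ -n "$shm_files" ] && tar -C /dev/shm/ -cvzf "$output_dir/nvmf_trace.0_shm.tar.gz" $shm_files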
00:21:34.051 [2024-04-15 18:08:22.778997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:34.310 [2024-04-15 18:08:23.019490] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:34.310 [2024-04-15 18:08:23.051508] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:34.310 [2024-04-15 18:08:23.063261] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:34.310 18:08:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:34.310 18:08:23 -- common/autotest_common.sh@850 -- # return 0 00:21:34.310 18:08:23 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:34.310 18:08:23 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:34.310 18:08:23 -- common/autotest_common.sh@10 -- # set +x 00:21:34.310 18:08:23 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:34.310 18:08:23 -- target/tls.sh@272 -- # bdevperf_pid=3353504 00:21:34.310 18:08:23 -- target/tls.sh@273 -- # waitforlisten 3353504 /var/tmp/bdevperf.sock 00:21:34.310 18:08:23 -- common/autotest_common.sh@817 -- # '[' -z 3353504 ']' 00:21:34.310 18:08:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:34.310 18:08:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:34.310 18:08:23 -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:21:34.310 18:08:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:34.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
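This second bdevperf instance needs no rpc.py setup after startup: the JSON echoed below, fed in on /dev/fd/63, is the configuration captured from the first run via save_config, and it already carries the keyring_file_add_key and bdev_nvme_attach_controller entries. A sketch, assuming $bperfcfg holds that JSON:

    # Launch bdevperf pre-configured; the PSK and the TLS controller come
    # straight from the saved JSON rather than from post-start RPCs.
    "$SPDK_DIR/build/examples/bdevperf" -m 2 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4k -w verify -t 1 -c <(echo "$bperfcfg") &

    # Once it is up, the test only confirms the controller materialized,
    # then perform_tests drives the I/O as before.
    name=$("$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name')
    [ "$name" = nvme0 ]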
00:21:34.310 18:08:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:34.310 18:08:23 -- common/autotest_common.sh@10 -- # set +x 00:21:34.310 18:08:23 -- target/tls.sh@270 -- # echo '{ 00:21:34.310 "subsystems": [ 00:21:34.310 { 00:21:34.310 "subsystem": "keyring", 00:21:34.310 "config": [ 00:21:34.310 { 00:21:34.310 "method": "keyring_file_add_key", 00:21:34.310 "params": { 00:21:34.310 "name": "key0", 00:21:34.310 "path": "/tmp/tmp.m2FgGTfm5f" 00:21:34.310 } 00:21:34.310 } 00:21:34.310 ] 00:21:34.310 }, 00:21:34.310 { 00:21:34.310 "subsystem": "iobuf", 00:21:34.310 "config": [ 00:21:34.310 { 00:21:34.310 "method": "iobuf_set_options", 00:21:34.310 "params": { 00:21:34.310 "small_pool_count": 8192, 00:21:34.310 "large_pool_count": 1024, 00:21:34.310 "small_bufsize": 8192, 00:21:34.310 "large_bufsize": 135168 00:21:34.310 } 00:21:34.310 } 00:21:34.310 ] 00:21:34.310 }, 00:21:34.310 { 00:21:34.310 "subsystem": "sock", 00:21:34.310 "config": [ 00:21:34.310 { 00:21:34.310 "method": "sock_impl_set_options", 00:21:34.310 "params": { 00:21:34.310 "impl_name": "posix", 00:21:34.310 "recv_buf_size": 2097152, 00:21:34.310 "send_buf_size": 2097152, 00:21:34.310 "enable_recv_pipe": true, 00:21:34.310 "enable_quickack": false, 00:21:34.310 "enable_placement_id": 0, 00:21:34.310 "enable_zerocopy_send_server": true, 00:21:34.310 "enable_zerocopy_send_client": false, 00:21:34.310 "zerocopy_threshold": 0, 00:21:34.310 "tls_version": 0, 00:21:34.310 "enable_ktls": false 00:21:34.310 } 00:21:34.310 }, 00:21:34.310 { 00:21:34.310 "method": "sock_impl_set_options", 00:21:34.310 "params": { 00:21:34.310 "impl_name": "ssl", 00:21:34.310 "recv_buf_size": 4096, 00:21:34.310 "send_buf_size": 4096, 00:21:34.310 "enable_recv_pipe": true, 00:21:34.310 "enable_quickack": false, 00:21:34.310 "enable_placement_id": 0, 00:21:34.310 "enable_zerocopy_send_server": true, 00:21:34.310 "enable_zerocopy_send_client": false, 00:21:34.310 "zerocopy_threshold": 0, 00:21:34.310 "tls_version": 0, 00:21:34.310 "enable_ktls": false 00:21:34.310 } 00:21:34.310 } 00:21:34.310 ] 00:21:34.310 }, 00:21:34.310 { 00:21:34.310 "subsystem": "vmd", 00:21:34.310 "config": [] 00:21:34.310 }, 00:21:34.310 { 00:21:34.310 "subsystem": "accel", 00:21:34.310 "config": [ 00:21:34.310 { 00:21:34.310 "method": "accel_set_options", 00:21:34.310 "params": { 00:21:34.310 "small_cache_size": 128, 00:21:34.310 "large_cache_size": 16, 00:21:34.310 "task_count": 2048, 00:21:34.310 "sequence_count": 2048, 00:21:34.310 "buf_count": 2048 00:21:34.310 } 00:21:34.310 } 00:21:34.310 ] 00:21:34.310 }, 00:21:34.310 { 00:21:34.310 "subsystem": "bdev", 00:21:34.310 "config": [ 00:21:34.310 { 00:21:34.310 "method": "bdev_set_options", 00:21:34.310 "params": { 00:21:34.310 "bdev_io_pool_size": 65535, 00:21:34.310 "bdev_io_cache_size": 256, 00:21:34.310 "bdev_auto_examine": true, 00:21:34.310 "iobuf_small_cache_size": 128, 00:21:34.310 "iobuf_large_cache_size": 16 00:21:34.310 } 00:21:34.310 }, 00:21:34.310 { 00:21:34.310 "method": "bdev_raid_set_options", 00:21:34.310 "params": { 00:21:34.310 "process_window_size_kb": 1024 00:21:34.310 } 00:21:34.310 }, 00:21:34.310 { 00:21:34.310 "method": "bdev_iscsi_set_options", 00:21:34.310 "params": { 00:21:34.310 "timeout_sec": 30 00:21:34.310 } 00:21:34.310 }, 00:21:34.310 { 00:21:34.310 "method": "bdev_nvme_set_options", 00:21:34.310 "params": { 00:21:34.310 "action_on_timeout": "none", 00:21:34.310 "timeout_us": 0, 00:21:34.310 "timeout_admin_us": 0, 00:21:34.310 "keep_alive_timeout_ms": 10000, 00:21:34.310 
"arbitration_burst": 0, 00:21:34.310 "low_priority_weight": 0, 00:21:34.310 "medium_priority_weight": 0, 00:21:34.310 "high_priority_weight": 0, 00:21:34.310 "nvme_adminq_poll_period_us": 10000, 00:21:34.310 "nvme_ioq_poll_period_us": 0, 00:21:34.310 "io_queue_requests": 512, 00:21:34.310 "delay_cmd_submit": true, 00:21:34.310 "transport_retry_count": 4, 00:21:34.310 "bdev_retry_count": 3, 00:21:34.310 "transport_ack_timeout": 0, 00:21:34.310 "ctrlr_loss_timeout_sec": 0, 00:21:34.310 "reconnect_delay_sec": 0, 00:21:34.310 "fast_io_fail_timeout_sec": 0, 00:21:34.310 "disable_auto_failback": false, 00:21:34.310 "generate_uuids": false, 00:21:34.310 "transport_tos": 0, 00:21:34.310 "nvme_error_stat": false, 00:21:34.310 "rdma_srq_size": 0, 00:21:34.310 "io_path_stat": false, 00:21:34.310 "allow_accel_sequence": false, 00:21:34.310 "rdma_max_cq_size": 0, 00:21:34.310 "rdma_cm_event_timeout_ms": 0, 00:21:34.310 "dhchap_digests": [ 00:21:34.310 "sha256", 00:21:34.310 "sha384", 00:21:34.310 "sha512" 00:21:34.310 ], 00:21:34.310 "dhchap_dhgroups": [ 00:21:34.310 "null", 00:21:34.310 "ffdhe2048", 00:21:34.310 "ffdhe3072", 00:21:34.311 "ffdhe4096", 00:21:34.311 "ffdhe6144", 00:21:34.311 "ffdhe8192" 00:21:34.311 ] 00:21:34.311 } 00:21:34.311 }, 00:21:34.311 { 00:21:34.311 "method": "bdev_nvme_attach_controller", 00:21:34.311 "params": { 00:21:34.311 "name": "nvme0", 00:21:34.311 "trtype": "TCP", 00:21:34.311 "adrfam": "IPv4", 00:21:34.311 "traddr": "10.0.0.2", 00:21:34.311 "trsvcid": "4420", 00:21:34.311 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:34.311 "prchk_reftag": false, 00:21:34.311 "prchk_guard": false, 00:21:34.311 "ctrlr_loss_timeout_sec": 0, 00:21:34.311 "reconnect_delay_sec": 0, 00:21:34.311 "fast_io_fail_timeout_sec": 0, 00:21:34.311 "psk": "key0", 00:21:34.311 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:34.311 "hdgst": false, 00:21:34.311 "ddgst": false 00:21:34.311 } 00:21:34.311 }, 00:21:34.311 { 00:21:34.311 "method": "bdev_nvme_set_hotplug", 00:21:34.311 "params": { 00:21:34.311 "period_us": 100000, 00:21:34.311 "enable": false 00:21:34.311 } 00:21:34.311 }, 00:21:34.311 { 00:21:34.311 "method": "bdev_enable_histogram", 00:21:34.311 "params": { 00:21:34.311 "name": "nvme0n1", 00:21:34.311 "enable": true 00:21:34.311 } 00:21:34.311 }, 00:21:34.311 { 00:21:34.311 "method": "bdev_wait_for_examine" 00:21:34.311 } 00:21:34.311 ] 00:21:34.311 }, 00:21:34.311 { 00:21:34.311 "subsystem": "nbd", 00:21:34.311 "config": [] 00:21:34.311 } 00:21:34.311 ] 00:21:34.311 }' 00:21:34.311 [2024-04-15 18:08:23.159122] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
00:21:34.311 [2024-04-15 18:08:23.159203] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3353504 ] 00:21:34.311 EAL: No free 2048 kB hugepages reported on node 1 00:21:34.311 [2024-04-15 18:08:23.227518] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:34.570 [2024-04-15 18:08:23.318620] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:34.570 [2024-04-15 18:08:23.492916] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:34.829 18:08:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:34.829 18:08:23 -- common/autotest_common.sh@850 -- # return 0 00:21:34.829 18:08:23 -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:34.829 18:08:23 -- target/tls.sh@275 -- # jq -r '.[].name' 00:21:35.088 18:08:23 -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:35.088 18:08:23 -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:35.347 Running I/O for 1 seconds... 00:21:36.732 00:21:36.732 Latency(us) 00:21:36.732 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:36.732 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:36.732 Verification LBA range: start 0x0 length 0x2000 00:21:36.732 nvme0n1 : 1.07 1541.87 6.02 0.00 0.00 80814.55 7136.14 104080.88 00:21:36.732 =================================================================================================================== 00:21:36.732 Total : 1541.87 6.02 0.00 0.00 80814.55 7136.14 104080.88 00:21:36.732 0 00:21:36.732 18:08:25 -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:21:36.732 18:08:25 -- target/tls.sh@279 -- # cleanup 00:21:36.732 18:08:25 -- target/tls.sh@15 -- # process_shm --id 0 00:21:36.732 18:08:25 -- common/autotest_common.sh@794 -- # type=--id 00:21:36.732 18:08:25 -- common/autotest_common.sh@795 -- # id=0 00:21:36.732 18:08:25 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:21:36.732 18:08:25 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:36.732 18:08:25 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:21:36.732 18:08:25 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:21:36.732 18:08:25 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:21:36.732 18:08:25 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:36.732 nvmf_trace.0 00:21:36.732 18:08:25 -- common/autotest_common.sh@809 -- # return 0 00:21:36.732 18:08:25 -- target/tls.sh@16 -- # killprocess 3353504 00:21:36.732 18:08:25 -- common/autotest_common.sh@936 -- # '[' -z 3353504 ']' 00:21:36.732 18:08:25 -- common/autotest_common.sh@940 -- # kill -0 3353504 00:21:36.732 18:08:25 -- common/autotest_common.sh@941 -- # uname 00:21:36.732 18:08:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:36.732 18:08:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3353504 00:21:36.732 18:08:25 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:36.732 18:08:25 -- common/autotest_common.sh@946 -- # 
'[' reactor_1 = sudo ']' 00:21:36.732 18:08:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3353504' 00:21:36.732 killing process with pid 3353504 00:21:36.732 18:08:25 -- common/autotest_common.sh@955 -- # kill 3353504 00:21:36.732 Received shutdown signal, test time was about 1.000000 seconds 00:21:36.732 00:21:36.732 Latency(us) 00:21:36.732 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:36.732 =================================================================================================================== 00:21:36.732 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:36.732 18:08:25 -- common/autotest_common.sh@960 -- # wait 3353504 00:21:36.732 18:08:25 -- target/tls.sh@17 -- # nvmftestfini 00:21:36.732 18:08:25 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:36.732 18:08:25 -- nvmf/common.sh@117 -- # sync 00:21:36.732 18:08:25 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:36.733 18:08:25 -- nvmf/common.sh@120 -- # set +e 00:21:36.733 18:08:25 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:36.733 18:08:25 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:36.733 rmmod nvme_tcp 00:21:36.733 rmmod nvme_fabrics 00:21:36.733 rmmod nvme_keyring 00:21:36.733 18:08:25 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:36.733 18:08:25 -- nvmf/common.sh@124 -- # set -e 00:21:36.733 18:08:25 -- nvmf/common.sh@125 -- # return 0 00:21:36.733 18:08:25 -- nvmf/common.sh@478 -- # '[' -n 3353413 ']' 00:21:36.733 18:08:25 -- nvmf/common.sh@479 -- # killprocess 3353413 00:21:36.733 18:08:25 -- common/autotest_common.sh@936 -- # '[' -z 3353413 ']' 00:21:36.733 18:08:25 -- common/autotest_common.sh@940 -- # kill -0 3353413 00:21:36.733 18:08:25 -- common/autotest_common.sh@941 -- # uname 00:21:36.733 18:08:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:36.733 18:08:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3353413 00:21:36.993 18:08:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:36.993 18:08:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:36.993 18:08:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3353413' 00:21:36.993 killing process with pid 3353413 00:21:36.993 18:08:25 -- common/autotest_common.sh@955 -- # kill 3353413 00:21:36.993 18:08:25 -- common/autotest_common.sh@960 -- # wait 3353413 00:21:37.251 18:08:25 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:37.251 18:08:25 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:37.251 18:08:25 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:37.251 18:08:25 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:37.251 18:08:25 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:37.251 18:08:25 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:37.251 18:08:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:37.251 18:08:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:39.161 18:08:27 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:39.161 18:08:27 -- target/tls.sh@18 -- # rm -f /tmp/tmp.XBG1NwBBl2 /tmp/tmp.Mbd291ZiPe /tmp/tmp.m2FgGTfm5f 00:21:39.161 00:21:39.161 real 1m29.083s 00:21:39.161 user 2m24.713s 00:21:39.161 sys 0m33.681s 00:21:39.161 18:08:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:39.161 18:08:28 -- common/autotest_common.sh@10 -- # set +x 00:21:39.161 ************************************ 00:21:39.161 END TEST nvmf_tls 00:21:39.161 
************************************ 00:21:39.161 18:08:28 -- nvmf/nvmf.sh@61 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:39.161 18:08:28 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:39.161 18:08:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:39.161 18:08:28 -- common/autotest_common.sh@10 -- # set +x 00:21:39.420 ************************************ 00:21:39.420 START TEST nvmf_fips 00:21:39.420 ************************************ 00:21:39.420 18:08:28 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:39.420 * Looking for test storage... 00:21:39.420 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:21:39.420 18:08:28 -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:39.420 18:08:28 -- nvmf/common.sh@7 -- # uname -s 00:21:39.420 18:08:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:39.420 18:08:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:39.420 18:08:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:39.420 18:08:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:39.420 18:08:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:39.420 18:08:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:39.420 18:08:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:39.420 18:08:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:39.420 18:08:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:39.420 18:08:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:39.420 18:08:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:39.420 18:08:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:21:39.420 18:08:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:39.420 18:08:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:39.420 18:08:28 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:39.420 18:08:28 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:39.420 18:08:28 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:39.420 18:08:28 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:39.420 18:08:28 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:39.420 18:08:28 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:39.420 18:08:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:39.420 18:08:28 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:39.420 18:08:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:39.420 18:08:28 -- paths/export.sh@5 -- # export PATH 00:21:39.420 18:08:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:39.420 18:08:28 -- nvmf/common.sh@47 -- # : 0 00:21:39.420 18:08:28 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:39.420 18:08:28 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:39.420 18:08:28 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:39.420 18:08:28 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:39.420 18:08:28 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:39.420 18:08:28 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:39.420 18:08:28 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:39.420 18:08:28 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:39.420 18:08:28 -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:39.420 18:08:28 -- fips/fips.sh@89 -- # check_openssl_version 00:21:39.420 18:08:28 -- fips/fips.sh@83 -- # local target=3.0.0 00:21:39.420 18:08:28 -- fips/fips.sh@85 -- # openssl version 00:21:39.420 18:08:28 -- fips/fips.sh@85 -- # awk '{print $2}' 00:21:39.420 18:08:28 -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:21:39.420 18:08:28 -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:21:39.420 18:08:28 -- scripts/common.sh@330 -- # local ver1 ver1_l 00:21:39.420 18:08:28 -- scripts/common.sh@331 -- # local ver2 ver2_l 00:21:39.420 18:08:28 -- scripts/common.sh@333 -- # IFS=.-: 00:21:39.420 18:08:28 -- scripts/common.sh@333 -- # read -ra ver1 00:21:39.420 18:08:28 -- scripts/common.sh@334 -- # IFS=.-: 00:21:39.420 18:08:28 -- scripts/common.sh@334 -- # read -ra ver2 00:21:39.420 18:08:28 -- scripts/common.sh@335 -- # local 'op=>=' 00:21:39.420 18:08:28 -- scripts/common.sh@337 -- # ver1_l=3 00:21:39.420 18:08:28 -- scripts/common.sh@338 -- # ver2_l=3 00:21:39.420 18:08:28 -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 
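check_openssl_version requires OpenSSL 3.0.0 or newer; the cmp_versions helper entered here walks the dotted components one by one, and the trace of that walk continues below. A compact equivalent of the same ge check, assuming GNU sort with version-sort support:

    installed=$(openssl version | awk '{print $2}')   # 3.0.9 on this runner
    target=3.0.0
    # sort -V orders version strings; if target sorts first, installed >= target.
    if [ "$(printf '%s\n' "$target" "$installed" | sort -V | head -n1)" = "$target" ]; then
        echo "OpenSSL $installed >= $target"
    fi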
00:21:39.420 18:08:28 -- scripts/common.sh@341 -- # case "$op" in 00:21:39.420 18:08:28 -- scripts/common.sh@345 -- # : 1 00:21:39.420 18:08:28 -- scripts/common.sh@361 -- # (( v = 0 )) 00:21:39.420 18:08:28 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:39.420 18:08:28 -- scripts/common.sh@362 -- # decimal 3 00:21:39.420 18:08:28 -- scripts/common.sh@350 -- # local d=3 00:21:39.420 18:08:28 -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:39.420 18:08:28 -- scripts/common.sh@352 -- # echo 3 00:21:39.420 18:08:28 -- scripts/common.sh@362 -- # ver1[v]=3 00:21:39.420 18:08:28 -- scripts/common.sh@363 -- # decimal 3 00:21:39.420 18:08:28 -- scripts/common.sh@350 -- # local d=3 00:21:39.420 18:08:28 -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:39.420 18:08:28 -- scripts/common.sh@352 -- # echo 3 00:21:39.420 18:08:28 -- scripts/common.sh@363 -- # ver2[v]=3 00:21:39.420 18:08:28 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:39.420 18:08:28 -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:21:39.420 18:08:28 -- scripts/common.sh@361 -- # (( v++ )) 00:21:39.420 18:08:28 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:39.421 18:08:28 -- scripts/common.sh@362 -- # decimal 0 00:21:39.421 18:08:28 -- scripts/common.sh@350 -- # local d=0 00:21:39.421 18:08:28 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:39.421 18:08:28 -- scripts/common.sh@352 -- # echo 0 00:21:39.421 18:08:28 -- scripts/common.sh@362 -- # ver1[v]=0 00:21:39.421 18:08:28 -- scripts/common.sh@363 -- # decimal 0 00:21:39.421 18:08:28 -- scripts/common.sh@350 -- # local d=0 00:21:39.421 18:08:28 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:39.421 18:08:28 -- scripts/common.sh@352 -- # echo 0 00:21:39.421 18:08:28 -- scripts/common.sh@363 -- # ver2[v]=0 00:21:39.421 18:08:28 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:39.421 18:08:28 -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:21:39.421 18:08:28 -- scripts/common.sh@361 -- # (( v++ )) 00:21:39.421 18:08:28 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:39.421 18:08:28 -- scripts/common.sh@362 -- # decimal 9 00:21:39.421 18:08:28 -- scripts/common.sh@350 -- # local d=9 00:21:39.421 18:08:28 -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:21:39.421 18:08:28 -- scripts/common.sh@352 -- # echo 9 00:21:39.421 18:08:28 -- scripts/common.sh@362 -- # ver1[v]=9 00:21:39.421 18:08:28 -- scripts/common.sh@363 -- # decimal 0 00:21:39.421 18:08:28 -- scripts/common.sh@350 -- # local d=0 00:21:39.421 18:08:28 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:39.421 18:08:28 -- scripts/common.sh@352 -- # echo 0 00:21:39.421 18:08:28 -- scripts/common.sh@363 -- # ver2[v]=0 00:21:39.421 18:08:28 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:39.421 18:08:28 -- scripts/common.sh@364 -- # return 0 00:21:39.421 18:08:28 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:21:39.421 18:08:28 -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:21:39.421 18:08:28 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:21:39.421 18:08:28 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:21:39.421 18:08:28 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:21:39.421 18:08:28 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:21:39.421 18:08:28 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:21:39.421 18:08:28 -- fips/fips.sh@113 -- # build_openssl_config 00:21:39.421 18:08:28 -- fips/fips.sh@37 -- # cat 00:21:39.421 18:08:28 -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:21:39.421 18:08:28 -- fips/fips.sh@58 -- # cat - 00:21:39.421 18:08:28 -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:21:39.421 18:08:28 -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:21:39.421 18:08:28 -- fips/fips.sh@116 -- # mapfile -t providers 00:21:39.421 18:08:28 -- fips/fips.sh@116 -- # openssl list -providers 00:21:39.421 18:08:28 -- fips/fips.sh@116 -- # grep name 00:21:39.421 18:08:28 -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:21:39.421 18:08:28 -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:21:39.421 18:08:28 -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:21:39.421 18:08:28 -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:21:39.421 18:08:28 -- fips/fips.sh@127 -- # : 00:21:39.421 18:08:28 -- common/autotest_common.sh@638 -- # local es=0 00:21:39.421 18:08:28 -- common/autotest_common.sh@640 -- # valid_exec_arg openssl md5 /dev/fd/62 00:21:39.421 18:08:28 -- common/autotest_common.sh@626 -- # local arg=openssl 00:21:39.421 18:08:28 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:39.421 18:08:28 -- common/autotest_common.sh@630 -- # type -t openssl 00:21:39.421 18:08:28 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:39.421 18:08:28 -- common/autotest_common.sh@632 -- # type -P openssl 00:21:39.421 18:08:28 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:39.421 18:08:28 -- common/autotest_common.sh@632 -- # arg=/usr/bin/openssl 00:21:39.421 18:08:28 -- common/autotest_common.sh@632 -- # [[ -x /usr/bin/openssl ]] 00:21:39.421 18:08:28 -- common/autotest_common.sh@641 -- # openssl md5 /dev/fd/62 00:21:39.681 Error setting digest 00:21:39.681 00E2C4A4807F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:21:39.681 00E2C4A4807F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:21:39.681 18:08:28 -- common/autotest_common.sh@641 -- # es=1 00:21:39.681 18:08:28 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:21:39.681 18:08:28 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:21:39.681 18:08:28 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:21:39.681 18:08:28 -- fips/fips.sh@130 -- # nvmftestinit 00:21:39.681 18:08:28 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:39.681 18:08:28 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:39.681 18:08:28 -- nvmf/common.sh@437 -- # prepare_net_devs 
00:21:39.681 18:08:28 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:39.681 18:08:28 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:39.681 18:08:28 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:39.681 18:08:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:39.681 18:08:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:39.681 18:08:28 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:21:39.681 18:08:28 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:21:39.681 18:08:28 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:39.681 18:08:28 -- common/autotest_common.sh@10 -- # set +x 00:21:42.221 18:08:30 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:42.221 18:08:30 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:42.221 18:08:30 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:42.221 18:08:30 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:42.221 18:08:30 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:42.221 18:08:30 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:42.221 18:08:30 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:42.221 18:08:30 -- nvmf/common.sh@295 -- # net_devs=() 00:21:42.221 18:08:30 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:42.221 18:08:30 -- nvmf/common.sh@296 -- # e810=() 00:21:42.221 18:08:30 -- nvmf/common.sh@296 -- # local -ga e810 00:21:42.221 18:08:30 -- nvmf/common.sh@297 -- # x722=() 00:21:42.221 18:08:30 -- nvmf/common.sh@297 -- # local -ga x722 00:21:42.221 18:08:30 -- nvmf/common.sh@298 -- # mlx=() 00:21:42.221 18:08:30 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:42.221 18:08:30 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:42.221 18:08:30 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:42.221 18:08:30 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:42.221 18:08:30 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:42.221 18:08:30 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:42.221 18:08:30 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:42.221 18:08:30 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:42.221 18:08:30 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:42.221 18:08:30 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:42.221 18:08:30 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:42.221 18:08:30 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:42.222 18:08:30 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:42.222 18:08:30 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:42.222 18:08:30 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:42.222 18:08:30 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:42.222 18:08:30 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:42.222 18:08:30 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:42.222 18:08:30 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:42.222 18:08:30 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:21:42.222 Found 0000:84:00.0 (0x8086 - 0x159b) 00:21:42.222 18:08:30 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:42.222 18:08:30 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:42.222 18:08:30 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:42.222 18:08:30 -- nvmf/common.sh@351 
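The failed 'openssl md5' above is the expected outcome: with OPENSSL_CONF pointing at the generated spdk_fips.conf, only algorithms approved by the FIPS provider can be fetched, so MD5 has to error out before the TLS test proper (nvmftestinit) begins. A condensed sketch of the whole precheck, assuming spdk_fips.conf was produced by the build_openssl_config step traced earlier:

    export OPENSSL_CONF=spdk_fips.conf   # config written by build_openssl_config

    # Both the base and fips providers must be loaded.
    openssl list -providers | grep name

    # A non-approved digest has to fail; success here would mean FIPS
    # enforcement is not actually active.
    if echo test | openssl md5 >/dev/null 2>&1; then
        echo 'MD5 unexpectedly succeeded: FIPS mode not enforced' >&2
        exit 1
    fi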
-- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:42.222 18:08:30 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:42.222 18:08:30 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:42.222 18:08:30 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:21:42.222 Found 0000:84:00.1 (0x8086 - 0x159b) 00:21:42.222 18:08:30 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:42.222 18:08:30 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:42.222 18:08:30 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:42.222 18:08:30 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:42.222 18:08:30 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:42.222 18:08:30 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:42.222 18:08:30 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:42.222 18:08:30 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:42.222 18:08:30 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:42.222 18:08:30 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:42.222 18:08:30 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:42.222 18:08:30 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:42.222 18:08:30 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:21:42.222 Found net devices under 0000:84:00.0: cvl_0_0 00:21:42.222 18:08:30 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:42.222 18:08:30 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:42.222 18:08:30 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:42.222 18:08:30 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:42.222 18:08:30 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:42.222 18:08:30 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:21:42.222 Found net devices under 0000:84:00.1: cvl_0_1 00:21:42.222 18:08:30 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:42.222 18:08:30 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:21:42.222 18:08:30 -- nvmf/common.sh@403 -- # is_hw=yes 00:21:42.222 18:08:30 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:21:42.222 18:08:30 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:21:42.222 18:08:30 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:21:42.222 18:08:30 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:42.222 18:08:30 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:42.222 18:08:30 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:42.222 18:08:30 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:42.222 18:08:30 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:42.222 18:08:30 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:42.222 18:08:30 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:42.222 18:08:30 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:42.222 18:08:30 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:42.222 18:08:30 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:42.222 18:08:30 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:42.222 18:08:30 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:42.222 18:08:30 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:42.222 18:08:30 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:42.222 18:08:30 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:21:42.222 18:08:30 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:42.222 18:08:30 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:42.222 18:08:30 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:42.222 18:08:30 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:42.222 18:08:30 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:42.222 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:42.222 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms 00:21:42.222 00:21:42.222 --- 10.0.0.2 ping statistics --- 00:21:42.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:42.222 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:21:42.222 18:08:30 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:42.222 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:42.222 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms 00:21:42.222 00:21:42.222 --- 10.0.0.1 ping statistics --- 00:21:42.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:42.222 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:21:42.222 18:08:30 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:42.222 18:08:30 -- nvmf/common.sh@411 -- # return 0 00:21:42.222 18:08:30 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:42.222 18:08:30 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:42.222 18:08:30 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:42.222 18:08:30 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:42.222 18:08:30 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:42.222 18:08:30 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:42.222 18:08:30 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:42.222 18:08:30 -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:21:42.222 18:08:30 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:42.222 18:08:30 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:42.222 18:08:30 -- common/autotest_common.sh@10 -- # set +x 00:21:42.222 18:08:30 -- nvmf/common.sh@470 -- # nvmfpid=3355815 00:21:42.222 18:08:30 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:42.222 18:08:30 -- nvmf/common.sh@471 -- # waitforlisten 3355815 00:21:42.222 18:08:30 -- common/autotest_common.sh@817 -- # '[' -z 3355815 ']' 00:21:42.222 18:08:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:42.222 18:08:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:42.222 18:08:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:42.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:42.222 18:08:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:42.222 18:08:30 -- common/autotest_common.sh@10 -- # set +x 00:21:42.222 [2024-04-15 18:08:31.016978] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
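For orientation, the nvmf_tcp_init/nvmfappstart trace above reduces to a short recipe: move one port of the NIC pair into a private network namespace, address both ends, open TCP/4420, and boot nvmf_tgt inside the namespace. A condensed sketch using the commands from this run (the final wait loop is an approximation; the suite's waitforlisten actually polls the RPC socket through rpc.py):

NS=cvl_0_0_ns_spdk
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns add $NS
ip link set cvl_0_0 netns $NS                        # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root namespace
ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec $NS ip link set cvl_0_0 up
ip netns exec $NS ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ip netns exec $NS $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!
until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done  # stand-in for the suite's waitforlisten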
00:21:42.222 [2024-04-15 18:08:31.017079] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:42.222 EAL: No free 2048 kB hugepages reported on node 1 00:21:42.222 [2024-04-15 18:08:31.097067] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:42.482 [2024-04-15 18:08:31.193028] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:42.482 [2024-04-15 18:08:31.193109] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:42.482 [2024-04-15 18:08:31.193128] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:42.482 [2024-04-15 18:08:31.193142] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:42.482 [2024-04-15 18:08:31.193154] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:42.482 [2024-04-15 18:08:31.193185] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:42.482 18:08:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:42.482 18:08:31 -- common/autotest_common.sh@850 -- # return 0 00:21:42.482 18:08:31 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:42.482 18:08:31 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:42.482 18:08:31 -- common/autotest_common.sh@10 -- # set +x 00:21:42.482 18:08:31 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:42.482 18:08:31 -- fips/fips.sh@133 -- # trap cleanup EXIT 00:21:42.482 18:08:31 -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:42.482 18:08:31 -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:42.482 18:08:31 -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:42.482 18:08:31 -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:42.482 18:08:31 -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:42.482 18:08:31 -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:42.482 18:08:31 -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:43.051 [2024-04-15 18:08:31.902029] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:43.051 [2024-04-15 18:08:31.918020] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:43.051 [2024-04-15 18:08:31.918258] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:43.051 [2024-04-15 18:08:31.950686] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:43.051 malloc0 00:21:43.051 18:08:31 -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:43.051 18:08:31 -- fips/fips.sh@147 -- # bdevperf_pid=3355962 00:21:43.051 18:08:31 -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:43.051 18:08:31 -- 
fips/fips.sh@148 -- # waitforlisten 3355962 /var/tmp/bdevperf.sock 00:21:43.051 18:08:31 -- common/autotest_common.sh@817 -- # '[' -z 3355962 ']' 00:21:43.051 18:08:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:43.051 18:08:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:43.051 18:08:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:43.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:43.051 18:08:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:43.051 18:08:31 -- common/autotest_common.sh@10 -- # set +x 00:21:43.309 [2024-04-15 18:08:32.050672] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:21:43.309 [2024-04-15 18:08:32.050762] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3355962 ] 00:21:43.309 EAL: No free 2048 kB hugepages reported on node 1 00:21:43.309 [2024-04-15 18:08:32.120220] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:43.309 [2024-04-15 18:08:32.214539] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:43.569 18:08:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:43.569 18:08:32 -- common/autotest_common.sh@850 -- # return 0 00:21:43.569 18:08:32 -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:43.853 [2024-04-15 18:08:32.685580] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:43.853 [2024-04-15 18:08:32.685733] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:43.853 TLSTESTn1 00:21:43.853 18:08:32 -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:44.121 Running I/O for 10 seconds... 
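Before the I/O run above starts, fips.sh has already wired up the TLS side. Condensed into a sketch built only from commands visible in the trace (the target-side subsystem and listener configuration performed by setup_nvmf_tgt_conf over /var/tmp/spdk.sock is elided):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:
echo -n "$key" > key.txt && chmod 0600 key.txt       # PSK interchange format; keyfile must be 0600

$SPDK/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4096 -w verify -t 10 &                 # -z: idle until driven over its RPC socket
$SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key.txt
$SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests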
00:21:56.333
00:21:56.333 Latency(us)
00:21:56.333 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:56.333 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:21:56.333 Verification LBA range: start 0x0 length 0x2000
00:21:56.333 TLSTESTn1 : 10.05 2492.57 9.74 0.00 0.00 51218.25 9709.04 77672.30
00:21:56.333 ===================================================================================================================
00:21:56.333 Total : 2492.57 9.74 0.00 0.00 51218.25 9709.04 77672.30
00:21:56.333 0
00:21:56.333 18:08:43 -- fips/fips.sh@1 -- # cleanup 00:21:56.333 18:08:43 -- fips/fips.sh@15 -- # process_shm --id 0 00:21:56.333 18:08:43 -- common/autotest_common.sh@794 -- # type=--id 00:21:56.333 18:08:43 -- common/autotest_common.sh@795 -- # id=0 00:21:56.333 18:08:43 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:21:56.333 18:08:43 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:56.333 18:08:43 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:21:56.333 18:08:43 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:21:56.333 18:08:43 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:21:56.333 18:08:43 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:56.333 nvmf_trace.0 00:21:56.333 18:08:43 -- common/autotest_common.sh@809 -- # return 0 00:21:56.333 18:08:43 -- fips/fips.sh@16 -- # killprocess 3355962 00:21:56.333 18:08:43 -- common/autotest_common.sh@936 -- # '[' -z 3355962 ']' 00:21:56.333 18:08:43 -- common/autotest_common.sh@940 -- # kill -0 3355962 00:21:56.333 18:08:43 -- common/autotest_common.sh@941 -- # uname 00:21:56.333 18:08:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:56.333 18:08:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3355962 00:21:56.333 18:08:43 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:21:56.333 18:08:43 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:21:56.333 18:08:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3355962' 00:21:56.333 killing process with pid 3355962 00:21:56.333 18:08:43 -- common/autotest_common.sh@955 -- # kill 3355962
00:21:56.333 Received shutdown signal, test time was about 10.000000 seconds
00:21:56.333
00:21:56.333 Latency(us)
00:21:56.333 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:56.333 ===================================================================================================================
00:21:56.333 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:21:56.333 [2024-04-15 18:08:43.228492] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:56.333 18:08:43 -- common/autotest_common.sh@960 -- # wait 3355962 00:21:56.333 18:08:43 -- fips/fips.sh@17 -- # nvmftestfini 00:21:56.333 18:08:43 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:56.333 18:08:43 -- nvmf/common.sh@117 -- # sync 00:21:56.333 18:08:43 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:56.333 18:08:43 -- nvmf/common.sh@120 -- # set +e 00:21:56.333 18:08:43 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:56.333 18:08:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:56.333 rmmod nvme_tcp 00:21:56.333 rmmod nvme_fabrics 00:21:56.333 rmmod nvme_keyring
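The cleanup trap that fired above does two things worth noting before teardown: it archives the SPDK trace shared-memory file for offline analysis, and it kills bdevperf only after confirming the pid is still alive. A minimal sketch of that path ($pid and $out are stand-ins for the harness's own bookkeeping):

pid=3355962                                          # bdevperf pid from this run
out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
for n in $(find /dev/shm -name '*.0' -printf '%f\n'); do   # -> nvmf_trace.0
    tar -C /dev/shm/ -cvzf $out/${n}_shm.tar.gz $n
done
if kill -0 $pid 2>/dev/null; then
    echo "killing process with pid $pid"
    kill $pid && wait $pid                           # wait works here because the harness spawned it
fi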
18:08:43 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:56.333 18:08:43 -- nvmf/common.sh@124 -- # set -e 00:21:56.333 18:08:43 -- nvmf/common.sh@125 -- # return 0 00:21:56.333 18:08:43 -- nvmf/common.sh@478 -- # '[' -n 3355815 ']' 00:21:56.333 18:08:43 -- nvmf/common.sh@479 -- # killprocess 3355815 00:21:56.333 18:08:43 -- common/autotest_common.sh@936 -- # '[' -z 3355815 ']' 00:21:56.333 18:08:43 -- common/autotest_common.sh@940 -- # kill -0 3355815 00:21:56.333 18:08:43 -- common/autotest_common.sh@941 -- # uname 00:21:56.333 18:08:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:56.333 18:08:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3355815 00:21:56.333 18:08:43 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:56.333 18:08:43 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:56.333 18:08:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3355815' 00:21:56.333 killing process with pid 3355815 00:21:56.333 18:08:43 -- common/autotest_common.sh@955 -- # kill 3355815 00:21:56.333 [2024-04-15 18:08:43.565207] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:56.333 18:08:43 -- common/autotest_common.sh@960 -- # wait 3355815 00:21:56.333 18:08:43 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:56.333 18:08:43 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:56.333 18:08:43 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:56.333 18:08:43 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:56.333 18:08:43 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:56.333 18:08:43 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:56.333 18:08:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:56.333 18:08:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:57.269 18:08:45 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:57.269 18:08:45 -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:57.269 00:21:57.269 real 0m17.729s 00:21:57.269 user 0m21.029s 00:21:57.269 sys 0m8.021s 00:21:57.269 18:08:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:57.269 18:08:45 -- common/autotest_common.sh@10 -- # set +x 00:21:57.269 ************************************ 00:21:57.269 END TEST nvmf_fips 00:21:57.269 ************************************ 00:21:57.269 18:08:45 -- nvmf/nvmf.sh@64 -- # '[' 1 -eq 1 ']' 00:21:57.269 18:08:45 -- nvmf/nvmf.sh@65 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:21:57.269 18:08:45 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:57.269 18:08:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:57.269 18:08:45 -- common/autotest_common.sh@10 -- # set +x 00:21:57.269 ************************************ 00:21:57.269 START TEST nvmf_fuzz 00:21:57.269 ************************************ 00:21:57.269 18:08:45 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:21:57.269 * Looking for test storage... 
00:21:57.269 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:57.269 18:08:46 -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:57.269 18:08:46 -- nvmf/common.sh@7 -- # uname -s 00:21:57.269 18:08:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:57.269 18:08:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:57.269 18:08:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:57.269 18:08:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:57.269 18:08:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:57.269 18:08:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:57.269 18:08:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:57.269 18:08:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:57.269 18:08:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:57.269 18:08:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:57.269 18:08:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:57.269 18:08:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:21:57.269 18:08:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:57.269 18:08:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:57.269 18:08:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:57.269 18:08:46 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:57.269 18:08:46 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:57.269 18:08:46 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:57.269 18:08:46 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:57.269 18:08:46 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:57.269 18:08:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.269 18:08:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.269 18:08:46 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.269 18:08:46 -- paths/export.sh@5 -- # export PATH 00:21:57.269 18:08:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.269 18:08:46 -- nvmf/common.sh@47 -- # : 0 00:21:57.269 18:08:46 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:57.269 18:08:46 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:57.269 18:08:46 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:57.269 18:08:46 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:57.269 18:08:46 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:57.269 18:08:46 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:57.269 18:08:46 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:57.269 18:08:46 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:57.269 18:08:46 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:21:57.269 18:08:46 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:57.269 18:08:46 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:57.269 18:08:46 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:57.269 18:08:46 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:57.269 18:08:46 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:57.269 18:08:46 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:57.269 18:08:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:57.269 18:08:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:57.269 18:08:46 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:21:57.269 18:08:46 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:21:57.269 18:08:46 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:57.269 18:08:46 -- common/autotest_common.sh@10 -- # set +x 00:21:59.802 18:08:48 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:59.802 18:08:48 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:59.802 18:08:48 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:59.802 18:08:48 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:59.802 18:08:48 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:59.802 18:08:48 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:59.802 18:08:48 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:59.802 18:08:48 -- nvmf/common.sh@295 -- # net_devs=() 00:21:59.802 18:08:48 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:59.802 18:08:48 -- nvmf/common.sh@296 -- # e810=() 00:21:59.802 18:08:48 -- nvmf/common.sh@296 -- # local -ga e810 00:21:59.802 18:08:48 -- nvmf/common.sh@297 -- # x722=() 
00:21:59.802 18:08:48 -- nvmf/common.sh@297 -- # local -ga x722 00:21:59.802 18:08:48 -- nvmf/common.sh@298 -- # mlx=() 00:21:59.802 18:08:48 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:59.802 18:08:48 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:59.803 18:08:48 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:59.803 18:08:48 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:59.803 18:08:48 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:59.803 18:08:48 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:59.803 18:08:48 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:59.803 18:08:48 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:59.803 18:08:48 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:59.803 18:08:48 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:59.803 18:08:48 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:59.803 18:08:48 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:59.803 18:08:48 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:59.803 18:08:48 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:59.803 18:08:48 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:59.803 18:08:48 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:59.803 18:08:48 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:59.803 18:08:48 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:59.803 18:08:48 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:59.803 18:08:48 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:21:59.803 Found 0000:84:00.0 (0x8086 - 0x159b) 00:21:59.803 18:08:48 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:59.803 18:08:48 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:59.803 18:08:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:59.803 18:08:48 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:59.803 18:08:48 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:59.803 18:08:48 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:59.803 18:08:48 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:21:59.803 Found 0000:84:00.1 (0x8086 - 0x159b) 00:21:59.803 18:08:48 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:59.803 18:08:48 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:59.803 18:08:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:59.803 18:08:48 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:59.803 18:08:48 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:59.803 18:08:48 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:59.803 18:08:48 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:59.803 18:08:48 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:59.803 18:08:48 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:59.803 18:08:48 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:59.803 18:08:48 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:59.803 18:08:48 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:59.803 18:08:48 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:21:59.803 Found net devices under 0000:84:00.0: cvl_0_0 00:21:59.803 18:08:48 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 
00:21:59.803 18:08:48 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:59.803 18:08:48 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:59.803 18:08:48 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:59.803 18:08:48 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:59.803 18:08:48 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:21:59.803 Found net devices under 0000:84:00.1: cvl_0_1 00:21:59.803 18:08:48 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:59.803 18:08:48 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:21:59.803 18:08:48 -- nvmf/common.sh@403 -- # is_hw=yes 00:21:59.803 18:08:48 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:21:59.803 18:08:48 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:21:59.803 18:08:48 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:21:59.803 18:08:48 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:59.803 18:08:48 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:59.803 18:08:48 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:59.803 18:08:48 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:59.803 18:08:48 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:59.803 18:08:48 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:59.803 18:08:48 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:59.803 18:08:48 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:59.803 18:08:48 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:59.803 18:08:48 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:59.803 18:08:48 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:59.803 18:08:48 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:59.803 18:08:48 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:59.803 18:08:48 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:59.803 18:08:48 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:59.803 18:08:48 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:59.803 18:08:48 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:59.803 18:08:48 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:59.803 18:08:48 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:59.803 18:08:48 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:59.803 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:59.803 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.242 ms 00:21:59.803 00:21:59.803 --- 10.0.0.2 ping statistics --- 00:21:59.803 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:59.803 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:21:59.803 18:08:48 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:59.803 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:59.803 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:21:59.803 00:21:59.803 --- 10.0.0.1 ping statistics --- 00:21:59.803 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:59.803 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:21:59.803 18:08:48 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:59.803 18:08:48 -- nvmf/common.sh@411 -- # return 0 00:21:59.803 18:08:48 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:59.803 18:08:48 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:59.803 18:08:48 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:59.803 18:08:48 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:59.803 18:08:48 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:59.803 18:08:48 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:59.803 18:08:48 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:59.803 18:08:48 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=3359357 00:21:59.803 18:08:48 -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:59.803 18:08:48 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:21:59.803 18:08:48 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 3359357 00:21:59.803 18:08:48 -- common/autotest_common.sh@817 -- # '[' -z 3359357 ']' 00:21:59.803 18:08:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:59.803 18:08:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:59.803 18:08:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:59.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
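The fabrics fuzz pass that follows is brief: create the TCP transport, back one subsystem with a 64 MiB malloc bdev, listen on 4420, then aim nvme_fuzz at the resulting transport ID for 30 seconds with a fixed seed. Written out as a sketch (rpc.py stands in for the suite's rpc_cmd wrapper, which talks to /var/tmp/spdk.sock; the fuzzer flags are copied verbatim from the trace):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
$SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
$SPDK/scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
$SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420'
$SPDK/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F "$trid" -N -a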
00:21:59.803 18:08:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:59.803 18:08:48 -- common/autotest_common.sh@10 -- # set +x 00:22:00.062 18:08:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:00.062 18:08:48 -- common/autotest_common.sh@850 -- # return 0 00:22:00.062 18:08:48 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:00.062 18:08:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:00.062 18:08:48 -- common/autotest_common.sh@10 -- # set +x 00:22:00.062 18:08:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:00.062 18:08:48 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:22:00.062 18:08:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:00.062 18:08:48 -- common/autotest_common.sh@10 -- # set +x 00:22:00.320 Malloc0 00:22:00.320 18:08:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:00.320 18:08:49 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:00.320 18:08:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:00.320 18:08:49 -- common/autotest_common.sh@10 -- # set +x 00:22:00.320 18:08:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:00.320 18:08:49 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:00.320 18:08:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:00.320 18:08:49 -- common/autotest_common.sh@10 -- # set +x 00:22:00.320 18:08:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:00.320 18:08:49 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:00.320 18:08:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:00.320 18:08:49 -- common/autotest_common.sh@10 -- # set +x 00:22:00.320 18:08:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:00.320 18:08:49 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:22:00.320 18:08:49 -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:22:32.399 Fuzzing completed. Shutting down the fuzz application 00:22:32.399 00:22:32.399 Dumping successful admin opcodes: 00:22:32.399 8, 9, 10, 24, 00:22:32.399 Dumping successful io opcodes: 00:22:32.399 0, 9, 00:22:32.399 NS: 0x200003aeff00 I/O qp, Total commands completed: 434450, total successful commands: 2540, random_seed: 3649000128 00:22:32.399 NS: 0x200003aeff00 admin qp, Total commands completed: 51552, total successful commands: 415, random_seed: 3864571648 00:22:32.399 18:09:20 -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:22:32.657 Fuzzing completed. 
Shutting down the fuzz application 00:22:32.657 00:22:32.657 Dumping successful admin opcodes: 00:22:32.657 24, 00:22:32.657 Dumping successful io opcodes: 00:22:32.657 00:22:32.657 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 1989832750 00:22:32.657 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 1989952086 00:22:32.657 18:09:21 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:32.657 18:09:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:32.657 18:09:21 -- common/autotest_common.sh@10 -- # set +x 00:22:32.657 18:09:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:32.657 18:09:21 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:22:32.657 18:09:21 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:22:32.657 18:09:21 -- nvmf/common.sh@477 -- # nvmfcleanup 00:22:32.657 18:09:21 -- nvmf/common.sh@117 -- # sync 00:22:32.657 18:09:21 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:32.657 18:09:21 -- nvmf/common.sh@120 -- # set +e 00:22:32.657 18:09:21 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:32.657 18:09:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:32.657 rmmod nvme_tcp 00:22:32.657 rmmod nvme_fabrics 00:22:32.657 rmmod nvme_keyring 00:22:32.657 18:09:21 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:32.657 18:09:21 -- nvmf/common.sh@124 -- # set -e 00:22:32.657 18:09:21 -- nvmf/common.sh@125 -- # return 0 00:22:32.657 18:09:21 -- nvmf/common.sh@478 -- # '[' -n 3359357 ']' 00:22:32.658 18:09:21 -- nvmf/common.sh@479 -- # killprocess 3359357 00:22:32.658 18:09:21 -- common/autotest_common.sh@936 -- # '[' -z 3359357 ']' 00:22:32.658 18:09:21 -- common/autotest_common.sh@940 -- # kill -0 3359357 00:22:32.658 18:09:21 -- common/autotest_common.sh@941 -- # uname 00:22:32.916 18:09:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:32.916 18:09:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3359357 00:22:32.916 18:09:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:32.916 18:09:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:32.916 18:09:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3359357' 00:22:32.916 killing process with pid 3359357 00:22:32.916 18:09:21 -- common/autotest_common.sh@955 -- # kill 3359357 00:22:32.916 18:09:21 -- common/autotest_common.sh@960 -- # wait 3359357 00:22:33.174 18:09:21 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:22:33.174 18:09:21 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:22:33.174 18:09:21 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:22:33.174 18:09:21 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:33.174 18:09:21 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:33.174 18:09:21 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:33.174 18:09:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:33.174 18:09:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:35.106 18:09:23 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:35.106 18:09:23 -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:22:35.106 00:22:35.106 real 0m37.982s 00:22:35.106 user 0m51.077s 00:22:35.106 sys 
0m15.961s 00:22:35.106 18:09:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:35.106 18:09:23 -- common/autotest_common.sh@10 -- # set +x 00:22:35.106 ************************************ 00:22:35.107 END TEST nvmf_fuzz 00:22:35.107 ************************************ 00:22:35.107 18:09:24 -- nvmf/nvmf.sh@66 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:22:35.107 18:09:24 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:35.107 18:09:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:35.107 18:09:24 -- common/autotest_common.sh@10 -- # set +x 00:22:35.365 ************************************ 00:22:35.365 START TEST nvmf_multiconnection 00:22:35.365 ************************************ 00:22:35.365 18:09:24 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:22:35.365 * Looking for test storage... 00:22:35.365 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:35.365 18:09:24 -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:35.365 18:09:24 -- nvmf/common.sh@7 -- # uname -s 00:22:35.365 18:09:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:35.365 18:09:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:35.365 18:09:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:35.365 18:09:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:35.365 18:09:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:35.365 18:09:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:35.365 18:09:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:35.365 18:09:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:35.365 18:09:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:35.365 18:09:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:35.365 18:09:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:35.365 18:09:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:22:35.365 18:09:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:35.365 18:09:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:35.365 18:09:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:35.365 18:09:24 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:35.365 18:09:24 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:35.365 18:09:24 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:35.365 18:09:24 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:35.365 18:09:24 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:35.365 18:09:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:22:35.365 18:09:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.365 18:09:24 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.365 18:09:24 -- paths/export.sh@5 -- # export PATH 00:22:35.365 18:09:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.365 18:09:24 -- nvmf/common.sh@47 -- # : 0 00:22:35.365 18:09:24 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:35.365 18:09:24 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:35.365 18:09:24 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:35.365 18:09:24 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:35.365 18:09:24 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:35.365 18:09:24 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:35.365 18:09:24 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:35.365 18:09:24 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:35.365 18:09:24 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:35.365 18:09:24 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:35.365 18:09:24 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:22:35.365 18:09:24 -- target/multiconnection.sh@16 -- # nvmftestinit 00:22:35.366 18:09:24 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:22:35.366 18:09:24 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:35.366 18:09:24 -- nvmf/common.sh@437 -- # prepare_net_devs 00:22:35.366 18:09:24 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:22:35.366 18:09:24 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:22:35.366 18:09:24 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:35.366 18:09:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:35.366 18:09:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:35.366 18:09:24 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:22:35.366 18:09:24 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:22:35.366 18:09:24 -- nvmf/common.sh@285 -- # xtrace_disable 
00:22:35.366 18:09:24 -- common/autotest_common.sh@10 -- # set +x 00:22:37.895 18:09:26 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:37.895 18:09:26 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:37.895 18:09:26 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:37.895 18:09:26 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:37.895 18:09:26 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:37.895 18:09:26 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:37.895 18:09:26 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:37.895 18:09:26 -- nvmf/common.sh@295 -- # net_devs=() 00:22:37.895 18:09:26 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:37.895 18:09:26 -- nvmf/common.sh@296 -- # e810=() 00:22:37.895 18:09:26 -- nvmf/common.sh@296 -- # local -ga e810 00:22:37.895 18:09:26 -- nvmf/common.sh@297 -- # x722=() 00:22:37.895 18:09:26 -- nvmf/common.sh@297 -- # local -ga x722 00:22:37.895 18:09:26 -- nvmf/common.sh@298 -- # mlx=() 00:22:37.895 18:09:26 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:37.896 18:09:26 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:37.896 18:09:26 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:37.896 18:09:26 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:37.896 18:09:26 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:37.896 18:09:26 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:37.896 18:09:26 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:37.896 18:09:26 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:37.896 18:09:26 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:37.896 18:09:26 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:37.896 18:09:26 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:37.896 18:09:26 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:37.896 18:09:26 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:37.896 18:09:26 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:37.896 18:09:26 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:37.896 18:09:26 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:37.896 18:09:26 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:37.896 18:09:26 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:37.896 18:09:26 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:37.896 18:09:26 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:22:37.896 Found 0000:84:00.0 (0x8086 - 0x159b) 00:22:37.896 18:09:26 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:37.896 18:09:26 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:37.896 18:09:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:37.896 18:09:26 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:37.896 18:09:26 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:37.896 18:09:26 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:37.896 18:09:26 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:22:37.896 Found 0000:84:00.1 (0x8086 - 0x159b) 00:22:37.896 18:09:26 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:37.896 18:09:26 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:37.896 18:09:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:37.896 18:09:26 -- nvmf/common.sh@351 -- # [[ 0x159b == 
\0\x\1\0\1\9 ]] 00:22:37.896 18:09:26 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:37.896 18:09:26 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:37.896 18:09:26 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:37.896 18:09:26 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:37.896 18:09:26 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:37.896 18:09:26 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:37.896 18:09:26 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:37.896 18:09:26 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:37.896 18:09:26 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:22:37.896 Found net devices under 0000:84:00.0: cvl_0_0 00:22:37.896 18:09:26 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:37.896 18:09:26 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:37.896 18:09:26 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:37.896 18:09:26 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:37.896 18:09:26 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:37.896 18:09:26 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:22:37.896 Found net devices under 0000:84:00.1: cvl_0_1 00:22:37.896 18:09:26 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:37.896 18:09:26 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:22:37.896 18:09:26 -- nvmf/common.sh@403 -- # is_hw=yes 00:22:37.896 18:09:26 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:22:37.896 18:09:26 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:22:37.896 18:09:26 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:22:37.896 18:09:26 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:37.896 18:09:26 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:37.896 18:09:26 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:37.896 18:09:26 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:37.896 18:09:26 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:37.896 18:09:26 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:37.896 18:09:26 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:37.896 18:09:26 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:37.896 18:09:26 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:37.896 18:09:26 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:37.896 18:09:26 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:37.896 18:09:26 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:37.896 18:09:26 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:37.896 18:09:26 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:37.896 18:09:26 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:37.896 18:09:26 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:37.896 18:09:26 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:37.896 18:09:26 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:37.896 18:09:26 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:37.896 18:09:26 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:37.896 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:37.896 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.139 ms 00:22:37.896 00:22:37.896 --- 10.0.0.2 ping statistics --- 00:22:37.896 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:37.896 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:22:37.896 18:09:26 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:37.896 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:37.896 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms 00:22:37.896 00:22:37.896 --- 10.0.0.1 ping statistics --- 00:22:37.896 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:37.896 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:22:37.896 18:09:26 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:37.896 18:09:26 -- nvmf/common.sh@411 -- # return 0 00:22:37.896 18:09:26 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:22:37.896 18:09:26 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:37.896 18:09:26 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:22:37.896 18:09:26 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:22:37.896 18:09:26 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:37.896 18:09:26 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:22:37.896 18:09:26 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:22:37.896 18:09:26 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:22:37.896 18:09:26 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:37.896 18:09:26 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:37.896 18:09:26 -- common/autotest_common.sh@10 -- # set +x 00:22:37.896 18:09:26 -- nvmf/common.sh@470 -- # nvmfpid=3365734 00:22:37.896 18:09:26 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:37.896 18:09:26 -- nvmf/common.sh@471 -- # waitforlisten 3365734 00:22:37.896 18:09:26 -- common/autotest_common.sh@817 -- # '[' -z 3365734 ']' 00:22:37.896 18:09:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:37.896 18:09:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:37.896 18:09:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:37.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:37.896 18:09:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:37.896 18:09:26 -- common/autotest_common.sh@10 -- # set +x 00:22:37.896 [2024-04-15 18:09:26.812840] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:22:37.896 [2024-04-15 18:09:26.813018] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:38.155 EAL: No free 2048 kB hugepages reported on node 1 00:22:38.155 [2024-04-15 18:09:26.933330] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:38.155 [2024-04-15 18:09:27.031878] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:38.155 [2024-04-15 18:09:27.031947] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
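multiconnection.sh then stamps out NVMF_SUBSYS=11 near-identical subsystems, each backed by its own 64 MiB / 512 B-block malloc bdev and all listening on 10.0.0.2:4420. The loop traced below, written out as a sketch (rpc_cmd is the suite's wrapper around scripts/rpc.py):

NVMF_SUBSYS=11
MALLOC_BDEV_SIZE=64
MALLOC_BLOCK_SIZE=512
rpc_cmd nvmf_create_transport -t tcp -o -u 8192
for i in $(seq 1 $NVMF_SUBSYS); do
    rpc_cmd bdev_malloc_create $MALLOC_BDEV_SIZE $MALLOC_BLOCK_SIZE -b Malloc$i
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
done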
00:22:38.155 [2024-04-15 18:09:27.031964] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:38.155 [2024-04-15 18:09:27.031980] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:38.155 [2024-04-15 18:09:27.031992] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:38.155 [2024-04-15 18:09:27.032054] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:38.155 [2024-04-15 18:09:27.032118] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:38.155 [2024-04-15 18:09:27.032172] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:38.155 [2024-04-15 18:09:27.032175] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:38.413 18:09:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:38.413 18:09:27 -- common/autotest_common.sh@850 -- # return 0 00:22:38.413 18:09:27 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:38.413 18:09:27 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:38.413 18:09:27 -- common/autotest_common.sh@10 -- # set +x 00:22:38.413 18:09:27 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:38.413 18:09:27 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:38.413 18:09:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:38.413 18:09:27 -- common/autotest_common.sh@10 -- # set +x 00:22:38.413 [2024-04-15 18:09:27.307429] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:38.413 18:09:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:38.413 18:09:27 -- target/multiconnection.sh@21 -- # seq 1 11 00:22:38.413 18:09:27 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:38.413 18:09:27 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:38.413 18:09:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:38.413 18:09:27 -- common/autotest_common.sh@10 -- # set +x 00:22:38.413 Malloc1 00:22:38.413 18:09:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:38.413 18:09:27 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:22:38.413 18:09:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:38.413 18:09:27 -- common/autotest_common.sh@10 -- # set +x 00:22:38.413 18:09:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:38.413 18:09:27 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:38.413 18:09:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:38.413 18:09:27 -- common/autotest_common.sh@10 -- # set +x 00:22:38.413 18:09:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:38.413 18:09:27 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:38.413 18:09:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:38.413 18:09:27 -- common/autotest_common.sh@10 -- # set +x 00:22:38.413 [2024-04-15 18:09:27.364879] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:38.672 18:09:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:38.672 18:09:27 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:38.672 18:09:27 -- 
target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:22:38.672 18:09:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:38.672 18:09:27 -- common/autotest_common.sh@10 -- # set +x 00:22:38.672 Malloc2 00:22:38.672 18:09:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:38.672 18:09:27 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:22:38.672 18:09:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:38.672 18:09:27 -- common/autotest_common.sh@10 -- # set +x 00:22:38.672 18:09:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:38.672 18:09:27 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:22:38.672 18:09:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:38.672 18:09:27 -- common/autotest_common.sh@10 -- # set +x 00:22:38.672 18:09:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:38.672 18:09:27 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:22:38.672 18:09:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:38.672 18:09:27 -- common/autotest_common.sh@10 -- # set +x 00:22:38.672 18:09:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:38.672 18:09:27 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:38.672 18:09:27 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:22:38.672 18:09:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:38.672 18:09:27 -- common/autotest_common.sh@10 -- # set +x 00:22:38.672 Malloc3 00:22:38.672 18:09:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:38.672 18:09:27 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:22:38.672 18:09:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:38.672 18:09:27 -- common/autotest_common.sh@10 -- # set +x 00:22:38.672 18:09:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:38.672 18:09:27 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:22:38.672 18:09:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:38.672 18:09:27 -- common/autotest_common.sh@10 -- # set +x 00:22:38.672 18:09:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:38.672 18:09:27 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:22:38.672 18:09:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:38.672 18:09:27 -- common/autotest_common.sh@10 -- # set +x 00:22:38.672 18:09:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:38.672 18:09:27 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:38.672 18:09:27 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:22:38.672 18:09:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:38.672 18:09:27 -- common/autotest_common.sh@10 -- # set +x 00:22:38.672 Malloc4 00:22:38.672 18:09:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:38.672 18:09:27 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:22:38.672 18:09:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:38.672 18:09:27 -- common/autotest_common.sh@10 -- # set +x 00:22:38.672 18:09:27 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:38.672 18:09:27 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:22:38.672 18:09:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:38.672 18:09:27 -- common/autotest_common.sh@10 -- # set +x 00:22:38.672 18:09:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:38.672 18:09:27 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:22:38.672 18:09:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:38.672 18:09:27 -- common/autotest_common.sh@10 -- # set +x 00:22:38.672 18:09:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:38.672 18:09:27 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:38.672 18:09:27 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:22:38.672 18:09:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:38.672 18:09:27 -- common/autotest_common.sh@10 -- # set +x 00:22:38.672 Malloc5 00:22:38.672 18:09:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:38.672 18:09:27 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:22:38.672 18:09:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:38.672 18:09:27 -- common/autotest_common.sh@10 -- # set +x 00:22:38.672 18:09:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:38.672 18:09:27 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:22:38.672 18:09:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:38.672 18:09:27 -- common/autotest_common.sh@10 -- # set +x 00:22:38.672 18:09:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:38.672 18:09:27 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:22:38.672 18:09:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:38.672 18:09:27 -- common/autotest_common.sh@10 -- # set +x 00:22:38.672 18:09:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:38.672 18:09:27 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:38.672 18:09:27 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:22:38.672 18:09:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:38.672 18:09:27 -- common/autotest_common.sh@10 -- # set +x 00:22:38.672 Malloc6 00:22:38.672 18:09:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:38.672 18:09:27 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:22:38.672 18:09:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:38.672 18:09:27 -- common/autotest_common.sh@10 -- # set +x 00:22:38.672 18:09:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:38.673 18:09:27 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:22:38.673 18:09:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:38.673 18:09:27 -- common/autotest_common.sh@10 -- # set +x 00:22:38.673 18:09:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:38.673 18:09:27 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:22:38.673 18:09:27 -- common/autotest_common.sh@549 -- # xtrace_disable 
00:22:38.673 18:09:27 -- common/autotest_common.sh@10 -- # set +x 00:22:38.673 18:09:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:38.673 18:09:27 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:38.673 18:09:27 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:22:38.673 18:09:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:38.673 18:09:27 -- common/autotest_common.sh@10 -- # set +x 00:22:38.931 Malloc7 00:22:38.931 18:09:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:38.931 18:09:27 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:22:38.931 18:09:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:38.931 18:09:27 -- common/autotest_common.sh@10 -- # set +x 00:22:38.931 18:09:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:38.931 18:09:27 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:22:38.931 18:09:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:38.931 18:09:27 -- common/autotest_common.sh@10 -- # set +x 00:22:38.931 18:09:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:38.931 18:09:27 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:22:38.931 18:09:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:38.931 18:09:27 -- common/autotest_common.sh@10 -- # set +x 00:22:38.931 18:09:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:38.931 18:09:27 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:38.931 18:09:27 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:22:38.931 18:09:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:38.931 18:09:27 -- common/autotest_common.sh@10 -- # set +x 00:22:38.931 Malloc8 00:22:38.931 18:09:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:38.931 18:09:27 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:22:38.931 18:09:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:38.931 18:09:27 -- common/autotest_common.sh@10 -- # set +x 00:22:38.931 18:09:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:38.931 18:09:27 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:22:38.932 18:09:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:38.932 18:09:27 -- common/autotest_common.sh@10 -- # set +x 00:22:38.932 18:09:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:38.932 18:09:27 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:22:38.932 18:09:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:38.932 18:09:27 -- common/autotest_common.sh@10 -- # set +x 00:22:38.932 18:09:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:38.932 18:09:27 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:38.932 18:09:27 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:22:38.932 18:09:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:38.932 18:09:27 -- common/autotest_common.sh@10 -- # set +x 00:22:38.932 Malloc9 00:22:38.932 18:09:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:38.932 18:09:27 -- 
target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:22:38.932 18:09:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:38.932 18:09:27 -- common/autotest_common.sh@10 -- # set +x 00:22:38.932 18:09:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:38.932 18:09:27 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:22:38.932 18:09:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:38.932 18:09:27 -- common/autotest_common.sh@10 -- # set +x 00:22:38.932 18:09:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:38.932 18:09:27 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:22:38.932 18:09:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:38.932 18:09:27 -- common/autotest_common.sh@10 -- # set +x 00:22:38.932 18:09:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:38.932 18:09:27 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:38.932 18:09:27 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:22:38.932 18:09:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:38.932 18:09:27 -- common/autotest_common.sh@10 -- # set +x 00:22:38.932 Malloc10 00:22:38.932 18:09:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:38.932 18:09:27 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:22:38.932 18:09:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:38.932 18:09:27 -- common/autotest_common.sh@10 -- # set +x 00:22:38.932 18:09:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:38.932 18:09:27 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:22:38.932 18:09:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:38.932 18:09:27 -- common/autotest_common.sh@10 -- # set +x 00:22:38.932 18:09:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:38.932 18:09:27 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:22:38.932 18:09:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:38.932 18:09:27 -- common/autotest_common.sh@10 -- # set +x 00:22:38.932 18:09:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:38.932 18:09:27 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:38.932 18:09:27 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:22:38.932 18:09:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:38.932 18:09:27 -- common/autotest_common.sh@10 -- # set +x 00:22:38.932 Malloc11 00:22:38.932 18:09:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:38.932 18:09:27 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:22:38.932 18:09:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:38.932 18:09:27 -- common/autotest_common.sh@10 -- # set +x 00:22:38.932 18:09:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:38.932 18:09:27 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:22:38.932 18:09:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:38.932 18:09:27 -- common/autotest_common.sh@10 -- # set +x 00:22:38.932 
18:09:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:38.932 18:09:27 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:22:38.932 18:09:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:38.932 18:09:27 -- common/autotest_common.sh@10 -- # set +x 00:22:38.932 18:09:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:38.932 18:09:27 -- target/multiconnection.sh@28 -- # seq 1 11 00:22:38.932 18:09:27 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:38.932 18:09:27 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:22:39.497 18:09:28 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:22:39.497 18:09:28 -- common/autotest_common.sh@1184 -- # local i=0 00:22:39.497 18:09:28 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:22:39.497 18:09:28 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:22:39.497 18:09:28 -- common/autotest_common.sh@1191 -- # sleep 2 00:22:42.024 18:09:30 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:22:42.024 18:09:30 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:22:42.024 18:09:30 -- common/autotest_common.sh@1193 -- # grep -c SPDK1 00:22:42.024 18:09:30 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:22:42.024 18:09:30 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:22:42.024 18:09:30 -- common/autotest_common.sh@1194 -- # return 0 00:22:42.024 18:09:30 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:42.024 18:09:30 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:22:42.282 18:09:31 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:22:42.282 18:09:31 -- common/autotest_common.sh@1184 -- # local i=0 00:22:42.282 18:09:31 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:22:42.282 18:09:31 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:22:42.282 18:09:31 -- common/autotest_common.sh@1191 -- # sleep 2 00:22:44.180 18:09:33 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:22:44.180 18:09:33 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:22:44.180 18:09:33 -- common/autotest_common.sh@1193 -- # grep -c SPDK2 00:22:44.180 18:09:33 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:22:44.180 18:09:33 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:22:44.180 18:09:33 -- common/autotest_common.sh@1194 -- # return 0 00:22:44.180 18:09:33 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:44.180 18:09:33 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:22:45.112 18:09:33 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:22:45.112 18:09:33 -- common/autotest_common.sh@1184 -- # local i=0 00:22:45.112 18:09:33 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:22:45.112 
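The eleven Malloc/cnode blocks above are one loop with four RPCs per iteration. Spelled out as the multiconnection.sh trace implies (rpc_cmd stands in for scripts/rpc.py against the target's /var/tmp/spdk.sock; treat this as a sketch of the pattern, not the script verbatim):

    NVMF_SUBSYS=11
    for i in $(seq 1 $NVMF_SUBSYS); do
        # One 64 MB malloc bdev with 512-byte blocks per subsystem.
        rpc_cmd bdev_malloc_create 64 512 -b "Malloc$i"
        # -a allows any host NQN; -s sets serial number SPDK$i.
        rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
        rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
        # All eleven subsystems share the 10.0.0.2:4420 portal.
        rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
            -t tcp -a 10.0.0.2 -s 4420
    done

The nvme connect / waitforserial pairs interleaved around this point walk the same seq 1 11 on the initiator side, one connection per subsystem.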
18:09:33 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:22:45.112 18:09:33 -- common/autotest_common.sh@1191 -- # sleep 2 00:22:47.010 18:09:35 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:22:47.010 18:09:35 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:22:47.010 18:09:35 -- common/autotest_common.sh@1193 -- # grep -c SPDK3 00:22:47.010 18:09:35 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:22:47.010 18:09:35 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:22:47.010 18:09:35 -- common/autotest_common.sh@1194 -- # return 0 00:22:47.010 18:09:35 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:47.010 18:09:35 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:22:47.635 18:09:36 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:22:47.635 18:09:36 -- common/autotest_common.sh@1184 -- # local i=0 00:22:47.635 18:09:36 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:22:47.635 18:09:36 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:22:47.635 18:09:36 -- common/autotest_common.sh@1191 -- # sleep 2 00:22:50.174 18:09:38 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:22:50.174 18:09:38 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:22:50.174 18:09:38 -- common/autotest_common.sh@1193 -- # grep -c SPDK4 00:22:50.174 18:09:38 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:22:50.174 18:09:38 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:22:50.174 18:09:38 -- common/autotest_common.sh@1194 -- # return 0 00:22:50.174 18:09:38 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:50.174 18:09:38 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:22:50.431 18:09:39 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:22:50.431 18:09:39 -- common/autotest_common.sh@1184 -- # local i=0 00:22:50.431 18:09:39 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:22:50.431 18:09:39 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:22:50.431 18:09:39 -- common/autotest_common.sh@1191 -- # sleep 2 00:22:52.956 18:09:41 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:22:52.956 18:09:41 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:22:52.956 18:09:41 -- common/autotest_common.sh@1193 -- # grep -c SPDK5 00:22:52.956 18:09:41 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:22:52.956 18:09:41 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:22:52.956 18:09:41 -- common/autotest_common.sh@1194 -- # return 0 00:22:52.956 18:09:41 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:52.956 18:09:41 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:22:53.214 18:09:41 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:22:53.214 18:09:41 -- common/autotest_common.sh@1184 -- # 
local i=0 00:22:53.214 18:09:41 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:22:53.214 18:09:41 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:22:53.214 18:09:41 -- common/autotest_common.sh@1191 -- # sleep 2 00:22:55.118 18:09:43 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:22:55.118 18:09:43 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:22:55.118 18:09:43 -- common/autotest_common.sh@1193 -- # grep -c SPDK6 00:22:55.118 18:09:43 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:22:55.118 18:09:43 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:22:55.118 18:09:43 -- common/autotest_common.sh@1194 -- # return 0 00:22:55.118 18:09:43 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:55.118 18:09:43 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:22:56.051 18:09:44 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:22:56.051 18:09:44 -- common/autotest_common.sh@1184 -- # local i=0 00:22:56.051 18:09:44 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:22:56.051 18:09:44 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:22:56.051 18:09:44 -- common/autotest_common.sh@1191 -- # sleep 2 00:22:57.949 18:09:46 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:22:57.949 18:09:46 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:22:57.949 18:09:46 -- common/autotest_common.sh@1193 -- # grep -c SPDK7 00:22:57.949 18:09:46 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:22:57.949 18:09:46 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:22:57.949 18:09:46 -- common/autotest_common.sh@1194 -- # return 0 00:22:57.949 18:09:46 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:57.949 18:09:46 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:22:58.883 18:09:47 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:22:58.883 18:09:47 -- common/autotest_common.sh@1184 -- # local i=0 00:22:58.883 18:09:47 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:22:58.883 18:09:47 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:22:58.883 18:09:47 -- common/autotest_common.sh@1191 -- # sleep 2 00:23:00.780 18:09:49 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:23:00.780 18:09:49 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:23:00.780 18:09:49 -- common/autotest_common.sh@1193 -- # grep -c SPDK8 00:23:00.780 18:09:49 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:23:00.780 18:09:49 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:23:00.780 18:09:49 -- common/autotest_common.sh@1194 -- # return 0 00:23:00.780 18:09:49 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:00.780 18:09:49 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:23:01.711 
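waitforserial, traced after every connect above, is a bounded poll: wait until lsblk reports exactly one block device carrying the subsystem's serial, retrying every two seconds for at most 16 attempts. A reconstruction consistent with the traced lines (the real helper lives in autotest_common.sh; variable names follow the trace, and the ordering of the sleep relative to the check is approximated):

    waitforserial() {
        local serial=$1
        local i=0
        local nvme_device_counter=1 nvme_devices=0
        while (( i++ <= 15 )); do
            sleep 2
            # Count block devices whose SERIAL column matches, e.g. SPDK9.
            nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
            (( nvme_devices == nvme_device_counter )) && return 0
        done
        return 1
    }

Called as, e.g., waitforserial SPDK9 immediately after the corresponding nvme connect, which is exactly the cadence visible in the timestamps.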
18:09:50 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:23:01.711 18:09:50 -- common/autotest_common.sh@1184 -- # local i=0 00:23:01.711 18:09:50 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:23:01.711 18:09:50 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:23:01.711 18:09:50 -- common/autotest_common.sh@1191 -- # sleep 2 00:23:03.606 18:09:52 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:23:03.606 18:09:52 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:23:03.606 18:09:52 -- common/autotest_common.sh@1193 -- # grep -c SPDK9 00:23:03.606 18:09:52 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:23:03.606 18:09:52 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:23:03.606 18:09:52 -- common/autotest_common.sh@1194 -- # return 0 00:23:03.606 18:09:52 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:03.606 18:09:52 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:23:04.170 18:09:53 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:23:04.170 18:09:53 -- common/autotest_common.sh@1184 -- # local i=0 00:23:04.170 18:09:53 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:23:04.170 18:09:53 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:23:04.170 18:09:53 -- common/autotest_common.sh@1191 -- # sleep 2 00:23:06.694 18:09:55 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:23:06.694 18:09:55 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:23:06.694 18:09:55 -- common/autotest_common.sh@1193 -- # grep -c SPDK10 00:23:06.694 18:09:55 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:23:06.694 18:09:55 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:23:06.694 18:09:55 -- common/autotest_common.sh@1194 -- # return 0 00:23:06.694 18:09:55 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:06.694 18:09:55 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:23:07.258 18:09:56 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:23:07.258 18:09:56 -- common/autotest_common.sh@1184 -- # local i=0 00:23:07.258 18:09:56 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:23:07.258 18:09:56 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:23:07.258 18:09:56 -- common/autotest_common.sh@1191 -- # sleep 2 00:23:09.154 18:09:58 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:23:09.154 18:09:58 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:23:09.154 18:09:58 -- common/autotest_common.sh@1193 -- # grep -c SPDK11 00:23:09.154 18:09:58 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:23:09.154 18:09:58 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:23:09.154 18:09:58 -- common/autotest_common.sh@1194 -- # return 0 00:23:09.154 18:09:58 -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:23:09.412 [global] 00:23:09.412 thread=1 00:23:09.412 
invalidate=1 00:23:09.412 rw=read 00:23:09.412 time_based=1 00:23:09.412 runtime=10 00:23:09.412 ioengine=libaio 00:23:09.412 direct=1 00:23:09.412 bs=262144 00:23:09.412 iodepth=64 00:23:09.412 norandommap=1 00:23:09.412 numjobs=1 00:23:09.412 00:23:09.412 [job0] 00:23:09.412 filename=/dev/nvme0n1 00:23:09.412 [job1] 00:23:09.412 filename=/dev/nvme10n1 00:23:09.412 [job2] 00:23:09.412 filename=/dev/nvme1n1 00:23:09.412 [job3] 00:23:09.412 filename=/dev/nvme2n1 00:23:09.412 [job4] 00:23:09.412 filename=/dev/nvme3n1 00:23:09.412 [job5] 00:23:09.412 filename=/dev/nvme4n1 00:23:09.412 [job6] 00:23:09.412 filename=/dev/nvme5n1 00:23:09.412 [job7] 00:23:09.412 filename=/dev/nvme6n1 00:23:09.412 [job8] 00:23:09.412 filename=/dev/nvme7n1 00:23:09.412 [job9] 00:23:09.412 filename=/dev/nvme8n1 00:23:09.412 [job10] 00:23:09.412 filename=/dev/nvme9n1 00:23:09.412 Could not set queue depth (nvme0n1) 00:23:09.412 Could not set queue depth (nvme10n1) 00:23:09.412 Could not set queue depth (nvme1n1) 00:23:09.412 Could not set queue depth (nvme2n1) 00:23:09.412 Could not set queue depth (nvme3n1) 00:23:09.412 Could not set queue depth (nvme4n1) 00:23:09.412 Could not set queue depth (nvme5n1) 00:23:09.412 Could not set queue depth (nvme6n1) 00:23:09.412 Could not set queue depth (nvme7n1) 00:23:09.412 Could not set queue depth (nvme8n1) 00:23:09.412 Could not set queue depth (nvme9n1) 00:23:09.669 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:09.669 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:09.669 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:09.669 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:09.669 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:09.669 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:09.669 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:09.669 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:09.669 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:09.669 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:09.669 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:09.669 fio-3.35 00:23:09.669 Starting 11 threads 00:23:21.903 00:23:21.903 job0: (groupid=0, jobs=1): err= 0: pid=3369823: Mon Apr 15 18:10:08 2024 00:23:21.903 read: IOPS=720, BW=180MiB/s (189MB/s)(1822MiB/10118msec) 00:23:21.903 slat (usec): min=10, max=62579, avg=847.87, stdev=3523.13 00:23:21.903 clat (usec): min=1442, max=266758, avg=87893.46, stdev=47653.58 00:23:21.903 lat (usec): min=1460, max=266778, avg=88741.33, stdev=48125.99 00:23:21.903 clat percentiles (msec): 00:23:21.903 | 1.00th=[ 6], 5.00th=[ 17], 10.00th=[ 27], 20.00th=[ 42], 00:23:21.903 | 30.00th=[ 63], 40.00th=[ 79], 50.00th=[ 88], 60.00th=[ 99], 00:23:21.903 | 70.00th=[ 108], 80.00th=[ 118], 90.00th=[ 148], 95.00th=[ 180], 00:23:21.904 | 99.00th=[ 220], 99.50th=[ 226], 99.90th=[ 239], 99.95th=[ 241], 00:23:21.904 | 
99.99th=[ 268] 00:23:21.904 bw ( KiB/s): min=119808, max=308224, per=9.81%, avg=184875.60, stdev=49109.35, samples=20 00:23:21.904 iops : min= 468, max= 1204, avg=722.10, stdev=191.84, samples=20 00:23:21.904 lat (msec) : 2=0.07%, 4=0.49%, 10=2.26%, 20=4.28%, 50=17.18% 00:23:21.904 lat (msec) : 100=37.85%, 250=37.85%, 500=0.01% 00:23:21.904 cpu : usr=0.39%, sys=1.59%, ctx=1757, majf=0, minf=4097 00:23:21.904 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:23:21.904 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:21.904 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:21.904 issued rwts: total=7287,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:21.904 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:21.904 job1: (groupid=0, jobs=1): err= 0: pid=3369824: Mon Apr 15 18:10:08 2024 00:23:21.904 read: IOPS=671, BW=168MiB/s (176MB/s)(1688MiB/10058msec) 00:23:21.904 slat (usec): min=9, max=140882, avg=1009.28, stdev=4551.33 00:23:21.904 clat (usec): min=1897, max=337046, avg=94213.28, stdev=57247.10 00:23:21.904 lat (usec): min=1920, max=337081, avg=95222.55, stdev=57782.35 00:23:21.904 clat percentiles (msec): 00:23:21.904 | 1.00th=[ 5], 5.00th=[ 13], 10.00th=[ 26], 20.00th=[ 35], 00:23:21.904 | 30.00th=[ 52], 40.00th=[ 78], 50.00th=[ 95], 60.00th=[ 107], 00:23:21.904 | 70.00th=[ 122], 80.00th=[ 144], 90.00th=[ 176], 95.00th=[ 201], 00:23:21.904 | 99.00th=[ 232], 99.50th=[ 239], 99.90th=[ 253], 99.95th=[ 275], 00:23:21.904 | 99.99th=[ 338] 00:23:21.904 bw ( KiB/s): min=99014, max=328192, per=9.08%, avg=171194.85, stdev=60030.36, samples=20 00:23:21.904 iops : min= 386, max= 1282, avg=668.65, stdev=234.53, samples=20 00:23:21.904 lat (msec) : 2=0.01%, 4=0.50%, 10=2.99%, 20=4.07%, 50=22.17% 00:23:21.904 lat (msec) : 100=25.10%, 250=45.02%, 500=0.12% 00:23:21.904 cpu : usr=0.27%, sys=1.64%, ctx=1738, majf=0, minf=4097 00:23:21.904 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:23:21.904 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:21.904 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:21.904 issued rwts: total=6752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:21.904 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:21.904 job2: (groupid=0, jobs=1): err= 0: pid=3369825: Mon Apr 15 18:10:08 2024 00:23:21.904 read: IOPS=634, BW=159MiB/s (166MB/s)(1604MiB/10117msec) 00:23:21.904 slat (usec): min=10, max=154819, avg=1117.14, stdev=4901.49 00:23:21.904 clat (usec): min=1575, max=361206, avg=99686.40, stdev=47442.92 00:23:21.904 lat (usec): min=1599, max=361225, avg=100803.53, stdev=47990.53 00:23:21.904 clat percentiles (msec): 00:23:21.904 | 1.00th=[ 7], 5.00th=[ 23], 10.00th=[ 44], 20.00th=[ 68], 00:23:21.904 | 30.00th=[ 78], 40.00th=[ 84], 50.00th=[ 92], 60.00th=[ 102], 00:23:21.904 | 70.00th=[ 114], 80.00th=[ 132], 90.00th=[ 174], 95.00th=[ 192], 00:23:21.904 | 99.00th=[ 228], 99.50th=[ 234], 99.90th=[ 243], 99.95th=[ 271], 00:23:21.904 | 99.99th=[ 363] 00:23:21.904 bw ( KiB/s): min=92160, max=249344, per=8.63%, avg=162597.70, stdev=39158.70, samples=20 00:23:21.904 iops : min= 360, max= 974, avg=635.05, stdev=152.96, samples=20 00:23:21.904 lat (msec) : 2=0.06%, 4=0.34%, 10=1.98%, 20=2.23%, 50=6.87% 00:23:21.904 lat (msec) : 100=46.95%, 250=41.49%, 500=0.08% 00:23:21.904 cpu : usr=0.36%, sys=1.61%, ctx=1483, majf=0, minf=4097 00:23:21.904 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 
32=0.5%, >=64=99.0% 00:23:21.904 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:21.904 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:21.904 issued rwts: total=6416,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:21.904 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:21.904 job3: (groupid=0, jobs=1): err= 0: pid=3369826: Mon Apr 15 18:10:08 2024 00:23:21.904 read: IOPS=685, BW=171MiB/s (180MB/s)(1724MiB/10056msec) 00:23:21.904 slat (usec): min=9, max=137396, avg=1002.40, stdev=4384.41 00:23:21.904 clat (msec): min=2, max=274, avg=92.22, stdev=51.86 00:23:21.904 lat (msec): min=2, max=299, avg=93.22, stdev=52.36 00:23:21.904 clat percentiles (msec): 00:23:21.904 | 1.00th=[ 4], 5.00th=[ 12], 10.00th=[ 32], 20.00th=[ 53], 00:23:21.904 | 30.00th=[ 65], 40.00th=[ 72], 50.00th=[ 80], 60.00th=[ 90], 00:23:21.904 | 70.00th=[ 118], 80.00th=[ 142], 90.00th=[ 167], 95.00th=[ 192], 00:23:21.904 | 99.00th=[ 224], 99.50th=[ 228], 99.90th=[ 239], 99.95th=[ 241], 00:23:21.904 | 99.99th=[ 275] 00:23:21.904 bw ( KiB/s): min=97792, max=268800, per=9.28%, avg=174846.75, stdev=51491.87, samples=20 00:23:21.904 iops : min= 382, max= 1050, avg=682.90, stdev=201.21, samples=20 00:23:21.904 lat (msec) : 4=1.93%, 10=2.31%, 20=2.67%, 50=11.37%, 100=47.98% 00:23:21.904 lat (msec) : 250=33.73%, 500=0.01% 00:23:21.904 cpu : usr=0.35%, sys=1.71%, ctx=1640, majf=0, minf=4097 00:23:21.904 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:23:21.904 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:21.904 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:21.904 issued rwts: total=6895,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:21.904 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:21.904 job4: (groupid=0, jobs=1): err= 0: pid=3369827: Mon Apr 15 18:10:08 2024 00:23:21.904 read: IOPS=637, BW=159MiB/s (167MB/s)(1614MiB/10121msec) 00:23:21.904 slat (usec): min=9, max=75970, avg=970.24, stdev=3862.00 00:23:21.904 clat (usec): min=903, max=258148, avg=99254.63, stdev=54158.28 00:23:21.904 lat (usec): min=921, max=258168, avg=100224.87, stdev=54610.09 00:23:21.904 clat percentiles (msec): 00:23:21.904 | 1.00th=[ 4], 5.00th=[ 12], 10.00th=[ 25], 20.00th=[ 55], 00:23:21.904 | 30.00th=[ 71], 40.00th=[ 82], 50.00th=[ 94], 60.00th=[ 110], 00:23:21.904 | 70.00th=[ 126], 80.00th=[ 144], 90.00th=[ 176], 95.00th=[ 194], 00:23:21.904 | 99.00th=[ 232], 99.50th=[ 241], 99.90th=[ 249], 99.95th=[ 251], 00:23:21.904 | 99.99th=[ 259] 00:23:21.904 bw ( KiB/s): min=87040, max=276480, per=8.68%, avg=163567.40, stdev=46909.65, samples=20 00:23:21.904 iops : min= 340, max= 1080, avg=638.85, stdev=183.27, samples=20 00:23:21.904 lat (usec) : 1000=0.06% 00:23:21.904 lat (msec) : 2=0.33%, 4=1.12%, 10=2.97%, 20=4.31%, 50=9.68% 00:23:21.904 lat (msec) : 100=35.79%, 250=45.65%, 500=0.09% 00:23:21.904 cpu : usr=0.30%, sys=1.68%, ctx=1641, majf=0, minf=4097 00:23:21.904 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:23:21.904 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:21.904 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:21.904 issued rwts: total=6454,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:21.904 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:21.904 job5: (groupid=0, jobs=1): err= 0: pid=3369829: Mon Apr 15 18:10:08 2024 00:23:21.904 read: IOPS=617, BW=154MiB/s 
(162MB/s)(1553MiB/10057msec) 00:23:21.904 slat (usec): min=11, max=158047, avg=1228.48, stdev=5347.60 00:23:21.904 clat (usec): min=1769, max=305002, avg=102264.78, stdev=54144.52 00:23:21.904 lat (usec): min=1792, max=305038, avg=103493.26, stdev=54886.11 00:23:21.904 clat percentiles (msec): 00:23:21.904 | 1.00th=[ 7], 5.00th=[ 27], 10.00th=[ 32], 20.00th=[ 49], 00:23:21.904 | 30.00th=[ 69], 40.00th=[ 87], 50.00th=[ 101], 60.00th=[ 113], 00:23:21.904 | 70.00th=[ 130], 80.00th=[ 146], 90.00th=[ 178], 95.00th=[ 201], 00:23:21.904 | 99.00th=[ 236], 99.50th=[ 241], 99.90th=[ 251], 99.95th=[ 251], 00:23:21.904 | 99.99th=[ 305] 00:23:21.904 bw ( KiB/s): min=83456, max=301568, per=8.35%, avg=157398.60, stdev=60456.56, samples=20 00:23:21.904 iops : min= 326, max= 1178, avg=614.70, stdev=236.12, samples=20 00:23:21.904 lat (msec) : 2=0.03%, 4=0.35%, 10=1.51%, 20=1.63%, 50=17.26% 00:23:21.904 lat (msec) : 100=29.25%, 250=49.84%, 500=0.13% 00:23:21.904 cpu : usr=0.35%, sys=1.70%, ctx=1513, majf=0, minf=4097 00:23:21.904 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:23:21.904 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:21.904 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:21.904 issued rwts: total=6212,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:21.904 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:21.904 job6: (groupid=0, jobs=1): err= 0: pid=3369830: Mon Apr 15 18:10:08 2024 00:23:21.904 read: IOPS=569, BW=142MiB/s (149MB/s)(1446MiB/10162msec) 00:23:21.904 slat (usec): min=11, max=185749, avg=1215.97, stdev=5395.75 00:23:21.904 clat (usec): min=864, max=415518, avg=111090.30, stdev=57218.92 00:23:21.904 lat (usec): min=883, max=415544, avg=112306.28, stdev=57965.34 00:23:21.904 clat percentiles (msec): 00:23:21.904 | 1.00th=[ 5], 5.00th=[ 22], 10.00th=[ 32], 20.00th=[ 50], 00:23:21.904 | 30.00th=[ 86], 40.00th=[ 101], 50.00th=[ 112], 60.00th=[ 125], 00:23:21.904 | 70.00th=[ 144], 80.00th=[ 163], 90.00th=[ 180], 95.00th=[ 199], 00:23:21.904 | 99.00th=[ 271], 99.50th=[ 279], 99.90th=[ 279], 99.95th=[ 313], 00:23:21.904 | 99.99th=[ 418] 00:23:21.904 bw ( KiB/s): min=96256, max=258560, per=7.77%, avg=146387.05, stdev=39727.47, samples=20 00:23:21.904 iops : min= 376, max= 1010, avg=571.75, stdev=155.15, samples=20 00:23:21.904 lat (usec) : 1000=0.02% 00:23:21.904 lat (msec) : 2=0.36%, 4=0.47%, 10=1.85%, 20=2.09%, 50=15.47% 00:23:21.904 lat (msec) : 100=19.78%, 250=58.75%, 500=1.21% 00:23:21.904 cpu : usr=0.38%, sys=1.55%, ctx=1556, majf=0, minf=3721 00:23:21.904 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:23:21.904 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:21.904 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:21.904 issued rwts: total=5784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:21.904 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:21.904 job7: (groupid=0, jobs=1): err= 0: pid=3369831: Mon Apr 15 18:10:08 2024 00:23:21.904 read: IOPS=783, BW=196MiB/s (205MB/s)(1982MiB/10117msec) 00:23:21.904 slat (usec): min=10, max=115575, avg=852.01, stdev=3456.48 00:23:21.904 clat (usec): min=1222, max=244765, avg=80724.45, stdev=45285.25 00:23:21.904 lat (usec): min=1245, max=244806, avg=81576.47, stdev=45736.59 00:23:21.904 clat percentiles (msec): 00:23:21.904 | 1.00th=[ 7], 5.00th=[ 20], 10.00th=[ 29], 20.00th=[ 37], 00:23:21.904 | 30.00th=[ 49], 40.00th=[ 62], 50.00th=[ 74], 
60.00th=[ 89], 00:23:21.904 | 70.00th=[ 108], 80.00th=[ 122], 90.00th=[ 142], 95.00th=[ 161], 00:23:21.904 | 99.00th=[ 192], 99.50th=[ 218], 99.90th=[ 239], 99.95th=[ 245], 00:23:21.904 | 99.99th=[ 245] 00:23:21.904 bw ( KiB/s): min=118272, max=356864, per=10.68%, avg=201252.80, stdev=74277.34, samples=20 00:23:21.904 iops : min= 462, max= 1394, avg=786.05, stdev=290.13, samples=20 00:23:21.905 lat (msec) : 2=0.06%, 4=0.33%, 10=1.25%, 20=3.56%, 50=25.72% 00:23:21.905 lat (msec) : 100=35.33%, 250=33.75% 00:23:21.905 cpu : usr=0.43%, sys=2.02%, ctx=1822, majf=0, minf=4097 00:23:21.905 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:23:21.905 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:21.905 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:21.905 issued rwts: total=7927,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:21.905 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:21.905 job8: (groupid=0, jobs=1): err= 0: pid=3369838: Mon Apr 15 18:10:08 2024 00:23:21.905 read: IOPS=778, BW=195MiB/s (204MB/s)(1979MiB/10163msec) 00:23:21.905 slat (usec): min=9, max=100759, avg=587.01, stdev=3555.20 00:23:21.905 clat (usec): min=1285, max=263893, avg=81524.70, stdev=59049.05 00:23:21.905 lat (usec): min=1306, max=275738, avg=82111.70, stdev=59360.80 00:23:21.905 clat percentiles (msec): 00:23:21.905 | 1.00th=[ 3], 5.00th=[ 9], 10.00th=[ 17], 20.00th=[ 33], 00:23:21.905 | 30.00th=[ 42], 40.00th=[ 54], 50.00th=[ 69], 60.00th=[ 82], 00:23:21.905 | 70.00th=[ 103], 80.00th=[ 130], 90.00th=[ 176], 95.00th=[ 201], 00:23:21.905 | 99.00th=[ 249], 99.50th=[ 255], 99.90th=[ 262], 99.95th=[ 264], 00:23:21.905 | 99.99th=[ 264] 00:23:21.905 bw ( KiB/s): min=122368, max=389632, per=10.66%, avg=200901.50, stdev=61618.59, samples=20 00:23:21.905 iops : min= 478, max= 1522, avg=784.70, stdev=240.71, samples=20 00:23:21.905 lat (msec) : 2=0.78%, 4=1.29%, 10=3.83%, 20=6.91%, 50=25.23% 00:23:21.905 lat (msec) : 100=30.68%, 250=30.38%, 500=0.90% 00:23:21.905 cpu : usr=0.40%, sys=1.86%, ctx=2144, majf=0, minf=4097 00:23:21.905 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:23:21.905 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:21.905 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:21.905 issued rwts: total=7914,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:21.905 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:21.905 job9: (groupid=0, jobs=1): err= 0: pid=3369839: Mon Apr 15 18:10:08 2024 00:23:21.905 read: IOPS=553, BW=138MiB/s (145MB/s)(1387MiB/10016msec) 00:23:21.905 slat (usec): min=10, max=91457, avg=1311.57, stdev=4858.15 00:23:21.905 clat (usec): min=1020, max=292464, avg=114135.16, stdev=52888.50 00:23:21.905 lat (usec): min=1041, max=292488, avg=115446.73, stdev=53775.74 00:23:21.905 clat percentiles (msec): 00:23:21.905 | 1.00th=[ 3], 5.00th=[ 16], 10.00th=[ 34], 20.00th=[ 78], 00:23:21.905 | 30.00th=[ 89], 40.00th=[ 104], 50.00th=[ 117], 60.00th=[ 128], 00:23:21.905 | 70.00th=[ 144], 80.00th=[ 165], 90.00th=[ 180], 95.00th=[ 192], 00:23:21.905 | 99.00th=[ 228], 99.50th=[ 232], 99.90th=[ 253], 99.95th=[ 292], 00:23:21.905 | 99.99th=[ 292] 00:23:21.905 bw ( KiB/s): min=81920, max=265216, per=7.45%, avg=140345.75, stdev=47924.32, samples=20 00:23:21.905 iops : min= 320, max= 1036, avg=548.15, stdev=187.17, samples=20 00:23:21.905 lat (msec) : 2=0.63%, 4=1.60%, 10=1.66%, 20=2.27%, 50=8.00% 00:23:21.905 lat 
(msec) : 100=23.76%, 250=61.96%, 500=0.11% 00:23:21.905 cpu : usr=0.36%, sys=1.54%, ctx=1531, majf=0, minf=4097 00:23:21.905 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:23:21.905 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:21.905 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:21.905 issued rwts: total=5547,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:21.905 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:21.905 job10: (groupid=0, jobs=1): err= 0: pid=3369840: Mon Apr 15 18:10:08 2024 00:23:21.905 read: IOPS=755, BW=189MiB/s (198MB/s)(1912MiB/10124msec) 00:23:21.905 slat (usec): min=10, max=174149, avg=847.48, stdev=4299.89 00:23:21.905 clat (usec): min=1657, max=251895, avg=83799.92, stdev=56420.11 00:23:21.905 lat (usec): min=1681, max=276393, avg=84647.40, stdev=56919.04 00:23:21.905 clat percentiles (msec): 00:23:21.905 | 1.00th=[ 7], 5.00th=[ 12], 10.00th=[ 26], 20.00th=[ 34], 00:23:21.905 | 30.00th=[ 40], 40.00th=[ 56], 50.00th=[ 69], 60.00th=[ 86], 00:23:21.905 | 70.00th=[ 112], 80.00th=[ 140], 90.00th=[ 171], 95.00th=[ 190], 00:23:21.905 | 99.00th=[ 218], 99.50th=[ 228], 99.90th=[ 251], 99.95th=[ 251], 00:23:21.905 | 99.99th=[ 253] 00:23:21.905 bw ( KiB/s): min=93696, max=414720, per=10.29%, avg=194056.35, stdev=83612.63, samples=20 00:23:21.905 iops : min= 366, max= 1620, avg=757.90, stdev=326.63, samples=20 00:23:21.905 lat (msec) : 2=0.04%, 4=0.07%, 10=4.11%, 20=3.91%, 50=27.91% 00:23:21.905 lat (msec) : 100=31.49%, 250=32.30%, 500=0.17% 00:23:21.905 cpu : usr=0.30%, sys=1.87%, ctx=1803, majf=0, minf=4097 00:23:21.905 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:23:21.905 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:21.905 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:21.905 issued rwts: total=7646,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:21.905 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:21.905 00:23:21.905 Run status group 0 (all jobs): 00:23:21.905 READ: bw=1841MiB/s (1930MB/s), 138MiB/s-196MiB/s (145MB/s-205MB/s), io=18.3GiB (19.6GB), run=10016-10163msec 00:23:21.905 00:23:21.905 Disk stats (read/write): 00:23:21.905 nvme0n1: ios=14340/0, merge=0/0, ticks=1232476/0, in_queue=1232476, util=96.64% 00:23:21.905 nvme10n1: ios=13234/0, merge=0/0, ticks=1225191/0, in_queue=1225191, util=96.91% 00:23:21.905 nvme1n1: ios=12492/0, merge=0/0, ticks=1232703/0, in_queue=1232703, util=97.23% 00:23:21.905 nvme2n1: ios=13550/0, merge=0/0, ticks=1232210/0, in_queue=1232210, util=97.45% 00:23:21.905 nvme3n1: ios=12666/0, merge=0/0, ticks=1230131/0, in_queue=1230131, util=97.54% 00:23:21.905 nvme4n1: ios=12177/0, merge=0/0, ticks=1224852/0, in_queue=1224852, util=97.95% 00:23:21.905 nvme5n1: ios=11566/0, merge=0/0, ticks=1260208/0, in_queue=1260208, util=98.22% 00:23:21.905 nvme6n1: ios=15629/0, merge=0/0, ticks=1232768/0, in_queue=1232768, util=98.31% 00:23:21.905 nvme7n1: ios=15827/0, merge=0/0, ticks=1272049/0, in_queue=1272049, util=98.83% 00:23:21.905 nvme8n1: ios=10648/0, merge=0/0, ticks=1225059/0, in_queue=1225059, util=99.04% 00:23:21.905 nvme9n1: ios=15037/0, merge=0/0, ticks=1230691/0, in_queue=1230691, util=99.19% 00:23:21.905 18:10:09 -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:23:21.905 [global] 00:23:21.905 thread=1 00:23:21.905 invalidate=1 
00:23:21.905 rw=randwrite 00:23:21.905 time_based=1 00:23:21.905 runtime=10 00:23:21.905 ioengine=libaio 00:23:21.905 direct=1 00:23:21.905 bs=262144 00:23:21.905 iodepth=64 00:23:21.905 norandommap=1 00:23:21.905 numjobs=1 00:23:21.905 00:23:21.905 [job0] 00:23:21.905 filename=/dev/nvme0n1 00:23:21.905 [job1] 00:23:21.905 filename=/dev/nvme10n1 00:23:21.905 [job2] 00:23:21.905 filename=/dev/nvme1n1 00:23:21.905 [job3] 00:23:21.905 filename=/dev/nvme2n1 00:23:21.905 [job4] 00:23:21.905 filename=/dev/nvme3n1 00:23:21.905 [job5] 00:23:21.905 filename=/dev/nvme4n1 00:23:21.905 [job6] 00:23:21.905 filename=/dev/nvme5n1 00:23:21.905 [job7] 00:23:21.905 filename=/dev/nvme6n1 00:23:21.905 [job8] 00:23:21.905 filename=/dev/nvme7n1 00:23:21.905 [job9] 00:23:21.905 filename=/dev/nvme8n1 00:23:21.905 [job10] 00:23:21.905 filename=/dev/nvme9n1 00:23:21.905 Could not set queue depth (nvme0n1) 00:23:21.905 Could not set queue depth (nvme10n1) 00:23:21.905 Could not set queue depth (nvme1n1) 00:23:21.905 Could not set queue depth (nvme2n1) 00:23:21.905 Could not set queue depth (nvme3n1) 00:23:21.905 Could not set queue depth (nvme4n1) 00:23:21.905 Could not set queue depth (nvme5n1) 00:23:21.905 Could not set queue depth (nvme6n1) 00:23:21.905 Could not set queue depth (nvme7n1) 00:23:21.905 Could not set queue depth (nvme8n1) 00:23:21.905 Could not set queue depth (nvme9n1) 00:23:21.905 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:21.905 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:21.905 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:21.905 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:21.905 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:21.905 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:21.905 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:21.905 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:21.905 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:21.905 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:21.905 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:21.905 fio-3.35 00:23:21.905 Starting 11 threads 00:23:31.492 00:23:31.492 job0: (groupid=0, jobs=1): err= 0: pid=3371003: Mon Apr 15 18:10:20 2024 00:23:31.492 write: IOPS=474, BW=119MiB/s (124MB/s)(1224MiB/10320msec); 0 zone resets 00:23:31.492 slat (usec): min=22, max=143798, avg=1158.76, stdev=4775.31 00:23:31.492 clat (usec): min=1922, max=684557, avg=133651.26, stdev=95177.84 00:23:31.492 lat (usec): min=1989, max=684591, avg=134810.02, stdev=96160.63 00:23:31.492 clat percentiles (msec): 00:23:31.492 | 1.00th=[ 7], 5.00th=[ 13], 10.00th=[ 24], 20.00th=[ 50], 00:23:31.492 | 30.00th=[ 74], 40.00th=[ 95], 50.00th=[ 118], 60.00th=[ 140], 00:23:31.492 | 70.00th=[ 178], 80.00th=[ 207], 90.00th=[ 259], 95.00th=[ 313], 00:23:31.492 | 99.00th=[ 
388], 99.50th=[ 493], 99.90th=[ 684], 99.95th=[ 684], 00:23:31.492 | 99.99th=[ 684] 00:23:31.492 bw ( KiB/s): min=44032, max=254978, per=8.52%, avg=123632.50, stdev=51821.94, samples=20 00:23:31.492 iops : min= 172, max= 996, avg=482.90, stdev=202.40, samples=20 00:23:31.492 lat (msec) : 2=0.04%, 4=0.22%, 10=3.29%, 20=4.97%, 50=11.89% 00:23:31.492 lat (msec) : 100=21.31%, 250=47.38%, 500=10.44%, 750=0.45% 00:23:31.492 cpu : usr=1.83%, sys=1.28%, ctx=3310, majf=0, minf=1 00:23:31.492 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:23:31.492 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:31.492 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:31.492 issued rwts: total=0,4894,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:31.492 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:31.492 job1: (groupid=0, jobs=1): err= 0: pid=3371011: Mon Apr 15 18:10:20 2024 00:23:31.492 write: IOPS=450, BW=113MiB/s (118MB/s)(1162MiB/10311msec); 0 zone resets 00:23:31.492 slat (usec): min=17, max=116414, avg=1195.60, stdev=4924.53 00:23:31.492 clat (usec): min=1446, max=600151, avg=140665.45, stdev=100299.45 00:23:31.492 lat (usec): min=1492, max=600193, avg=141861.05, stdev=101539.72 00:23:31.492 clat percentiles (msec): 00:23:31.492 | 1.00th=[ 6], 5.00th=[ 14], 10.00th=[ 24], 20.00th=[ 45], 00:23:31.492 | 30.00th=[ 72], 40.00th=[ 94], 50.00th=[ 122], 60.00th=[ 161], 00:23:31.492 | 70.00th=[ 194], 80.00th=[ 230], 90.00th=[ 279], 95.00th=[ 300], 00:23:31.492 | 99.00th=[ 409], 99.50th=[ 502], 99.90th=[ 584], 99.95th=[ 584], 00:23:31.492 | 99.99th=[ 600] 00:23:31.492 bw ( KiB/s): min=40960, max=197632, per=8.09%, avg=117369.25, stdev=41638.69, samples=20 00:23:31.492 iops : min= 160, max= 772, avg=458.45, stdev=162.68, samples=20 00:23:31.492 lat (msec) : 2=0.06%, 4=0.37%, 10=2.90%, 20=4.99%, 50=13.66% 00:23:31.492 lat (msec) : 100=21.45%, 250=40.20%, 500=15.90%, 750=0.47% 00:23:31.492 cpu : usr=1.13%, sys=1.48%, ctx=3380, majf=0, minf=1 00:23:31.492 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:23:31.492 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:31.492 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:31.492 issued rwts: total=0,4649,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:31.492 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:31.492 job2: (groupid=0, jobs=1): err= 0: pid=3371016: Mon Apr 15 18:10:20 2024 00:23:31.492 write: IOPS=565, BW=141MiB/s (148MB/s)(1457MiB/10302msec); 0 zone resets 00:23:31.492 slat (usec): min=27, max=49033, avg=1225.45, stdev=3294.87 00:23:31.492 clat (usec): min=1304, max=589144, avg=111819.02, stdev=76189.80 00:23:31.492 lat (usec): min=1356, max=589190, avg=113044.46, stdev=77007.34 00:23:31.492 clat percentiles (msec): 00:23:31.492 | 1.00th=[ 7], 5.00th=[ 20], 10.00th=[ 31], 20.00th=[ 49], 00:23:31.492 | 30.00th=[ 67], 40.00th=[ 84], 50.00th=[ 96], 60.00th=[ 118], 00:23:31.492 | 70.00th=[ 133], 80.00th=[ 165], 90.00th=[ 203], 95.00th=[ 257], 00:23:31.492 | 99.00th=[ 326], 99.50th=[ 468], 99.90th=[ 575], 99.95th=[ 575], 00:23:31.492 | 99.99th=[ 592] 00:23:31.492 bw ( KiB/s): min=61440, max=280576, per=10.16%, avg=147497.15, stdev=57388.40, samples=20 00:23:31.492 iops : min= 240, max= 1096, avg=576.10, stdev=224.14, samples=20 00:23:31.492 lat (msec) : 2=0.14%, 4=0.34%, 10=1.65%, 20=2.94%, 50=16.89% 00:23:31.492 lat (msec) : 100=29.97%, 250=42.67%, 500=5.03%, 750=0.38% 
00:23:31.492 cpu : usr=2.18%, sys=1.44%, ctx=3151, majf=0, minf=1 00:23:31.492 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:23:31.492 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:31.492 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:31.492 issued rwts: total=0,5826,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:31.492 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:31.492 job3: (groupid=0, jobs=1): err= 0: pid=3371017: Mon Apr 15 18:10:20 2024 00:23:31.492 write: IOPS=510, BW=128MiB/s (134MB/s)(1313MiB/10287msec); 0 zone resets 00:23:31.492 slat (usec): min=22, max=95724, avg=1146.53, stdev=4115.45 00:23:31.492 clat (usec): min=1373, max=570412, avg=124064.96, stdev=94849.21 00:23:31.492 lat (usec): min=1408, max=570463, avg=125211.49, stdev=95774.89 00:23:31.492 clat percentiles (msec): 00:23:31.492 | 1.00th=[ 5], 5.00th=[ 9], 10.00th=[ 16], 20.00th=[ 38], 00:23:31.492 | 30.00th=[ 55], 40.00th=[ 80], 50.00th=[ 99], 60.00th=[ 140], 00:23:31.492 | 70.00th=[ 174], 80.00th=[ 209], 90.00th=[ 266], 95.00th=[ 292], 00:23:31.492 | 99.00th=[ 355], 99.50th=[ 456], 99.90th=[ 558], 99.95th=[ 558], 00:23:31.492 | 99.99th=[ 567] 00:23:31.492 bw ( KiB/s): min=59392, max=265728, per=9.15%, avg=132806.20, stdev=56696.80, samples=20 00:23:31.492 iops : min= 232, max= 1038, avg=518.70, stdev=221.45, samples=20 00:23:31.492 lat (msec) : 2=0.19%, 4=0.76%, 10=4.82%, 20=7.27%, 50=15.14% 00:23:31.492 lat (msec) : 100=22.66%, 250=36.90%, 500=12.00%, 750=0.27% 00:23:31.492 cpu : usr=1.73%, sys=1.48%, ctx=3461, majf=0, minf=1 00:23:31.492 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:23:31.492 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:31.492 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:31.492 issued rwts: total=0,5252,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:31.492 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:31.492 job4: (groupid=0, jobs=1): err= 0: pid=3371018: Mon Apr 15 18:10:20 2024 00:23:31.492 write: IOPS=606, BW=152MiB/s (159MB/s)(1523MiB/10043msec); 0 zone resets 00:23:31.492 slat (usec): min=19, max=68876, avg=1333.22, stdev=3271.04 00:23:31.492 clat (usec): min=1303, max=319966, avg=104123.04, stdev=61671.93 00:23:31.492 lat (usec): min=1353, max=334677, avg=105456.26, stdev=62412.07 00:23:31.492 clat percentiles (msec): 00:23:31.492 | 1.00th=[ 7], 5.00th=[ 18], 10.00th=[ 32], 20.00th=[ 49], 00:23:31.492 | 30.00th=[ 65], 40.00th=[ 83], 50.00th=[ 101], 60.00th=[ 116], 00:23:31.492 | 70.00th=[ 127], 80.00th=[ 146], 90.00th=[ 182], 95.00th=[ 224], 00:23:31.492 | 99.00th=[ 292], 99.50th=[ 300], 99.90th=[ 309], 99.95th=[ 313], 00:23:31.492 | 99.99th=[ 321] 00:23:31.492 bw ( KiB/s): min=62976, max=325120, per=10.64%, avg=154337.85, stdev=72321.45, samples=20 00:23:31.492 iops : min= 246, max= 1270, avg=602.80, stdev=282.49, samples=20 00:23:31.492 lat (msec) : 2=0.08%, 4=0.33%, 10=1.54%, 20=3.96%, 50=15.50% 00:23:31.492 lat (msec) : 100=28.45%, 250=46.24%, 500=3.91% 00:23:31.492 cpu : usr=2.37%, sys=1.50%, ctx=2733, majf=0, minf=1 00:23:31.492 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:23:31.492 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:31.492 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:31.492 issued rwts: total=0,6092,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:31.492 latency : target=0, 
window=0, percentile=100.00%, depth=64 00:23:31.492 job5: (groupid=0, jobs=1): err= 0: pid=3371019: Mon Apr 15 18:10:20 2024 00:23:31.492 write: IOPS=537, BW=134MiB/s (141MB/s)(1387MiB/10310msec); 0 zone resets 00:23:31.492 slat (usec): min=22, max=112517, avg=880.89, stdev=4404.48 00:23:31.492 clat (usec): min=1410, max=672860, avg=117992.25, stdev=92950.16 00:23:31.492 lat (usec): min=1449, max=672899, avg=118873.14, stdev=93852.08 00:23:31.492 clat percentiles (msec): 00:23:31.492 | 1.00th=[ 6], 5.00th=[ 12], 10.00th=[ 20], 20.00th=[ 37], 00:23:31.492 | 30.00th=[ 62], 40.00th=[ 80], 50.00th=[ 101], 60.00th=[ 121], 00:23:31.492 | 70.00th=[ 144], 80.00th=[ 184], 90.00th=[ 251], 95.00th=[ 300], 00:23:31.492 | 99.00th=[ 447], 99.50th=[ 567], 99.90th=[ 659], 99.95th=[ 659], 00:23:31.492 | 99.99th=[ 676] 00:23:31.492 bw ( KiB/s): min=34816, max=226304, per=9.67%, avg=140322.80, stdev=51218.17, samples=20 00:23:31.492 iops : min= 136, max= 884, avg=548.10, stdev=200.04, samples=20 00:23:31.492 lat (msec) : 2=0.07%, 4=0.32%, 10=3.35%, 20=6.83%, 50=14.50% 00:23:31.492 lat (msec) : 100=24.70%, 250=40.26%, 500=9.20%, 750=0.76% 00:23:31.492 cpu : usr=1.94%, sys=1.57%, ctx=4138, majf=0, minf=1 00:23:31.492 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:23:31.492 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:31.492 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:31.492 issued rwts: total=0,5546,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:31.492 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:31.492 job6: (groupid=0, jobs=1): err= 0: pid=3371020: Mon Apr 15 18:10:20 2024 00:23:31.492 write: IOPS=506, BW=127MiB/s (133MB/s)(1304MiB/10306msec); 0 zone resets 00:23:31.492 slat (usec): min=20, max=68173, avg=1127.97, stdev=3671.84 00:23:31.492 clat (usec): min=1169, max=446082, avg=125223.57, stdev=79852.14 00:23:31.492 lat (usec): min=1209, max=446132, avg=126351.54, stdev=80546.50 00:23:31.492 clat percentiles (msec): 00:23:31.492 | 1.00th=[ 4], 5.00th=[ 14], 10.00th=[ 31], 20.00th=[ 58], 00:23:31.492 | 30.00th=[ 81], 40.00th=[ 99], 50.00th=[ 117], 60.00th=[ 132], 00:23:31.492 | 70.00th=[ 155], 80.00th=[ 180], 90.00th=[ 228], 95.00th=[ 300], 00:23:31.493 | 99.00th=[ 372], 99.50th=[ 418], 99.90th=[ 443], 99.95th=[ 443], 00:23:31.493 | 99.99th=[ 447] 00:23:31.493 bw ( KiB/s): min=57344, max=242688, per=9.09%, avg=131890.15, stdev=50693.71, samples=20 00:23:31.493 iops : min= 224, max= 948, avg=515.15, stdev=198.01, samples=20 00:23:31.493 lat (msec) : 2=0.19%, 4=1.11%, 10=2.70%, 20=2.38%, 50=10.51% 00:23:31.493 lat (msec) : 100=23.89%, 250=50.67%, 500=8.55% 00:23:31.493 cpu : usr=1.90%, sys=1.40%, ctx=3265, majf=0, minf=1 00:23:31.493 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:23:31.493 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:31.493 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:31.493 issued rwts: total=0,5216,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:31.493 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:31.493 job7: (groupid=0, jobs=1): err= 0: pid=3371021: Mon Apr 15 18:10:20 2024 00:23:31.493 write: IOPS=519, BW=130MiB/s (136MB/s)(1337MiB/10304msec); 0 zone resets 00:23:31.493 slat (usec): min=20, max=161088, avg=1272.72, stdev=4793.65 00:23:31.493 clat (usec): min=1134, max=587698, avg=121924.79, stdev=87862.54 00:23:31.493 lat (usec): min=1195, max=587748, avg=123197.51, 
stdev=88845.42 00:23:31.493 clat percentiles (msec): 00:23:31.493 | 1.00th=[ 5], 5.00th=[ 10], 10.00th=[ 19], 20.00th=[ 40], 00:23:31.493 | 30.00th=[ 56], 40.00th=[ 80], 50.00th=[ 113], 60.00th=[ 142], 00:23:31.493 | 70.00th=[ 165], 80.00th=[ 201], 90.00th=[ 241], 95.00th=[ 266], 00:23:31.493 | 99.00th=[ 368], 99.50th=[ 414], 99.90th=[ 558], 99.95th=[ 575], 00:23:31.493 | 99.99th=[ 592] 00:23:31.493 bw ( KiB/s): min=57344, max=345088, per=9.32%, avg=135249.60, stdev=66768.36, samples=20 00:23:31.493 iops : min= 224, max= 1348, avg=528.25, stdev=260.81, samples=20 00:23:31.493 lat (msec) : 2=0.13%, 4=0.50%, 10=4.96%, 20=5.27%, 50=15.52% 00:23:31.493 lat (msec) : 100=19.61%, 250=46.24%, 500=7.37%, 750=0.39% 00:23:31.493 cpu : usr=2.02%, sys=1.26%, ctx=3308, majf=0, minf=1 00:23:31.493 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:23:31.493 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:31.493 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:31.493 issued rwts: total=0,5348,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:31.493 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:31.493 job8: (groupid=0, jobs=1): err= 0: pid=3371022: Mon Apr 15 18:10:20 2024 00:23:31.493 write: IOPS=442, BW=111MiB/s (116MB/s)(1139MiB/10302msec); 0 zone resets 00:23:31.493 slat (usec): min=29, max=114531, avg=1543.76, stdev=4706.90 00:23:31.493 clat (usec): min=1510, max=588524, avg=143091.14, stdev=93104.41 00:23:31.493 lat (usec): min=1574, max=588564, avg=144634.90, stdev=94362.80 00:23:31.493 clat percentiles (msec): 00:23:31.493 | 1.00th=[ 7], 5.00th=[ 16], 10.00th=[ 30], 20.00th=[ 52], 00:23:31.493 | 30.00th=[ 78], 40.00th=[ 109], 50.00th=[ 136], 60.00th=[ 167], 00:23:31.493 | 70.00th=[ 197], 80.00th=[ 218], 90.00th=[ 264], 95.00th=[ 296], 00:23:31.493 | 99.00th=[ 405], 99.50th=[ 485], 99.90th=[ 575], 99.95th=[ 575], 00:23:31.493 | 99.99th=[ 592] 00:23:31.493 bw ( KiB/s): min=53248, max=236032, per=7.92%, avg=114958.15, stdev=53978.71, samples=20 00:23:31.493 iops : min= 208, max= 922, avg=449.00, stdev=210.83, samples=20 00:23:31.493 lat (msec) : 2=0.02%, 4=0.29%, 10=2.26%, 20=3.89%, 50=12.78% 00:23:31.493 lat (msec) : 100=18.33%, 250=51.26%, 500=10.69%, 750=0.48% 00:23:31.493 cpu : usr=1.86%, sys=1.14%, ctx=2798, majf=0, minf=1 00:23:31.493 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:23:31.493 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:31.493 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:31.493 issued rwts: total=0,4555,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:31.493 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:31.493 job9: (groupid=0, jobs=1): err= 0: pid=3371028: Mon Apr 15 18:10:20 2024 00:23:31.493 write: IOPS=501, BW=125MiB/s (131MB/s)(1294MiB/10318msec); 0 zone resets 00:23:31.493 slat (usec): min=24, max=121109, avg=1187.24, stdev=4826.49 00:23:31.493 clat (usec): min=1235, max=653845, avg=126326.62, stdev=96771.39 00:23:31.493 lat (usec): min=1357, max=653920, avg=127513.87, stdev=97908.66 00:23:31.493 clat percentiles (msec): 00:23:31.493 | 1.00th=[ 5], 5.00th=[ 14], 10.00th=[ 20], 20.00th=[ 37], 00:23:31.493 | 30.00th=[ 50], 40.00th=[ 82], 50.00th=[ 110], 60.00th=[ 136], 00:23:31.493 | 70.00th=[ 171], 80.00th=[ 215], 90.00th=[ 259], 95.00th=[ 296], 00:23:31.493 | 99.00th=[ 380], 99.50th=[ 485], 99.90th=[ 625], 99.95th=[ 642], 00:23:31.493 | 99.99th=[ 651] 00:23:31.493 bw ( KiB/s): 
min=59392, max=240640, per=9.01%, avg=130798.90, stdev=58945.05, samples=20 00:23:31.493 iops : min= 232, max= 940, avg=510.90, stdev=230.28, samples=20 00:23:31.493 lat (msec) : 2=0.10%, 4=0.77%, 10=2.28%, 20=6.92%, 50=20.22% 00:23:31.493 lat (msec) : 100=15.62%, 250=42.66%, 500=10.96%, 750=0.48% 00:23:31.493 cpu : usr=1.83%, sys=1.34%, ctx=3640, majf=0, minf=1 00:23:31.493 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:23:31.493 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:31.493 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:31.493 issued rwts: total=0,5174,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:31.493 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:31.493 job10: (groupid=0, jobs=1): err= 0: pid=3371030: Mon Apr 15 18:10:20 2024 00:23:31.493 write: IOPS=577, BW=144MiB/s (151MB/s)(1486MiB/10291msec); 0 zone resets 00:23:31.493 slat (usec): min=24, max=115169, avg=794.71, stdev=3760.20 00:23:31.493 clat (usec): min=1234, max=605757, avg=109861.57, stdev=89297.93 00:23:31.493 lat (usec): min=1279, max=605795, avg=110656.27, stdev=90220.41 00:23:31.493 clat percentiles (msec): 00:23:31.493 | 1.00th=[ 5], 5.00th=[ 11], 10.00th=[ 17], 20.00th=[ 29], 00:23:31.493 | 30.00th=[ 44], 40.00th=[ 58], 50.00th=[ 82], 60.00th=[ 127], 00:23:31.493 | 70.00th=[ 153], 80.00th=[ 180], 90.00th=[ 243], 95.00th=[ 279], 00:23:31.493 | 99.00th=[ 342], 99.50th=[ 451], 99.90th=[ 558], 99.95th=[ 558], 00:23:31.493 | 99.99th=[ 609] 00:23:31.493 bw ( KiB/s): min=61952, max=357888, per=10.37%, avg=150539.15, stdev=77096.59, samples=20 00:23:31.493 iops : min= 242, max= 1398, avg=588.00, stdev=301.17, samples=20 00:23:31.493 lat (msec) : 2=0.12%, 4=0.56%, 10=3.63%, 20=8.60%, 50=23.44% 00:23:31.493 lat (msec) : 100=17.50%, 250=37.01%, 500=8.85%, 750=0.30% 00:23:31.493 cpu : usr=1.90%, sys=1.73%, ctx=4570, majf=0, minf=1 00:23:31.493 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:23:31.493 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:31.493 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:31.493 issued rwts: total=0,5944,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:31.493 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:31.493 00:23:31.493 Run status group 0 (all jobs): 00:23:31.493 WRITE: bw=1417MiB/s (1486MB/s), 111MiB/s-152MiB/s (116MB/s-159MB/s), io=14.3GiB (15.3GB), run=10043-10320msec 00:23:31.493 00:23:31.493 Disk stats (read/write): 00:23:31.493 nvme0n1: ios=49/9718, merge=0/0, ticks=2434/1228227, in_queue=1230661, util=99.97% 00:23:31.493 nvme10n1: ios=46/9246, merge=0/0, ticks=52/1248209, in_queue=1248261, util=95.51% 00:23:31.493 nvme1n1: ios=44/11602, merge=0/0, ticks=1233/1239701, in_queue=1240934, util=100.00% 00:23:31.493 nvme2n1: ios=42/10462, merge=0/0, ticks=928/1244496, in_queue=1245424, util=99.82% 00:23:31.493 nvme3n1: ios=43/11864, merge=0/0, ticks=129/1217118, in_queue=1217247, util=96.96% 00:23:31.493 nvme4n1: ios=0/11038, merge=0/0, ticks=0/1247643, in_queue=1247643, util=96.86% 00:23:31.493 nvme5n1: ios=0/10381, merge=0/0, ticks=0/1246607, in_queue=1246607, util=97.20% 00:23:31.493 nvme6n1: ios=43/10640, merge=0/0, ticks=2089/1219768, in_queue=1221857, util=99.85% 00:23:31.493 nvme7n1: ios=0/9057, merge=0/0, ticks=0/1239864, in_queue=1239864, util=98.46% 00:23:31.493 nvme8n1: ios=0/10286, merge=0/0, ticks=0/1242477, in_queue=1242477, util=98.87% 00:23:31.493 nvme9n1: ios=0/11844, 
merge=0/0, ticks=0/1251107, in_queue=1251107, util=99.10% 00:23:31.493 18:10:20 -- target/multiconnection.sh@36 -- # sync 00:23:31.493 18:10:20 -- target/multiconnection.sh@37 -- # seq 1 11 00:23:31.493 18:10:20 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:31.493 18:10:20 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:23:31.751 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:31.751 18:10:20 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:23:31.751 18:10:20 -- common/autotest_common.sh@1205 -- # local i=0 00:23:31.751 18:10:20 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:23:31.751 18:10:20 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK1 00:23:31.751 18:10:20 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:23:31.751 18:10:20 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK1 00:23:32.008 18:10:20 -- common/autotest_common.sh@1217 -- # return 0 00:23:32.008 18:10:20 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:32.008 18:10:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:32.008 18:10:20 -- common/autotest_common.sh@10 -- # set +x 00:23:32.008 18:10:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:32.008 18:10:20 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:32.008 18:10:20 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:23:32.265 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:23:32.265 18:10:21 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:23:32.265 18:10:21 -- common/autotest_common.sh@1205 -- # local i=0 00:23:32.265 18:10:21 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:23:32.265 18:10:21 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK2 00:23:32.265 18:10:21 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:23:32.265 18:10:21 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK2 00:23:32.265 18:10:21 -- common/autotest_common.sh@1217 -- # return 0 00:23:32.265 18:10:21 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:32.265 18:10:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:32.265 18:10:21 -- common/autotest_common.sh@10 -- # set +x 00:23:32.265 18:10:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:32.265 18:10:21 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:32.265 18:10:21 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:23:32.523 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:23:32.523 18:10:21 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:23:32.523 18:10:21 -- common/autotest_common.sh@1205 -- # local i=0 00:23:32.523 18:10:21 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:23:32.523 18:10:21 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK3 00:23:32.523 18:10:21 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:23:32.523 18:10:21 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK3 00:23:32.523 18:10:21 -- common/autotest_common.sh@1217 -- # return 0 00:23:32.523 18:10:21 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:23:32.523 18:10:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:32.523 18:10:21 -- 
common/autotest_common.sh@10 -- # set +x 00:23:32.523 18:10:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:32.523 18:10:21 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:32.523 18:10:21 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:23:32.780 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:23:32.780 18:10:21 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:23:32.780 18:10:21 -- common/autotest_common.sh@1205 -- # local i=0 00:23:32.780 18:10:21 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:23:32.780 18:10:21 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK4 00:23:32.780 18:10:21 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:23:32.780 18:10:21 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK4 00:23:32.780 18:10:21 -- common/autotest_common.sh@1217 -- # return 0 00:23:32.780 18:10:21 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:23:32.780 18:10:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:32.780 18:10:21 -- common/autotest_common.sh@10 -- # set +x 00:23:33.037 18:10:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:33.038 18:10:21 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:33.038 18:10:21 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:23:33.295 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:23:33.295 18:10:22 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:23:33.295 18:10:22 -- common/autotest_common.sh@1205 -- # local i=0 00:23:33.295 18:10:22 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:23:33.295 18:10:22 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK5 00:23:33.295 18:10:22 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:23:33.295 18:10:22 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK5 00:23:33.295 18:10:22 -- common/autotest_common.sh@1217 -- # return 0 00:23:33.295 18:10:22 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:23:33.295 18:10:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:33.296 18:10:22 -- common/autotest_common.sh@10 -- # set +x 00:23:33.296 18:10:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:33.296 18:10:22 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:33.296 18:10:22 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:23:33.296 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:23:33.296 18:10:22 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:23:33.296 18:10:22 -- common/autotest_common.sh@1205 -- # local i=0 00:23:33.296 18:10:22 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:23:33.296 18:10:22 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK6 00:23:33.296 18:10:22 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:23:33.296 18:10:22 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK6 00:23:33.553 18:10:22 -- common/autotest_common.sh@1217 -- # return 0 00:23:33.553 18:10:22 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:23:33.553 18:10:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:33.553 18:10:22 -- common/autotest_common.sh@10 -- # set +x 00:23:33.553 18:10:22 -- common/autotest_common.sh@577 -- 
# [[ 0 == 0 ]] 00:23:33.553 18:10:22 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:33.553 18:10:22 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:23:33.553 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:23:33.553 18:10:22 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:23:33.553 18:10:22 -- common/autotest_common.sh@1205 -- # local i=0 00:23:33.553 18:10:22 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:23:33.553 18:10:22 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK7 00:23:33.553 18:10:22 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:23:33.553 18:10:22 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK7 00:23:33.553 18:10:22 -- common/autotest_common.sh@1217 -- # return 0 00:23:33.553 18:10:22 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:23:33.553 18:10:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:33.553 18:10:22 -- common/autotest_common.sh@10 -- # set +x 00:23:33.553 18:10:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:33.553 18:10:22 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:33.553 18:10:22 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:23:33.811 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:23:33.811 18:10:22 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:23:33.811 18:10:22 -- common/autotest_common.sh@1205 -- # local i=0 00:23:33.811 18:10:22 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:23:33.811 18:10:22 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK8 00:23:33.811 18:10:22 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:23:33.811 18:10:22 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK8 00:23:33.811 18:10:22 -- common/autotest_common.sh@1217 -- # return 0 00:23:33.811 18:10:22 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:23:33.811 18:10:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:33.811 18:10:22 -- common/autotest_common.sh@10 -- # set +x 00:23:33.811 18:10:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:33.811 18:10:22 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:33.811 18:10:22 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:23:33.811 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:23:33.811 18:10:22 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:23:33.811 18:10:22 -- common/autotest_common.sh@1205 -- # local i=0 00:23:33.811 18:10:22 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:23:33.811 18:10:22 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK9 00:23:33.811 18:10:22 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:23:33.811 18:10:22 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK9 00:23:33.811 18:10:22 -- common/autotest_common.sh@1217 -- # return 0 00:23:33.811 18:10:22 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:23:33.811 18:10:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:33.811 18:10:22 -- common/autotest_common.sh@10 -- # set +x 00:23:33.811 18:10:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:33.811 18:10:22 -- target/multiconnection.sh@37 -- # for i in $(seq 1 
$NVMF_SUBSYS) 00:23:33.811 18:10:22 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:23:34.069 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:23:34.069 18:10:22 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:23:34.069 18:10:22 -- common/autotest_common.sh@1205 -- # local i=0 00:23:34.069 18:10:22 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:23:34.069 18:10:22 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK10 00:23:34.069 18:10:22 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:23:34.069 18:10:22 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK10 00:23:34.069 18:10:22 -- common/autotest_common.sh@1217 -- # return 0 00:23:34.069 18:10:22 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:23:34.069 18:10:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:34.069 18:10:22 -- common/autotest_common.sh@10 -- # set +x 00:23:34.069 18:10:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:34.069 18:10:22 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:34.069 18:10:22 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:23:34.069 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:23:34.069 18:10:22 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:23:34.069 18:10:22 -- common/autotest_common.sh@1205 -- # local i=0 00:23:34.069 18:10:22 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:23:34.069 18:10:22 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK11 00:23:34.069 18:10:22 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:23:34.069 18:10:22 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK11 00:23:34.069 18:10:22 -- common/autotest_common.sh@1217 -- # return 0 00:23:34.069 18:10:22 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:23:34.069 18:10:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:34.069 18:10:22 -- common/autotest_common.sh@10 -- # set +x 00:23:34.069 18:10:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:34.069 18:10:22 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:23:34.069 18:10:22 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:23:34.069 18:10:22 -- target/multiconnection.sh@47 -- # nvmftestfini 00:23:34.069 18:10:22 -- nvmf/common.sh@477 -- # nvmfcleanup 00:23:34.069 18:10:22 -- nvmf/common.sh@117 -- # sync 00:23:34.069 18:10:22 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:34.069 18:10:22 -- nvmf/common.sh@120 -- # set +e 00:23:34.069 18:10:22 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:34.069 18:10:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:34.069 rmmod nvme_tcp 00:23:34.069 rmmod nvme_fabrics 00:23:34.069 rmmod nvme_keyring 00:23:34.069 18:10:23 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:34.069 18:10:23 -- nvmf/common.sh@124 -- # set -e 00:23:34.069 18:10:23 -- nvmf/common.sh@125 -- # return 0 00:23:34.069 18:10:23 -- nvmf/common.sh@478 -- # '[' -n 3365734 ']' 00:23:34.069 18:10:23 -- nvmf/common.sh@479 -- # killprocess 3365734 00:23:34.069 18:10:23 -- common/autotest_common.sh@936 -- # '[' -z 3365734 ']' 00:23:34.069 18:10:23 -- common/autotest_common.sh@940 -- # kill -0 3365734 00:23:34.069 18:10:23 -- common/autotest_common.sh@941 -- # uname 00:23:34.327 18:10:23 -- 
common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:23:34.327 18:10:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3365734
00:23:34.327 18:10:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:23:34.327 18:10:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:23:34.327 18:10:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3365734'
killing process with pid 3365734
00:23:34.327 18:10:23 -- common/autotest_common.sh@955 -- # kill 3365734
00:23:34.327 18:10:23 -- common/autotest_common.sh@960 -- # wait 3365734
00:23:34.892 18:10:23 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:23:34.892 18:10:23 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
00:23:34.892 18:10:23 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
00:23:34.892 18:10:23 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:23:34.892 18:10:23 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:23:34.892 18:10:23 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:23:34.892 18:10:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:23:34.892 18:10:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:23:36.790 18:10:25 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:23:36.790 
00:23:36.790 real 1m1.529s
00:23:36.790 user 3m32.977s
00:23:36.790 sys 0m23.817s
00:23:36.790 18:10:25 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:23:36.790 18:10:25 -- common/autotest_common.sh@10 -- # set +x
00:23:36.790 ************************************
00:23:36.790 END TEST nvmf_multiconnection
00:23:36.790 ************************************
00:23:36.790 18:10:25 -- nvmf/nvmf.sh@67 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp
00:23:36.790 18:10:25 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:23:36.790 18:10:25 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:23:36.790 18:10:25 -- common/autotest_common.sh@10 -- # set +x
00:23:37.048 ************************************
00:23:37.048 START TEST nvmf_initiator_timeout
00:23:37.048 ************************************
00:23:37.048 18:10:25 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp
00:23:37.048 * Looking for test storage...
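[Editor's note] The multiconnection teardown traced above repeats one pattern for each of the 11 subsystems: disconnect the initiator side, poll lsblk until the SPDK serial disappears, then delete the subsystem over RPC. A minimal sketch of that loop, assuming SPDK's scripts/rpc.py talking to the default /var/tmp/spdk.sock; the inline polling loop is illustrative and stands in for the test's own waitforserial_disconnect helper:

for i in $(seq 1 11); do
    nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"
    # wait until no block device reports serial SPDK<i> anymore
    while lsblk -l -o NAME,SERIAL | grep -q -w "SPDK${i}"; do
        sleep 1
    done
    scripts/rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"
done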
00:23:37.048 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:37.048 18:10:25 -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:37.048 18:10:25 -- nvmf/common.sh@7 -- # uname -s 00:23:37.048 18:10:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:37.048 18:10:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:37.048 18:10:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:37.048 18:10:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:37.048 18:10:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:37.048 18:10:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:37.048 18:10:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:37.048 18:10:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:37.048 18:10:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:37.048 18:10:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:37.048 18:10:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:23:37.048 18:10:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:23:37.048 18:10:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:37.048 18:10:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:37.048 18:10:25 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:37.048 18:10:25 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:37.048 18:10:25 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:37.048 18:10:25 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:37.048 18:10:25 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:37.048 18:10:25 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:37.048 18:10:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:37.048 18:10:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:37.048 18:10:25 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:37.048 18:10:25 -- paths/export.sh@5 -- # export PATH 00:23:37.048 18:10:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:37.048 18:10:25 -- nvmf/common.sh@47 -- # : 0 00:23:37.048 18:10:25 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:37.048 18:10:25 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:37.048 18:10:25 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:37.048 18:10:25 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:37.048 18:10:25 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:37.048 18:10:25 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:37.048 18:10:25 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:37.049 18:10:25 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:37.049 18:10:25 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:37.049 18:10:25 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:37.049 18:10:25 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:23:37.049 18:10:25 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:23:37.049 18:10:25 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:37.049 18:10:25 -- nvmf/common.sh@437 -- # prepare_net_devs 00:23:37.049 18:10:25 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:23:37.049 18:10:25 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:23:37.049 18:10:25 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:37.049 18:10:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:37.049 18:10:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:37.049 18:10:25 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:23:37.049 18:10:25 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:23:37.049 18:10:25 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:37.049 18:10:25 -- common/autotest_common.sh@10 -- # set +x 00:23:39.576 18:10:28 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:39.576 18:10:28 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:39.576 18:10:28 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:39.576 18:10:28 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:39.576 18:10:28 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:39.576 18:10:28 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:39.576 18:10:28 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:39.576 18:10:28 -- nvmf/common.sh@295 -- # net_devs=() 00:23:39.576 18:10:28 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:39.577 
18:10:28 -- nvmf/common.sh@296 -- # e810=() 00:23:39.577 18:10:28 -- nvmf/common.sh@296 -- # local -ga e810 00:23:39.577 18:10:28 -- nvmf/common.sh@297 -- # x722=() 00:23:39.577 18:10:28 -- nvmf/common.sh@297 -- # local -ga x722 00:23:39.577 18:10:28 -- nvmf/common.sh@298 -- # mlx=() 00:23:39.577 18:10:28 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:39.577 18:10:28 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:39.577 18:10:28 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:39.577 18:10:28 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:39.577 18:10:28 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:39.577 18:10:28 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:39.577 18:10:28 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:39.577 18:10:28 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:39.577 18:10:28 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:39.577 18:10:28 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:39.577 18:10:28 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:39.577 18:10:28 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:39.577 18:10:28 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:39.577 18:10:28 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:39.577 18:10:28 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:39.577 18:10:28 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:39.577 18:10:28 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:39.577 18:10:28 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:39.577 18:10:28 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:39.577 18:10:28 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:23:39.577 Found 0000:84:00.0 (0x8086 - 0x159b) 00:23:39.577 18:10:28 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:39.577 18:10:28 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:39.577 18:10:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:39.577 18:10:28 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:39.577 18:10:28 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:39.577 18:10:28 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:39.577 18:10:28 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:23:39.577 Found 0000:84:00.1 (0x8086 - 0x159b) 00:23:39.577 18:10:28 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:39.577 18:10:28 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:39.577 18:10:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:39.577 18:10:28 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:39.577 18:10:28 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:39.577 18:10:28 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:39.577 18:10:28 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:39.577 18:10:28 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:39.577 18:10:28 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:39.577 18:10:28 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:39.577 18:10:28 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:39.577 18:10:28 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:39.577 18:10:28 -- nvmf/common.sh@389 -- # echo 'Found net devices under 
0000:84:00.0: cvl_0_0' 00:23:39.577 Found net devices under 0000:84:00.0: cvl_0_0 00:23:39.577 18:10:28 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:39.577 18:10:28 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:39.577 18:10:28 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:39.577 18:10:28 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:39.577 18:10:28 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:39.577 18:10:28 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:23:39.577 Found net devices under 0000:84:00.1: cvl_0_1 00:23:39.577 18:10:28 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:39.577 18:10:28 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:23:39.577 18:10:28 -- nvmf/common.sh@403 -- # is_hw=yes 00:23:39.577 18:10:28 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:23:39.577 18:10:28 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:23:39.577 18:10:28 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:23:39.577 18:10:28 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:39.577 18:10:28 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:39.577 18:10:28 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:39.577 18:10:28 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:39.577 18:10:28 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:39.577 18:10:28 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:39.577 18:10:28 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:39.577 18:10:28 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:39.577 18:10:28 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:39.577 18:10:28 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:39.577 18:10:28 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:39.577 18:10:28 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:39.577 18:10:28 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:39.577 18:10:28 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:39.577 18:10:28 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:39.577 18:10:28 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:39.577 18:10:28 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:39.577 18:10:28 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:39.577 18:10:28 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:39.577 18:10:28 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:39.577 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:39.577 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:23:39.577 00:23:39.577 --- 10.0.0.2 ping statistics --- 00:23:39.577 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:39.577 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:23:39.577 18:10:28 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:39.577 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:39.577 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:23:39.577 00:23:39.577 --- 10.0.0.1 ping statistics --- 00:23:39.577 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:39.577 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:23:39.577 18:10:28 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:39.577 18:10:28 -- nvmf/common.sh@411 -- # return 0 00:23:39.577 18:10:28 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:23:39.577 18:10:28 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:39.577 18:10:28 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:23:39.577 18:10:28 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:23:39.577 18:10:28 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:39.577 18:10:28 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:23:39.577 18:10:28 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:23:39.577 18:10:28 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:23:39.577 18:10:28 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:23:39.577 18:10:28 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:39.577 18:10:28 -- common/autotest_common.sh@10 -- # set +x 00:23:39.577 18:10:28 -- nvmf/common.sh@470 -- # nvmfpid=3374477 00:23:39.577 18:10:28 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:39.577 18:10:28 -- nvmf/common.sh@471 -- # waitforlisten 3374477 00:23:39.577 18:10:28 -- common/autotest_common.sh@817 -- # '[' -z 3374477 ']' 00:23:39.577 18:10:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:39.577 18:10:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:39.577 18:10:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:39.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:39.577 18:10:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:39.577 18:10:28 -- common/autotest_common.sh@10 -- # set +x 00:23:39.577 [2024-04-15 18:10:28.416792] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:23:39.577 [2024-04-15 18:10:28.416896] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:39.577 EAL: No free 2048 kB hugepages reported on node 1 00:23:39.577 [2024-04-15 18:10:28.500953] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:39.835 [2024-04-15 18:10:28.601630] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:39.835 [2024-04-15 18:10:28.601691] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:39.835 [2024-04-15 18:10:28.601709] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:39.835 [2024-04-15 18:10:28.601730] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:39.835 [2024-04-15 18:10:28.601743] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
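[Editor's note] The prologue traced above boils down to: run nvmf_tgt inside the cvl_0_0_ns_spdk namespace and treat it as started only once its RPC socket answers. A rough equivalent of the nvmfappstart/waitforlisten steps, assuming SPDK's scripts/rpc.py and the default /var/tmp/spdk.sock; max_retries=100 mirrors the value traced above, and polling rpc_get_methods is this sketch's choice of a cheap RPC, not necessarily what waitforlisten itself uses:

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
for ((i = 0; i < 100; i++)); do
    # any successful RPC means the target is up and listening on the socket
    if scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
        break
    fi
    sleep 0.5
done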
00:23:39.835 [2024-04-15 18:10:28.601813] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:39.835 [2024-04-15 18:10:28.603082] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:39.835 [2024-04-15 18:10:28.603130] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:39.835 [2024-04-15 18:10:28.603135] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:39.835 18:10:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:39.835 18:10:28 -- common/autotest_common.sh@850 -- # return 0 00:23:39.835 18:10:28 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:23:39.835 18:10:28 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:39.835 18:10:28 -- common/autotest_common.sh@10 -- # set +x 00:23:39.835 18:10:28 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:39.835 18:10:28 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:23:39.835 18:10:28 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:39.835 18:10:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:39.835 18:10:28 -- common/autotest_common.sh@10 -- # set +x 00:23:39.835 Malloc0 00:23:39.835 18:10:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:39.835 18:10:28 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:23:39.835 18:10:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:39.835 18:10:28 -- common/autotest_common.sh@10 -- # set +x 00:23:39.835 Delay0 00:23:39.835 18:10:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:39.835 18:10:28 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:39.835 18:10:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:39.835 18:10:28 -- common/autotest_common.sh@10 -- # set +x 00:23:40.092 [2024-04-15 18:10:28.789826] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:40.092 18:10:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:40.092 18:10:28 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:23:40.092 18:10:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:40.092 18:10:28 -- common/autotest_common.sh@10 -- # set +x 00:23:40.092 18:10:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:40.092 18:10:28 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:40.092 18:10:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:40.092 18:10:28 -- common/autotest_common.sh@10 -- # set +x 00:23:40.092 18:10:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:40.092 18:10:28 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:40.092 18:10:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:40.092 18:10:28 -- common/autotest_common.sh@10 -- # set +x 00:23:40.092 [2024-04-15 18:10:28.818098] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:40.092 18:10:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:40.092 18:10:28 -- target/initiator_timeout.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:23:40.656 18:10:29 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME
00:23:40.656 18:10:29 -- common/autotest_common.sh@1184 -- # local i=0
00:23:40.656 18:10:29 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0
00:23:40.656 18:10:29 -- common/autotest_common.sh@1186 -- # [[ -n '' ]]
00:23:40.656 18:10:29 -- common/autotest_common.sh@1191 -- # sleep 2
00:23:42.612 18:10:31 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 ))
00:23:42.612 18:10:31 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL
00:23:42.612 18:10:31 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME
00:23:42.612 18:10:31 -- common/autotest_common.sh@1193 -- # nvme_devices=1
00:23:42.612 18:10:31 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter ))
00:23:42.612 18:10:31 -- common/autotest_common.sh@1194 -- # return 0
00:23:42.612 18:10:31 -- target/initiator_timeout.sh@35 -- # fio_pid=3374816
00:23:42.612 18:10:31 -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v
00:23:42.612 18:10:31 -- target/initiator_timeout.sh@37 -- # sleep 3
00:23:42.612 [global]
00:23:42.612 thread=1
00:23:42.612 invalidate=1
00:23:42.612 rw=write
00:23:42.612 time_based=1
00:23:42.612 runtime=60
00:23:42.612 ioengine=libaio
00:23:42.612 direct=1
00:23:42.612 bs=4096
00:23:42.612 iodepth=1
00:23:42.612 norandommap=0
00:23:42.612 numjobs=1
00:23:42.612 
00:23:42.612 verify_dump=1
00:23:42.612 verify_backlog=512
00:23:42.612 verify_state_save=0
00:23:42.612 do_verify=1
00:23:42.612 verify=crc32c-intel
00:23:42.612 [job0]
00:23:42.612 filename=/dev/nvme0n1
00:23:42.612 Could not set queue depth (nvme0n1)
00:23:42.870 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:23:42.870 fio-3.35
00:23:42.870 Starting 1 thread
00:23:46.161 18:10:34 -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000
00:23:46.161 18:10:34 -- common/autotest_common.sh@549 -- # xtrace_disable
00:23:46.161 18:10:34 -- common/autotest_common.sh@10 -- # set +x
00:23:46.161 true
00:23:46.161 18:10:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:23:46.161 18:10:34 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000
00:23:46.161 18:10:34 -- common/autotest_common.sh@549 -- # xtrace_disable
00:23:46.161 18:10:34 -- common/autotest_common.sh@10 -- # set +x
00:23:46.161 true
00:23:46.161 18:10:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:23:46.161 18:10:34 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000
00:23:46.161 18:10:34 -- common/autotest_common.sh@549 -- # xtrace_disable
00:23:46.161 18:10:34 -- common/autotest_common.sh@10 -- # set +x
00:23:46.161 true
00:23:46.161 18:10:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:23:46.161 18:10:34 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000
00:23:46.161 18:10:34 -- common/autotest_common.sh@549 -- # xtrace_disable
00:23:46.161 18:10:34 -- common/autotest_common.sh@10 -- # set +x
00:23:46.161 true
00:23:46.161 18:10:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
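[Editor's note] The four rpc_cmd calls above are the heart of the test: with fio running queue-depth-1 writes against the exported Delay0 bdev, every latency class of the delay bdev is raised from the 30 us it was created with to 31000000 us (31 s; p99_write to 310000000, as traced), which, judging by the test's name, is presumably meant to outlast the initiator's default 30 s I/O timeout; a few seconds later the trace shows them dropped back to 30 us so queued I/O can drain. The same sequence as plain rpc.py calls, a sketch with latencies in microseconds (all RPC names and arguments appear verbatim elsewhere in this log):

# back the namespace with a malloc bdev wrapped in a delay bdev
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30
# push completions past the initiator's timeout while fio is running
scripts/rpc.py bdev_delay_update_latency Delay0 avg_read 31000000
scripts/rpc.py bdev_delay_update_latency Delay0 avg_write 31000000
scripts/rpc.py bdev_delay_update_latency Delay0 p99_read 31000000
scripts/rpc.py bdev_delay_update_latency Delay0 p99_write 310000000
# later: restore the original low latencies
scripts/rpc.py bdev_delay_update_latency Delay0 avg_read 30
scripts/rpc.py bdev_delay_update_latency Delay0 avg_write 30
scripts/rpc.py bdev_delay_update_latency Delay0 p99_read 30
scripts/rpc.py bdev_delay_update_latency Delay0 p99_write 30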
00:23:46.161 18:10:34 -- target/initiator_timeout.sh@45 -- # sleep 3 00:23:48.688 18:10:37 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:23:48.688 18:10:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:48.688 18:10:37 -- common/autotest_common.sh@10 -- # set +x 00:23:48.688 true 00:23:48.688 18:10:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:48.688 18:10:37 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:23:48.688 18:10:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:48.688 18:10:37 -- common/autotest_common.sh@10 -- # set +x 00:23:48.688 true 00:23:48.688 18:10:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:48.688 18:10:37 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:23:48.688 18:10:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:48.688 18:10:37 -- common/autotest_common.sh@10 -- # set +x 00:23:48.688 true 00:23:48.688 18:10:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:48.688 18:10:37 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:23:48.688 18:10:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:48.688 18:10:37 -- common/autotest_common.sh@10 -- # set +x 00:23:48.688 true 00:23:48.688 18:10:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:48.688 18:10:37 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:23:48.689 18:10:37 -- target/initiator_timeout.sh@54 -- # wait 3374816 00:24:44.894 00:24:44.894 job0: (groupid=0, jobs=1): err= 0: pid=3374893: Mon Apr 15 18:11:31 2024 00:24:44.894 read: IOPS=10, BW=41.6KiB/s (42.6kB/s)(2496KiB/60010msec) 00:24:44.894 slat (nsec): min=6138, max=52836, avg=14108.80, stdev=6849.42 00:24:44.894 clat (usec): min=313, max=41075k, avg=95549.22, stdev=1643224.01 00:24:44.894 lat (usec): min=320, max=41075k, avg=95563.33, stdev=1643224.06 00:24:44.894 clat percentiles (usec): 00:24:44.894 | 1.00th=[ 318], 5.00th=[ 326], 10.00th=[ 338], 00:24:44.894 | 20.00th=[ 383], 30.00th=[ 41157], 40.00th=[ 41157], 00:24:44.894 | 50.00th=[ 41157], 60.00th=[ 41157], 70.00th=[ 41157], 00:24:44.894 | 80.00th=[ 41157], 90.00th=[ 42206], 95.00th=[ 42206], 00:24:44.894 | 99.00th=[ 42206], 99.50th=[ 42206], 99.90th=[17112761], 00:24:44.894 | 99.95th=[17112761], 99.99th=[17112761] 00:24:44.894 write: IOPS=17, BW=68.3KiB/s (69.9kB/s)(4096KiB/60010msec); 0 zone resets 00:24:44.894 slat (usec): min=7, max=31701, avg=55.65, stdev=1069.12 00:24:44.894 clat (usec): min=218, max=791, avg=307.80, stdev=59.82 00:24:44.894 lat (usec): min=226, max=32145, avg=363.45, stdev=1076.23 00:24:44.894 clat percentiles (usec): 00:24:44.894 | 1.00th=[ 227], 5.00th=[ 239], 10.00th=[ 245], 20.00th=[ 260], 00:24:44.894 | 30.00th=[ 273], 40.00th=[ 285], 50.00th=[ 289], 60.00th=[ 306], 00:24:44.894 | 70.00th=[ 318], 80.00th=[ 375], 90.00th=[ 400], 95.00th=[ 404], 00:24:44.894 | 99.00th=[ 474], 99.50th=[ 474], 99.90th=[ 594], 99.95th=[ 791], 00:24:44.894 | 99.99th=[ 791] 00:24:44.894 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=2 00:24:44.894 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:24:44.894 lat (usec) : 250=9.22%, 500=63.05%, 750=0.36%, 1000=0.06% 00:24:44.894 lat (msec) : 50=27.25%, >=2000=0.06% 00:24:44.894 cpu : usr=0.03%, sys=0.03%, ctx=1653, majf=0, minf=2 00:24:44.894 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
>=64=0.0% 00:24:44.894 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.894 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.894 issued rwts: total=624,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:44.894 latency : target=0, window=0, percentile=100.00%, depth=1 00:24:44.894 00:24:44.894 Run status group 0 (all jobs): 00:24:44.894 READ: bw=41.6KiB/s (42.6kB/s), 41.6KiB/s-41.6KiB/s (42.6kB/s-42.6kB/s), io=2496KiB (2556kB), run=60010-60010msec 00:24:44.895 WRITE: bw=68.3KiB/s (69.9kB/s), 68.3KiB/s-68.3KiB/s (69.9kB/s-69.9kB/s), io=4096KiB (4194kB), run=60010-60010msec 00:24:44.895 00:24:44.895 Disk stats (read/write): 00:24:44.895 nvme0n1: ios=674/1024, merge=0/0, ticks=18680/298, in_queue=18978, util=99.82% 00:24:44.895 18:11:31 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:24:44.895 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:44.895 18:11:31 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:24:44.895 18:11:31 -- common/autotest_common.sh@1205 -- # local i=0 00:24:44.895 18:11:31 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:24:44.895 18:11:31 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:24:44.895 18:11:31 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:24:44.895 18:11:31 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:24:44.895 18:11:31 -- common/autotest_common.sh@1217 -- # return 0 00:24:44.895 18:11:31 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:24:44.895 18:11:31 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:24:44.895 nvmf hotplug test: fio successful as expected 00:24:44.895 18:11:31 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:44.895 18:11:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:44.895 18:11:31 -- common/autotest_common.sh@10 -- # set +x 00:24:44.895 18:11:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:44.895 18:11:31 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:24:44.895 18:11:31 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:24:44.895 18:11:31 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:24:44.895 18:11:31 -- nvmf/common.sh@477 -- # nvmfcleanup 00:24:44.895 18:11:31 -- nvmf/common.sh@117 -- # sync 00:24:44.895 18:11:31 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:44.895 18:11:31 -- nvmf/common.sh@120 -- # set +e 00:24:44.895 18:11:31 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:44.895 18:11:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:44.895 rmmod nvme_tcp 00:24:44.895 rmmod nvme_fabrics 00:24:44.895 rmmod nvme_keyring 00:24:44.895 18:11:31 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:44.895 18:11:31 -- nvmf/common.sh@124 -- # set -e 00:24:44.895 18:11:31 -- nvmf/common.sh@125 -- # return 0 00:24:44.895 18:11:31 -- nvmf/common.sh@478 -- # '[' -n 3374477 ']' 00:24:44.895 18:11:31 -- nvmf/common.sh@479 -- # killprocess 3374477 00:24:44.895 18:11:31 -- common/autotest_common.sh@936 -- # '[' -z 3374477 ']' 00:24:44.895 18:11:31 -- common/autotest_common.sh@940 -- # kill -0 3374477 00:24:44.895 18:11:31 -- common/autotest_common.sh@941 -- # uname 00:24:44.895 18:11:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:44.895 18:11:31 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3374477 00:24:44.895 18:11:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:44.895 18:11:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:44.895 18:11:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3374477' 00:24:44.895 killing process with pid 3374477 00:24:44.895 18:11:31 -- common/autotest_common.sh@955 -- # kill 3374477 00:24:44.895 18:11:31 -- common/autotest_common.sh@960 -- # wait 3374477 00:24:44.895 18:11:32 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:24:44.895 18:11:32 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:24:44.895 18:11:32 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:24:44.895 18:11:32 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:44.895 18:11:32 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:44.895 18:11:32 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:44.895 18:11:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:44.895 18:11:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:45.462 18:11:34 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:45.462 00:24:45.462 real 1m8.438s 00:24:45.462 user 4m10.803s 00:24:45.462 sys 0m6.609s 00:24:45.462 18:11:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:45.463 18:11:34 -- common/autotest_common.sh@10 -- # set +x 00:24:45.463 ************************************ 00:24:45.463 END TEST nvmf_initiator_timeout 00:24:45.463 ************************************ 00:24:45.463 18:11:34 -- nvmf/nvmf.sh@70 -- # [[ phy == phy ]] 00:24:45.463 18:11:34 -- nvmf/nvmf.sh@71 -- # '[' tcp = tcp ']' 00:24:45.463 18:11:34 -- nvmf/nvmf.sh@72 -- # gather_supported_nvmf_pci_devs 00:24:45.463 18:11:34 -- nvmf/common.sh@285 -- # xtrace_disable 00:24:45.463 18:11:34 -- common/autotest_common.sh@10 -- # set +x 00:24:47.994 18:11:36 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:47.994 18:11:36 -- nvmf/common.sh@291 -- # pci_devs=() 00:24:47.994 18:11:36 -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:47.994 18:11:36 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:47.994 18:11:36 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:47.994 18:11:36 -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:47.994 18:11:36 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:47.994 18:11:36 -- nvmf/common.sh@295 -- # net_devs=() 00:24:47.994 18:11:36 -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:47.994 18:11:36 -- nvmf/common.sh@296 -- # e810=() 00:24:47.994 18:11:36 -- nvmf/common.sh@296 -- # local -ga e810 00:24:47.994 18:11:36 -- nvmf/common.sh@297 -- # x722=() 00:24:47.994 18:11:36 -- nvmf/common.sh@297 -- # local -ga x722 00:24:47.994 18:11:36 -- nvmf/common.sh@298 -- # mlx=() 00:24:47.994 18:11:36 -- nvmf/common.sh@298 -- # local -ga mlx 00:24:47.994 18:11:36 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:47.994 18:11:36 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:47.994 18:11:36 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:47.995 18:11:36 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:47.995 18:11:36 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:47.995 18:11:36 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:47.995 18:11:36 -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:47.995 18:11:36 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:47.995 18:11:36 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:47.995 18:11:36 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:47.995 18:11:36 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:47.995 18:11:36 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:47.995 18:11:36 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:47.995 18:11:36 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:47.995 18:11:36 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:47.995 18:11:36 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:47.995 18:11:36 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:47.995 18:11:36 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:47.995 18:11:36 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:24:47.995 Found 0000:84:00.0 (0x8086 - 0x159b) 00:24:47.995 18:11:36 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:47.995 18:11:36 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:47.995 18:11:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:47.995 18:11:36 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:47.995 18:11:36 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:47.995 18:11:36 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:47.995 18:11:36 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:24:47.995 Found 0000:84:00.1 (0x8086 - 0x159b) 00:24:47.995 18:11:36 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:47.995 18:11:36 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:47.995 18:11:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:47.995 18:11:36 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:47.995 18:11:36 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:47.995 18:11:36 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:47.995 18:11:36 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:47.995 18:11:36 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:47.995 18:11:36 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:47.995 18:11:36 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:47.995 18:11:36 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:47.995 18:11:36 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:47.995 18:11:36 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:24:47.995 Found net devices under 0000:84:00.0: cvl_0_0 00:24:47.995 18:11:36 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:47.995 18:11:36 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:47.995 18:11:36 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:47.995 18:11:36 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:47.995 18:11:36 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:47.995 18:11:36 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:24:47.995 Found net devices under 0000:84:00.1: cvl_0_1 00:24:47.995 18:11:36 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:47.995 18:11:36 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:24:47.995 18:11:36 -- nvmf/nvmf.sh@73 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:47.995 18:11:36 -- nvmf/nvmf.sh@74 -- # (( 2 > 0 )) 00:24:47.995 18:11:36 
-- nvmf/nvmf.sh@75 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:24:47.995 18:11:36 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:47.995 18:11:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:47.995 18:11:36 -- common/autotest_common.sh@10 -- # set +x 00:24:47.995 ************************************ 00:24:47.995 START TEST nvmf_perf_adq 00:24:47.995 ************************************ 00:24:47.995 18:11:36 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:24:47.995 * Looking for test storage... 00:24:47.995 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:47.995 18:11:36 -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:47.995 18:11:36 -- nvmf/common.sh@7 -- # uname -s 00:24:47.995 18:11:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:47.995 18:11:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:47.995 18:11:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:47.995 18:11:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:47.995 18:11:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:47.995 18:11:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:47.995 18:11:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:47.995 18:11:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:47.995 18:11:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:47.995 18:11:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:47.995 18:11:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:24:47.995 18:11:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:24:47.995 18:11:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:47.995 18:11:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:47.995 18:11:36 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:47.995 18:11:36 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:47.995 18:11:36 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:47.995 18:11:36 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:47.995 18:11:36 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:47.995 18:11:36 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:47.995 18:11:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:47.995 18:11:36 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:47.995 18:11:36 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:47.995 18:11:36 -- paths/export.sh@5 -- # export PATH 00:24:47.995 18:11:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:47.995 18:11:36 -- nvmf/common.sh@47 -- # : 0 00:24:47.995 18:11:36 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:47.995 18:11:36 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:47.995 18:11:36 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:47.995 18:11:36 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:47.995 18:11:36 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:47.995 18:11:36 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:47.995 18:11:36 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:47.995 18:11:36 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:47.995 18:11:36 -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:24:47.995 18:11:36 -- nvmf/common.sh@285 -- # xtrace_disable 00:24:47.995 18:11:36 -- common/autotest_common.sh@10 -- # set +x 00:24:50.527 18:11:39 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:50.527 18:11:39 -- nvmf/common.sh@291 -- # pci_devs=() 00:24:50.527 18:11:39 -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:50.527 18:11:39 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:50.527 18:11:39 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:50.527 18:11:39 -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:50.527 18:11:39 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:50.527 18:11:39 -- nvmf/common.sh@295 -- # net_devs=() 00:24:50.527 18:11:39 -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:50.527 18:11:39 -- nvmf/common.sh@296 -- # e810=() 00:24:50.527 18:11:39 -- nvmf/common.sh@296 -- # local -ga e810 00:24:50.527 18:11:39 -- nvmf/common.sh@297 -- # x722=() 00:24:50.527 18:11:39 -- nvmf/common.sh@297 -- # local -ga x722 00:24:50.527 18:11:39 -- nvmf/common.sh@298 -- # mlx=() 00:24:50.527 18:11:39 -- nvmf/common.sh@298 -- # local 
-ga mlx 00:24:50.527 18:11:39 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:50.527 18:11:39 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:50.527 18:11:39 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:50.527 18:11:39 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:50.527 18:11:39 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:50.527 18:11:39 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:50.527 18:11:39 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:50.527 18:11:39 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:50.527 18:11:39 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:50.527 18:11:39 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:50.527 18:11:39 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:50.527 18:11:39 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:50.527 18:11:39 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:50.527 18:11:39 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:50.527 18:11:39 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:50.527 18:11:39 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:50.527 18:11:39 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:50.527 18:11:39 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:50.527 18:11:39 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:24:50.527 Found 0000:84:00.0 (0x8086 - 0x159b) 00:24:50.527 18:11:39 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:50.527 18:11:39 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:50.527 18:11:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:50.527 18:11:39 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:50.527 18:11:39 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:50.527 18:11:39 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:50.527 18:11:39 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:24:50.527 Found 0000:84:00.1 (0x8086 - 0x159b) 00:24:50.527 18:11:39 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:50.527 18:11:39 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:50.527 18:11:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:50.527 18:11:39 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:50.527 18:11:39 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:50.527 18:11:39 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:50.527 18:11:39 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:50.527 18:11:39 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:50.527 18:11:39 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:50.527 18:11:39 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:50.527 18:11:39 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:50.527 18:11:39 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:50.527 18:11:39 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:24:50.527 Found net devices under 0000:84:00.0: cvl_0_0 00:24:50.527 18:11:39 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:50.527 18:11:39 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:50.527 18:11:39 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
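The block above (and its earlier twin) is gather_supported_nvmf_pci_devs from nvmf/common.sh: it bins known vendor:device PCI IDs into per-family arrays (e810, x722, mlx), selects the e810 family since ADQ requires the ice driver, and resolves each PCI function to its net device through sysfs. A condensed sketch of that pattern, assuming the harness has already populated pci_bus_cache with "vendor:device" -> PCI-address entries:

    intel=0x8086 mellanox=0x15b3
    e810+=(${pci_bus_cache["$intel:0x1592"]})    # E810-C
    e810+=(${pci_bus_cache["$intel:0x159b"]})    # E810-XXV: the 0000:84:00.x pair found here
    x722+=(${pci_bus_cache["$intel:0x37d2"]})
    pci_devs=("${e810[@]}")                      # e810 wins for tcp + ADQ
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)    # e.g. .../net/cvl_0_0
        net_devs+=("${pci_net_devs[@]##*/}")                # strip path, keep interface name
    done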
00:24:50.527 18:11:39 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:50.527 18:11:39 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:50.527 18:11:39 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:24:50.527 Found net devices under 0000:84:00.1: cvl_0_1 00:24:50.527 18:11:39 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:50.527 18:11:39 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:24:50.527 18:11:39 -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:50.527 18:11:39 -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:24:50.527 18:11:39 -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:24:50.527 18:11:39 -- target/perf_adq.sh@59 -- # adq_reload_driver 00:24:50.527 18:11:39 -- target/perf_adq.sh@52 -- # rmmod ice 00:24:50.786 18:11:39 -- target/perf_adq.sh@53 -- # modprobe ice 00:24:52.728 18:11:41 -- target/perf_adq.sh@54 -- # sleep 5 00:24:57.996 18:11:46 -- target/perf_adq.sh@67 -- # nvmftestinit 00:24:57.996 18:11:46 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:24:57.996 18:11:46 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:57.996 18:11:46 -- nvmf/common.sh@437 -- # prepare_net_devs 00:24:57.996 18:11:46 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:24:57.996 18:11:46 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:24:57.996 18:11:46 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:57.996 18:11:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:57.996 18:11:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:57.996 18:11:46 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:24:57.996 18:11:46 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:24:57.996 18:11:46 -- nvmf/common.sh@285 -- # xtrace_disable 00:24:57.996 18:11:46 -- common/autotest_common.sh@10 -- # set +x 00:24:57.996 18:11:46 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:57.996 18:11:46 -- nvmf/common.sh@291 -- # pci_devs=() 00:24:57.996 18:11:46 -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:57.996 18:11:46 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:57.996 18:11:46 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:57.996 18:11:46 -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:57.996 18:11:46 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:57.996 18:11:46 -- nvmf/common.sh@295 -- # net_devs=() 00:24:57.996 18:11:46 -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:57.996 18:11:46 -- nvmf/common.sh@296 -- # e810=() 00:24:57.996 18:11:46 -- nvmf/common.sh@296 -- # local -ga e810 00:24:57.996 18:11:46 -- nvmf/common.sh@297 -- # x722=() 00:24:57.996 18:11:46 -- nvmf/common.sh@297 -- # local -ga x722 00:24:57.996 18:11:46 -- nvmf/common.sh@298 -- # mlx=() 00:24:57.996 18:11:46 -- nvmf/common.sh@298 -- # local -ga mlx 00:24:57.996 18:11:46 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:57.996 18:11:46 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:57.996 18:11:46 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:57.996 18:11:46 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:57.996 18:11:46 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:57.996 18:11:46 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:57.996 18:11:46 -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:57.996 18:11:46 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:57.996 18:11:46 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:57.996 18:11:46 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:57.996 18:11:46 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:57.996 18:11:46 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:57.996 18:11:46 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:57.996 18:11:46 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:57.996 18:11:46 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:57.996 18:11:46 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:57.996 18:11:46 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:57.996 18:11:46 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:57.996 18:11:46 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:24:57.996 Found 0000:84:00.0 (0x8086 - 0x159b) 00:24:57.996 18:11:46 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:57.996 18:11:46 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:57.996 18:11:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:57.996 18:11:46 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:57.996 18:11:46 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:57.996 18:11:46 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:57.997 18:11:46 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:24:57.997 Found 0000:84:00.1 (0x8086 - 0x159b) 00:24:57.997 18:11:46 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:57.997 18:11:46 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:57.997 18:11:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:57.997 18:11:46 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:57.997 18:11:46 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:57.997 18:11:46 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:57.997 18:11:46 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:57.997 18:11:46 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:57.997 18:11:46 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:57.997 18:11:46 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:57.997 18:11:46 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:57.997 18:11:46 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:57.997 18:11:46 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:24:57.997 Found net devices under 0000:84:00.0: cvl_0_0 00:24:57.997 18:11:46 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:57.997 18:11:46 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:57.997 18:11:46 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:57.997 18:11:46 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:57.997 18:11:46 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:57.997 18:11:46 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:24:57.997 Found net devices under 0000:84:00.1: cvl_0_1 00:24:57.997 18:11:46 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:57.997 18:11:46 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:24:57.997 18:11:46 -- nvmf/common.sh@403 -- # is_hw=yes 00:24:57.997 18:11:46 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:24:57.997 18:11:46 -- 
nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:24:57.997 18:11:46 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:24:57.997 18:11:46 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:57.997 18:11:46 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:57.997 18:11:46 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:57.997 18:11:46 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:57.997 18:11:46 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:57.997 18:11:46 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:57.997 18:11:46 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:57.997 18:11:46 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:57.997 18:11:46 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:57.997 18:11:46 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:57.997 18:11:46 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:57.997 18:11:46 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:57.997 18:11:46 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:57.997 18:11:46 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:57.997 18:11:46 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:57.997 18:11:46 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:57.997 18:11:46 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:57.997 18:11:46 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:57.997 18:11:46 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:57.997 18:11:46 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:57.997 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:57.997 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.211 ms 00:24:57.997 00:24:57.997 --- 10.0.0.2 ping statistics --- 00:24:57.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:57.997 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:24:57.997 18:11:46 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:57.997 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:57.997 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:24:57.997 00:24:57.997 --- 10.0.0.1 ping statistics --- 00:24:57.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:57.997 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:24:57.997 18:11:46 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:57.997 18:11:46 -- nvmf/common.sh@411 -- # return 0 00:24:57.997 18:11:46 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:24:57.997 18:11:46 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:57.997 18:11:46 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:24:57.997 18:11:46 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:24:57.997 18:11:46 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:57.997 18:11:46 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:24:57.997 18:11:46 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:24:57.997 18:11:46 -- target/perf_adq.sh@68 -- # nvmfappstart -m 0xF --wait-for-rpc 00:24:57.997 18:11:46 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:24:57.997 18:11:46 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:57.997 18:11:46 -- common/autotest_common.sh@10 -- # set +x 00:24:57.997 18:11:46 -- nvmf/common.sh@470 -- # nvmfpid=3386433 00:24:57.997 18:11:46 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:24:57.997 18:11:46 -- nvmf/common.sh@471 -- # waitforlisten 3386433 00:24:57.997 18:11:46 -- common/autotest_common.sh@817 -- # '[' -z 3386433 ']' 00:24:57.997 18:11:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:57.997 18:11:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:57.997 18:11:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:57.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:57.997 18:11:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:57.997 18:11:46 -- common/autotest_common.sh@10 -- # set +x 00:24:57.997 [2024-04-15 18:11:46.444375] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:24:57.997 [2024-04-15 18:11:46.444479] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:57.997 EAL: No free 2048 kB hugepages reported on node 1 00:24:57.997 [2024-04-15 18:11:46.528242] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:57.997 [2024-04-15 18:11:46.627144] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:57.997 [2024-04-15 18:11:46.627214] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:57.997 [2024-04-15 18:11:46.627232] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:57.997 [2024-04-15 18:11:46.627246] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:57.997 [2024-04-15 18:11:46.627258] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
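Before nvmf_tgt starts, nvmf_tcp_init (above) splits the two E810 ports across network namespaces so initiator and target traffic leave the host stack (the two ports are presumably cabled back-to-back on this phy rig). The plumbing, reconstructed from the commands in the log, with the interface names discovered above:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk      # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1            # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                             # both pings above succeed before continuing
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The target is then launched under `ip netns exec cvl_0_0_ns_spdk`, which is why the nvmfappstart line above carries the netns prefix.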
00:24:57.997 [2024-04-15 18:11:46.627342] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:57.997 [2024-04-15 18:11:46.627399] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:57.997 [2024-04-15 18:11:46.627449] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:57.997 [2024-04-15 18:11:46.627451] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:57.997 18:11:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:57.997 18:11:46 -- common/autotest_common.sh@850 -- # return 0 00:24:57.997 18:11:46 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:24:57.997 18:11:46 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:57.997 18:11:46 -- common/autotest_common.sh@10 -- # set +x 00:24:57.997 18:11:46 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:57.997 18:11:46 -- target/perf_adq.sh@69 -- # adq_configure_nvmf_target 0 00:24:57.997 18:11:46 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:24:57.997 18:11:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:57.997 18:11:46 -- common/autotest_common.sh@10 -- # set +x 00:24:57.997 18:11:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:57.997 18:11:46 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:24:57.997 18:11:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:57.997 18:11:46 -- common/autotest_common.sh@10 -- # set +x 00:24:57.997 18:11:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:57.997 18:11:46 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:24:57.997 18:11:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:57.997 18:11:46 -- common/autotest_common.sh@10 -- # set +x 00:24:57.997 [2024-04-15 18:11:46.859128] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:57.997 18:11:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:57.997 18:11:46 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:57.997 18:11:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:57.997 18:11:46 -- common/autotest_common.sh@10 -- # set +x 00:24:57.997 Malloc1 00:24:57.997 18:11:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:57.997 18:11:46 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:57.997 18:11:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:57.998 18:11:46 -- common/autotest_common.sh@10 -- # set +x 00:24:57.998 18:11:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:57.998 18:11:46 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:57.998 18:11:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:57.998 18:11:46 -- common/autotest_common.sh@10 -- # set +x 00:24:57.998 18:11:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:57.998 18:11:46 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:57.998 18:11:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:57.998 18:11:46 -- common/autotest_common.sh@10 -- # set +x 00:24:57.998 [2024-04-15 18:11:46.912383] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:57.998 18:11:46 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:57.998 18:11:46 -- target/perf_adq.sh@73 -- # perfpid=3386474 00:24:57.998 18:11:46 -- target/perf_adq.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:24:57.998 18:11:46 -- target/perf_adq.sh@74 -- # sleep 2 00:24:57.998 EAL: No free 2048 kB hugepages reported on node 1 00:25:00.533 18:11:48 -- target/perf_adq.sh@76 -- # rpc_cmd nvmf_get_stats 00:25:00.533 18:11:48 -- target/perf_adq.sh@76 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:25:00.533 18:11:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:00.533 18:11:48 -- common/autotest_common.sh@10 -- # set +x 00:25:00.533 18:11:48 -- target/perf_adq.sh@76 -- # wc -l 00:25:00.533 18:11:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:00.533 18:11:48 -- target/perf_adq.sh@76 -- # count=4 00:25:00.533 18:11:48 -- target/perf_adq.sh@77 -- # [[ 4 -ne 4 ]] 00:25:00.533 18:11:48 -- target/perf_adq.sh@81 -- # wait 3386474 00:25:08.646 Initializing NVMe Controllers 00:25:08.646 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:08.646 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:25:08.646 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:25:08.646 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:25:08.646 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:25:08.646 Initialization complete. Launching workers. 00:25:08.646 ======================================================== 00:25:08.646 Latency(us) 00:25:08.646 Device Information : IOPS MiB/s Average min max 00:25:08.646 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 9731.10 38.01 6578.24 2866.33 9435.85 00:25:08.646 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 9839.20 38.43 6506.55 2170.86 9918.34 00:25:08.647 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10130.50 39.57 6319.81 2320.95 8483.38 00:25:08.647 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 9750.60 38.09 6563.92 2128.18 10774.37 00:25:08.647 ======================================================== 00:25:08.647 Total : 39451.39 154.11 6490.46 2128.18 10774.37 00:25:08.647 00:25:08.647 18:11:57 -- target/perf_adq.sh@82 -- # nvmftestfini 00:25:08.647 18:11:57 -- nvmf/common.sh@477 -- # nvmfcleanup 00:25:08.647 18:11:57 -- nvmf/common.sh@117 -- # sync 00:25:08.647 18:11:57 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:08.647 18:11:57 -- nvmf/common.sh@120 -- # set +e 00:25:08.647 18:11:57 -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:08.647 18:11:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:08.647 rmmod nvme_tcp 00:25:08.647 rmmod nvme_fabrics 00:25:08.647 rmmod nvme_keyring 00:25:08.647 18:11:57 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:08.647 18:11:57 -- nvmf/common.sh@124 -- # set -e 00:25:08.647 18:11:57 -- nvmf/common.sh@125 -- # return 0 00:25:08.647 18:11:57 -- nvmf/common.sh@478 -- # '[' -n 3386433 ']' 00:25:08.647 18:11:57 -- nvmf/common.sh@479 -- # killprocess 3386433 00:25:08.647 18:11:57 -- common/autotest_common.sh@936 -- # '[' -z 3386433 ']' 00:25:08.647 18:11:57 -- common/autotest_common.sh@940 -- # 
kill -0 3386433 00:25:08.647 18:11:57 -- common/autotest_common.sh@941 -- # uname 00:25:08.647 18:11:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:08.647 18:11:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3386433 00:25:08.647 18:11:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:08.647 18:11:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:08.647 18:11:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3386433' 00:25:08.647 killing process with pid 3386433 00:25:08.647 18:11:57 -- common/autotest_common.sh@955 -- # kill 3386433 00:25:08.647 18:11:57 -- common/autotest_common.sh@960 -- # wait 3386433 00:25:08.647 18:11:57 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:25:08.647 18:11:57 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:25:08.647 18:11:57 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:25:08.647 18:11:57 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:08.647 18:11:57 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:08.647 18:11:57 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:08.647 18:11:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:08.647 18:11:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:10.549 18:11:59 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:10.549 18:11:59 -- target/perf_adq.sh@84 -- # adq_reload_driver 00:25:10.549 18:11:59 -- target/perf_adq.sh@52 -- # rmmod ice 00:25:11.485 18:12:00 -- target/perf_adq.sh@53 -- # modprobe ice 00:25:12.863 18:12:01 -- target/perf_adq.sh@54 -- # sleep 5 00:25:18.134 18:12:06 -- target/perf_adq.sh@87 -- # nvmftestinit 00:25:18.134 18:12:06 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:25:18.134 18:12:06 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:18.134 18:12:06 -- nvmf/common.sh@437 -- # prepare_net_devs 00:25:18.134 18:12:06 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:25:18.134 18:12:06 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:25:18.134 18:12:06 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:18.134 18:12:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:18.134 18:12:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:18.134 18:12:06 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:25:18.134 18:12:06 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:25:18.134 18:12:06 -- nvmf/common.sh@285 -- # xtrace_disable 00:25:18.134 18:12:06 -- common/autotest_common.sh@10 -- # set +x 00:25:18.134 18:12:06 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:18.134 18:12:06 -- nvmf/common.sh@291 -- # pci_devs=() 00:25:18.134 18:12:06 -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:18.134 18:12:06 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:18.134 18:12:06 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:18.134 18:12:06 -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:18.134 18:12:06 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:18.134 18:12:06 -- nvmf/common.sh@295 -- # net_devs=() 00:25:18.134 18:12:06 -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:18.134 18:12:06 -- nvmf/common.sh@296 -- # e810=() 00:25:18.134 18:12:06 -- nvmf/common.sh@296 -- # local -ga e810 00:25:18.134 18:12:06 -- nvmf/common.sh@297 -- # x722=() 00:25:18.134 18:12:06 -- nvmf/common.sh@297 -- # local -ga x722 00:25:18.134 18:12:06 -- nvmf/common.sh@298 -- # mlx=() 00:25:18.134 18:12:06 -- 
nvmf/common.sh@298 -- # local -ga mlx 00:25:18.134 18:12:06 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:18.134 18:12:06 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:18.134 18:12:06 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:18.134 18:12:06 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:18.134 18:12:06 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:18.134 18:12:06 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:18.134 18:12:06 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:18.134 18:12:06 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:18.134 18:12:06 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:18.134 18:12:06 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:18.134 18:12:06 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:18.134 18:12:06 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:18.134 18:12:06 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:18.134 18:12:06 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:18.134 18:12:06 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:18.134 18:12:06 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:18.134 18:12:06 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:18.134 18:12:06 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:18.134 18:12:06 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:25:18.134 Found 0000:84:00.0 (0x8086 - 0x159b) 00:25:18.134 18:12:06 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:18.134 18:12:06 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:18.134 18:12:06 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:18.134 18:12:06 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:18.134 18:12:06 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:18.134 18:12:06 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:18.134 18:12:06 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:25:18.134 Found 0000:84:00.1 (0x8086 - 0x159b) 00:25:18.134 18:12:06 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:18.134 18:12:06 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:18.134 18:12:06 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:18.134 18:12:06 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:18.134 18:12:06 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:18.134 18:12:06 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:18.134 18:12:06 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:18.134 18:12:06 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:18.134 18:12:06 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:18.134 18:12:06 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:18.134 18:12:06 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:25:18.134 18:12:06 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:18.134 18:12:06 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:25:18.134 Found net devices under 0000:84:00.0: cvl_0_0 00:25:18.134 18:12:06 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:25:18.134 18:12:06 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:18.134 18:12:06 -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:18.134 18:12:06 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:25:18.134 18:12:06 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:18.134 18:12:06 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:25:18.134 Found net devices under 0000:84:00.1: cvl_0_1 00:25:18.134 18:12:06 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:25:18.134 18:12:06 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:25:18.134 18:12:06 -- nvmf/common.sh@403 -- # is_hw=yes 00:25:18.134 18:12:06 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:25:18.134 18:12:06 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:25:18.134 18:12:06 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:25:18.134 18:12:06 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:18.135 18:12:06 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:18.135 18:12:06 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:18.135 18:12:06 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:18.135 18:12:06 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:18.135 18:12:06 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:18.135 18:12:06 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:18.135 18:12:06 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:18.135 18:12:06 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:18.135 18:12:06 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:18.135 18:12:06 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:18.135 18:12:06 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:18.135 18:12:06 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:18.135 18:12:06 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:18.135 18:12:06 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:18.135 18:12:06 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:18.135 18:12:06 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:18.135 18:12:06 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:18.135 18:12:06 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:18.135 18:12:06 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:18.135 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:18.135 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.279 ms 00:25:18.135 00:25:18.135 --- 10.0.0.2 ping statistics --- 00:25:18.135 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:18.135 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:25:18.135 18:12:06 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:18.135 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:18.135 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.071 ms 00:25:18.135 00:25:18.135 --- 10.0.0.1 ping statistics --- 00:25:18.135 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:18.135 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:25:18.135 18:12:06 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:18.135 18:12:06 -- nvmf/common.sh@411 -- # return 0 00:25:18.135 18:12:06 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:25:18.135 18:12:06 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:18.135 18:12:06 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:25:18.135 18:12:06 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:25:18.135 18:12:06 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:18.135 18:12:06 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:25:18.135 18:12:06 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:25:18.135 18:12:06 -- target/perf_adq.sh@88 -- # adq_configure_driver 00:25:18.135 18:12:06 -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:25:18.135 18:12:06 -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:25:18.135 18:12:06 -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:25:18.135 net.core.busy_poll = 1 00:25:18.135 18:12:06 -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:25:18.135 net.core.busy_read = 1 00:25:18.135 18:12:06 -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:25:18.135 18:12:06 -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:25:18.135 18:12:06 -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:25:18.135 18:12:06 -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:25:18.135 18:12:06 -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:25:18.135 18:12:06 -- target/perf_adq.sh@89 -- # nvmfappstart -m 0xF --wait-for-rpc 00:25:18.135 18:12:06 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:25:18.135 18:12:06 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:18.135 18:12:06 -- common/autotest_common.sh@10 -- # set +x 00:25:18.135 18:12:06 -- nvmf/common.sh@470 -- # nvmfpid=3389173 00:25:18.135 18:12:06 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:25:18.135 18:12:06 -- nvmf/common.sh@471 -- # waitforlisten 3389173 00:25:18.135 18:12:06 -- common/autotest_common.sh@817 -- # '[' -z 3389173 ']' 00:25:18.135 18:12:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:18.135 18:12:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:18.135 18:12:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:18.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
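This second pass is the actual ADQ setup: adq_configure_driver (above) enables hardware TC offload on the E810 port, turns on kernel busy polling, and uses an mqprio qdisc plus a hardware-only flower filter to steer NVMe/TCP traffic (TCP dst port 4420) into a dedicated traffic-class queue set. Condensed from the log, with device and addresses as shown; the ethtool/tc commands run inside the target namespace exactly as the harness does:

    ethtool --offload cvl_0_0 hw-tc-offload on
    ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
    sysctl -w net.core.busy_poll=1
    sysctl -w net.core.busy_read=1
    # TC 0 = queues 0-1 (default), TC 1 = queues 2-3 (ADQ set), offloaded in channel mode
    tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    tc qdisc add dev cvl_0_0 ingress
    tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
        dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

The scripts/perf/nvmf/set_xps_rxqs call that follows pins XPS so transmit queues track the same per-queue mapping.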
00:25:18.135 18:12:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:18.135 18:12:06 -- common/autotest_common.sh@10 -- # set +x 00:25:18.135 [2024-04-15 18:12:07.005928] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:25:18.135 [2024-04-15 18:12:07.006020] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:18.135 EAL: No free 2048 kB hugepages reported on node 1 00:25:18.135 [2024-04-15 18:12:07.084336] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:18.393 [2024-04-15 18:12:07.182120] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:18.393 [2024-04-15 18:12:07.182189] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:18.393 [2024-04-15 18:12:07.182207] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:18.393 [2024-04-15 18:12:07.182221] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:18.393 [2024-04-15 18:12:07.182235] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:18.393 [2024-04-15 18:12:07.182297] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:18.393 [2024-04-15 18:12:07.182352] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:18.393 [2024-04-15 18:12:07.182405] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:18.393 [2024-04-15 18:12:07.182408] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:18.393 18:12:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:18.393 18:12:07 -- common/autotest_common.sh@850 -- # return 0 00:25:18.393 18:12:07 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:25:18.393 18:12:07 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:18.393 18:12:07 -- common/autotest_common.sh@10 -- # set +x 00:25:18.393 18:12:07 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:18.393 18:12:07 -- target/perf_adq.sh@90 -- # adq_configure_nvmf_target 1 00:25:18.393 18:12:07 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:25:18.393 18:12:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:18.393 18:12:07 -- common/autotest_common.sh@10 -- # set +x 00:25:18.393 18:12:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:18.393 18:12:07 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:25:18.393 18:12:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:18.393 18:12:07 -- common/autotest_common.sh@10 -- # set +x 00:25:18.652 18:12:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:18.652 18:12:07 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:25:18.652 18:12:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:18.652 18:12:07 -- common/autotest_common.sh@10 -- # set +x 00:25:18.652 [2024-04-15 18:12:07.413195] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:18.652 18:12:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:18.652 18:12:07 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 
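On the target side, adq_configure_nvmf_target 1 flips the posix socket layer to placement-id mode and gives the TCP transport a socket priority so incoming connections land on the queue set carved out above; the rest rebuilds the same Malloc1/cnode1 subsystem as the baseline run. The same sequence sketched via scripts/rpc.py (flags copied from the log), followed by the initiator-side perf command the test launches:

    scripts/rpc.py sock_impl_set_options --enable-placement-id 1 \
        --enable-zerocopy-send-server -i posix
    scripts/rpc.py framework_start_init
    scripts/rpc.py nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # initiator side, 4 cores (mask 0xF0), 10 s of 4 KiB random reads at QD 64:
    build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

Note the ordering constraint visible in the log: sock_impl_set_options must run before framework initialization, which is why nvmf_tgt is started with --wait-for-rpc and framework_start_init is issued explicitly.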
00:25:18.652 18:12:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:18.652 18:12:07 -- common/autotest_common.sh@10 -- # set +x 00:25:18.652 Malloc1 00:25:18.652 18:12:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:18.652 18:12:07 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:18.652 18:12:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:18.652 18:12:07 -- common/autotest_common.sh@10 -- # set +x 00:25:18.652 18:12:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:18.652 18:12:07 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:18.652 18:12:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:18.652 18:12:07 -- common/autotest_common.sh@10 -- # set +x 00:25:18.652 18:12:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:18.652 18:12:07 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:18.652 18:12:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:18.652 18:12:07 -- common/autotest_common.sh@10 -- # set +x 00:25:18.652 [2024-04-15 18:12:07.465657] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:18.652 18:12:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:18.652 18:12:07 -- target/perf_adq.sh@94 -- # perfpid=3389206 00:25:18.652 18:12:07 -- target/perf_adq.sh@95 -- # sleep 2 00:25:18.652 18:12:07 -- target/perf_adq.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:18.652 EAL: No free 2048 kB hugepages reported on node 1 00:25:20.586 18:12:09 -- target/perf_adq.sh@97 -- # rpc_cmd nvmf_get_stats 00:25:20.586 18:12:09 -- target/perf_adq.sh@97 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:25:20.586 18:12:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:20.586 18:12:09 -- target/perf_adq.sh@97 -- # wc -l 00:25:20.586 18:12:09 -- common/autotest_common.sh@10 -- # set +x 00:25:20.586 18:12:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:20.586 18:12:09 -- target/perf_adq.sh@97 -- # count=2 00:25:20.586 18:12:09 -- target/perf_adq.sh@98 -- # [[ 2 -lt 2 ]] 00:25:20.586 18:12:09 -- target/perf_adq.sh@103 -- # wait 3389206 00:25:28.691 Initializing NVMe Controllers 00:25:28.691 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:28.691 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:25:28.691 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:25:28.691 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:25:28.691 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:25:28.691 Initialization complete. Launching workers. 
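For reference, rpc_cmd above is the harness's wrapper around SPDK's scripts/rpc.py; against the default /var/tmp/spdk.sock the same target bring-up is:

# Socket-layer options must be set before framework init (hence --wait-for-rpc).
scripts/rpc.py sock_impl_set_options -i posix --enable-placement-id 1 --enable-zerocopy-send-server
scripts/rpc.py framework_start_init
# -o is carried in from NVMF_TRANSPORT_OPTS; 8 KiB IO units, socket priority 1.
scripts/rpc.py nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1          # 64 MiB, 512 B blocks
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420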
00:25:28.691 ======================================================== 00:25:28.691 Latency(us) 00:25:28.691 Device Information : IOPS MiB/s Average min max 00:25:28.691 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 7345.80 28.69 8731.85 1265.45 53514.40 00:25:28.691 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 6239.50 24.37 10261.42 1491.48 56668.73 00:25:28.691 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 6902.40 26.96 9275.82 1695.90 54606.93 00:25:28.691 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 6060.40 23.67 10595.59 1716.41 58086.84 00:25:28.691 ======================================================== 00:25:28.691 Total : 26548.09 103.70 9658.22 1265.45 58086.84 00:25:28.691 00:25:28.949 18:12:17 -- target/perf_adq.sh@104 -- # nvmftestfini 00:25:28.949 18:12:17 -- nvmf/common.sh@477 -- # nvmfcleanup 00:25:28.949 18:12:17 -- nvmf/common.sh@117 -- # sync 00:25:28.949 18:12:17 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:28.949 18:12:17 -- nvmf/common.sh@120 -- # set +e 00:25:28.949 18:12:17 -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:28.949 18:12:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:28.949 rmmod nvme_tcp 00:25:28.949 rmmod nvme_fabrics 00:25:28.949 rmmod nvme_keyring 00:25:28.949 18:12:17 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:28.949 18:12:17 -- nvmf/common.sh@124 -- # set -e 00:25:28.949 18:12:17 -- nvmf/common.sh@125 -- # return 0 00:25:28.949 18:12:17 -- nvmf/common.sh@478 -- # '[' -n 3389173 ']' 00:25:28.949 18:12:17 -- nvmf/common.sh@479 -- # killprocess 3389173 00:25:28.949 18:12:17 -- common/autotest_common.sh@936 -- # '[' -z 3389173 ']' 00:25:28.949 18:12:17 -- common/autotest_common.sh@940 -- # kill -0 3389173 00:25:28.949 18:12:17 -- common/autotest_common.sh@941 -- # uname 00:25:28.949 18:12:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:28.949 18:12:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3389173 00:25:28.949 18:12:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:28.949 18:12:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:28.949 18:12:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3389173' 00:25:28.949 killing process with pid 3389173 00:25:28.949 18:12:17 -- common/autotest_common.sh@955 -- # kill 3389173 00:25:28.949 18:12:17 -- common/autotest_common.sh@960 -- # wait 3389173 00:25:29.207 18:12:17 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:25:29.207 18:12:17 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:25:29.207 18:12:17 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:25:29.207 18:12:17 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:29.207 18:12:17 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:29.207 18:12:17 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:29.207 18:12:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:29.207 18:12:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:32.488 18:12:21 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:32.488 18:12:21 -- target/perf_adq.sh@106 -- # trap - SIGINT SIGTERM EXIT 00:25:32.488 00:25:32.488 real 0m44.274s 00:25:32.488 user 2m40.132s 00:25:32.488 sys 0m9.889s 00:25:32.488 18:12:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:32.488 18:12:21 -- common/autotest_common.sh@10 -- # set +x 00:25:32.488 
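The pass criterion applied before the run above (count=2, so the [[ 2 -lt 2 ]] branch is not taken) can be re-run by hand: with a 0xF target mask there are four poll groups, and ADQ steering should leave at least two of them with no active IO qpairs while perf (cores 0xF0) drives the other two.

# Count poll groups that own zero IO qpairs; fewer than 2 idle groups means the
# connections were spread across all reactors, i.e. ADQ steering did not work.
count=$(scripts/rpc.py nvmf_get_stats \
    | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' | wc -l)
if [[ $count -lt 2 ]]; then
    echo "ADQ steering ineffective: only $count idle poll groups" >&2
fi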
************************************ 00:25:32.488 END TEST nvmf_perf_adq 00:25:32.488 ************************************ 00:25:32.488 18:12:21 -- nvmf/nvmf.sh@81 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:25:32.488 18:12:21 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:32.488 18:12:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:32.488 18:12:21 -- common/autotest_common.sh@10 -- # set +x 00:25:32.488 ************************************ 00:25:32.488 START TEST nvmf_shutdown 00:25:32.488 ************************************ 00:25:32.488 18:12:21 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:25:32.488 * Looking for test storage... 00:25:32.488 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:32.488 18:12:21 -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:32.488 18:12:21 -- nvmf/common.sh@7 -- # uname -s 00:25:32.488 18:12:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:32.488 18:12:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:32.488 18:12:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:32.488 18:12:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:32.488 18:12:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:32.488 18:12:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:32.488 18:12:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:32.488 18:12:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:32.488 18:12:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:32.488 18:12:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:32.488 18:12:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:25:32.488 18:12:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:25:32.488 18:12:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:32.488 18:12:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:32.488 18:12:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:32.488 18:12:21 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:32.488 18:12:21 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:32.488 18:12:21 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:32.488 18:12:21 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:32.488 18:12:21 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:32.488 18:12:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:32.488 18:12:21 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:32.488 18:12:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:32.488 18:12:21 -- paths/export.sh@5 -- # export PATH 00:25:32.488 18:12:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:32.488 18:12:21 -- nvmf/common.sh@47 -- # : 0 00:25:32.488 18:12:21 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:32.488 18:12:21 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:32.488 18:12:21 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:32.488 18:12:21 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:32.488 18:12:21 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:32.488 18:12:21 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:32.488 18:12:21 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:32.488 18:12:21 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:32.488 18:12:21 -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:32.488 18:12:21 -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:32.488 18:12:21 -- target/shutdown.sh@146 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:25:32.488 18:12:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:32.488 18:12:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:32.488 18:12:21 -- common/autotest_common.sh@10 -- # set +x 00:25:32.488 ************************************ 00:25:32.489 START TEST nvmf_shutdown_tc1 00:25:32.489 ************************************ 00:25:32.489 18:12:21 -- common/autotest_common.sh@1111 -- # nvmf_shutdown_tc1 00:25:32.489 18:12:21 -- target/shutdown.sh@74 -- # starttarget 00:25:32.489 18:12:21 -- target/shutdown.sh@15 -- # nvmftestinit 00:25:32.489 18:12:21 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:25:32.489 18:12:21 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:32.489 18:12:21 -- nvmf/common.sh@437 -- # prepare_net_devs 00:25:32.489 18:12:21 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:25:32.489 18:12:21 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:25:32.489 
18:12:21 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:32.489 18:12:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:32.489 18:12:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:32.489 18:12:21 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:25:32.489 18:12:21 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:25:32.489 18:12:21 -- nvmf/common.sh@285 -- # xtrace_disable 00:25:32.489 18:12:21 -- common/autotest_common.sh@10 -- # set +x 00:25:35.028 18:12:23 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:35.028 18:12:23 -- nvmf/common.sh@291 -- # pci_devs=() 00:25:35.028 18:12:23 -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:35.028 18:12:23 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:35.028 18:12:23 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:35.028 18:12:23 -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:35.028 18:12:23 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:35.028 18:12:23 -- nvmf/common.sh@295 -- # net_devs=() 00:25:35.028 18:12:23 -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:35.028 18:12:23 -- nvmf/common.sh@296 -- # e810=() 00:25:35.028 18:12:23 -- nvmf/common.sh@296 -- # local -ga e810 00:25:35.028 18:12:23 -- nvmf/common.sh@297 -- # x722=() 00:25:35.028 18:12:23 -- nvmf/common.sh@297 -- # local -ga x722 00:25:35.028 18:12:23 -- nvmf/common.sh@298 -- # mlx=() 00:25:35.028 18:12:23 -- nvmf/common.sh@298 -- # local -ga mlx 00:25:35.028 18:12:23 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:35.028 18:12:23 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:35.028 18:12:23 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:35.028 18:12:23 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:35.028 18:12:23 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:35.028 18:12:23 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:35.028 18:12:23 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:35.028 18:12:23 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:35.028 18:12:23 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:35.028 18:12:23 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:35.028 18:12:23 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:35.028 18:12:23 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:35.028 18:12:23 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:35.028 18:12:23 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:35.028 18:12:23 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:35.028 18:12:23 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:35.028 18:12:23 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:35.028 18:12:23 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:35.028 18:12:23 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:25:35.028 Found 0000:84:00.0 (0x8086 - 0x159b) 00:25:35.028 18:12:23 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:35.028 18:12:23 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:35.028 18:12:23 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:35.028 18:12:23 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:35.028 18:12:23 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:35.028 18:12:23 -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:25:35.028 18:12:23 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:25:35.028 Found 0000:84:00.1 (0x8086 - 0x159b) 00:25:35.028 18:12:23 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:35.028 18:12:23 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:35.028 18:12:23 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:35.028 18:12:23 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:35.028 18:12:23 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:35.028 18:12:23 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:35.028 18:12:23 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:35.028 18:12:23 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:35.028 18:12:23 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:35.028 18:12:23 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:35.028 18:12:23 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:25:35.028 18:12:23 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:35.028 18:12:23 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:25:35.028 Found net devices under 0000:84:00.0: cvl_0_0 00:25:35.028 18:12:23 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:25:35.028 18:12:23 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:35.028 18:12:23 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:35.028 18:12:23 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:25:35.028 18:12:23 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:35.028 18:12:23 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:25:35.028 Found net devices under 0000:84:00.1: cvl_0_1 00:25:35.028 18:12:23 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:25:35.028 18:12:23 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:25:35.028 18:12:23 -- nvmf/common.sh@403 -- # is_hw=yes 00:25:35.028 18:12:23 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:25:35.028 18:12:23 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:25:35.028 18:12:23 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:25:35.028 18:12:23 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:35.028 18:12:23 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:35.028 18:12:23 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:35.028 18:12:23 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:35.028 18:12:23 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:35.028 18:12:23 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:35.028 18:12:23 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:35.028 18:12:23 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:35.028 18:12:23 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:35.028 18:12:23 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:35.028 18:12:23 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:35.028 18:12:23 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:35.028 18:12:23 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:35.028 18:12:23 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:35.028 18:12:23 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:35.028 18:12:23 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:35.028 18:12:23 -- nvmf/common.sh@260 
-- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:35.028 18:12:23 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:35.028 18:12:23 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:35.028 18:12:23 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:35.028 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:35.028 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.145 ms 00:25:35.028 00:25:35.028 --- 10.0.0.2 ping statistics --- 00:25:35.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:35.028 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:25:35.028 18:12:23 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:35.028 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:35.028 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:25:35.028 00:25:35.028 --- 10.0.0.1 ping statistics --- 00:25:35.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:35.028 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:25:35.028 18:12:23 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:35.028 18:12:23 -- nvmf/common.sh@411 -- # return 0 00:25:35.028 18:12:23 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:25:35.028 18:12:23 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:35.028 18:12:23 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:25:35.028 18:12:23 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:25:35.028 18:12:23 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:35.028 18:12:23 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:25:35.028 18:12:23 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:25:35.028 18:12:23 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:25:35.028 18:12:23 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:25:35.028 18:12:23 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:35.028 18:12:23 -- common/autotest_common.sh@10 -- # set +x 00:25:35.028 18:12:23 -- nvmf/common.sh@470 -- # nvmfpid=3393144 00:25:35.028 18:12:23 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:25:35.028 18:12:23 -- nvmf/common.sh@471 -- # waitforlisten 3393144 00:25:35.028 18:12:23 -- common/autotest_common.sh@817 -- # '[' -z 3393144 ']' 00:25:35.028 18:12:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:35.028 18:12:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:35.028 18:12:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:35.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:35.028 18:12:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:35.028 18:12:23 -- common/autotest_common.sh@10 -- # set +x 00:25:35.028 [2024-04-15 18:12:23.806118] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
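nvmftestinit's network fixture, traced through the pings above, is a physical two-port loopback: one E810 port (cvl_0_0) becomes the target inside a namespace, while its link peer (cvl_0_1) stays in the root namespace as the initiator. Condensed from the trace:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # target port into the netns
ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # admit NVMe/TCP
ping -c 1 10.0.0.2                                            # root ns -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1              # target ns -> initiator
# The target runs inside the namespace so its listener binds the isolated port:
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E --wait-for-rpc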
00:25:35.029 [2024-04-15 18:12:23.806210] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:35.029 EAL: No free 2048 kB hugepages reported on node 1 00:25:35.029 [2024-04-15 18:12:23.883848] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:35.287 [2024-04-15 18:12:23.982283] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:35.287 [2024-04-15 18:12:23.982350] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:35.287 [2024-04-15 18:12:23.982366] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:35.287 [2024-04-15 18:12:23.982380] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:35.287 [2024-04-15 18:12:23.982392] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:35.287 [2024-04-15 18:12:23.982448] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:35.287 [2024-04-15 18:12:23.982504] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:35.287 [2024-04-15 18:12:23.982553] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:25:35.287 [2024-04-15 18:12:23.982556] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:35.287 18:12:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:35.287 18:12:24 -- common/autotest_common.sh@850 -- # return 0 00:25:35.287 18:12:24 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:25:35.287 18:12:24 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:35.287 18:12:24 -- common/autotest_common.sh@10 -- # set +x 00:25:35.287 18:12:24 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:35.287 18:12:24 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:35.287 18:12:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:35.287 18:12:24 -- common/autotest_common.sh@10 -- # set +x 00:25:35.287 [2024-04-15 18:12:24.148050] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:35.287 18:12:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:35.287 18:12:24 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:25:35.287 18:12:24 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:25:35.287 18:12:24 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:35.287 18:12:24 -- common/autotest_common.sh@10 -- # set +x 00:25:35.287 18:12:24 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:35.287 18:12:24 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:35.287 18:12:24 -- target/shutdown.sh@28 -- # cat 00:25:35.287 18:12:24 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:35.287 18:12:24 -- target/shutdown.sh@28 -- # cat 00:25:35.287 18:12:24 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:35.287 18:12:24 -- target/shutdown.sh@28 -- # cat 00:25:35.287 18:12:24 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:35.287 18:12:24 -- target/shutdown.sh@28 -- # cat 00:25:35.287 18:12:24 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:35.287 18:12:24 -- target/shutdown.sh@28 
-- # cat 00:25:35.287 18:12:24 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:35.287 18:12:24 -- target/shutdown.sh@28 -- # cat 00:25:35.287 18:12:24 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:35.287 18:12:24 -- target/shutdown.sh@28 -- # cat 00:25:35.287 18:12:24 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:35.287 18:12:24 -- target/shutdown.sh@28 -- # cat 00:25:35.287 18:12:24 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:35.287 18:12:24 -- target/shutdown.sh@28 -- # cat 00:25:35.287 18:12:24 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:35.287 18:12:24 -- target/shutdown.sh@28 -- # cat 00:25:35.287 18:12:24 -- target/shutdown.sh@35 -- # rpc_cmd 00:25:35.287 18:12:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:35.287 18:12:24 -- common/autotest_common.sh@10 -- # set +x 00:25:35.287 Malloc1 00:25:35.545 [2024-04-15 18:12:24.242657] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:35.545 Malloc2 00:25:35.545 Malloc3 00:25:35.545 Malloc4 00:25:35.545 Malloc5 00:25:35.545 Malloc6 00:25:35.804 Malloc7 00:25:35.804 Malloc8 00:25:35.804 Malloc9 00:25:35.804 Malloc10 00:25:35.804 18:12:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:35.804 18:12:24 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:25:35.804 18:12:24 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:35.804 18:12:24 -- common/autotest_common.sh@10 -- # set +x 00:25:35.804 18:12:24 -- target/shutdown.sh@78 -- # perfpid=3393209 00:25:35.804 18:12:24 -- target/shutdown.sh@79 -- # waitforlisten 3393209 /var/tmp/bdevperf.sock 00:25:35.804 18:12:24 -- common/autotest_common.sh@817 -- # '[' -z 3393209 ']' 00:25:35.804 18:12:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:35.804 18:12:24 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:25:35.804 18:12:24 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:25:35.804 18:12:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:35.804 18:12:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:35.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
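The client app started above is not bdevperf yet: tc1 first boots the generic bdev_svc stub, feeding it the generated NVMe-oF client config over an anonymous fd via process substitution (shutdown.sh line 73, visible verbatim in the "Killed" report further down):

# One bdev_nvme_attach_controller entry per subsystem id; the JSON arrives
# on /dev/fd/63 as the trace above shows.
./test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) &
perfpid=$!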
00:25:35.804 18:12:24 -- nvmf/common.sh@521 -- # config=() 00:25:35.804 18:12:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:35.804 18:12:24 -- nvmf/common.sh@521 -- # local subsystem config 00:25:35.804 18:12:24 -- common/autotest_common.sh@10 -- # set +x 00:25:35.804 18:12:24 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:35.804 18:12:24 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:35.804 { 00:25:35.804 "params": { 00:25:35.804 "name": "Nvme$subsystem", 00:25:35.804 "trtype": "$TEST_TRANSPORT", 00:25:35.804 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:35.804 "adrfam": "ipv4", 00:25:35.804 "trsvcid": "$NVMF_PORT", 00:25:35.804 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:35.804 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:35.804 "hdgst": ${hdgst:-false}, 00:25:35.804 "ddgst": ${ddgst:-false} 00:25:35.804 }, 00:25:35.804 "method": "bdev_nvme_attach_controller" 00:25:35.804 } 00:25:35.804 EOF 00:25:35.804 )") 00:25:35.804 18:12:24 -- nvmf/common.sh@543 -- # cat 00:25:35.804 18:12:24 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:35.804 18:12:24 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:35.804 { 00:25:35.804 "params": { 00:25:35.804 "name": "Nvme$subsystem", 00:25:35.804 "trtype": "$TEST_TRANSPORT", 00:25:35.804 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:35.804 "adrfam": "ipv4", 00:25:35.804 "trsvcid": "$NVMF_PORT", 00:25:35.804 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:35.804 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:35.804 "hdgst": ${hdgst:-false}, 00:25:35.804 "ddgst": ${ddgst:-false} 00:25:35.804 }, 00:25:35.804 "method": "bdev_nvme_attach_controller" 00:25:35.804 } 00:25:35.804 EOF 00:25:35.804 )") 00:25:35.804 18:12:24 -- nvmf/common.sh@543 -- # cat 00:25:35.804 18:12:24 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:35.804 18:12:24 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:35.804 { 00:25:35.804 "params": { 00:25:35.804 "name": "Nvme$subsystem", 00:25:35.804 "trtype": "$TEST_TRANSPORT", 00:25:35.804 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:35.804 "adrfam": "ipv4", 00:25:35.804 "trsvcid": "$NVMF_PORT", 00:25:35.804 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:35.804 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:35.804 "hdgst": ${hdgst:-false}, 00:25:35.804 "ddgst": ${ddgst:-false} 00:25:35.804 }, 00:25:35.804 "method": "bdev_nvme_attach_controller" 00:25:35.804 } 00:25:35.804 EOF 00:25:35.804 )") 00:25:35.804 18:12:24 -- nvmf/common.sh@543 -- # cat 00:25:35.804 18:12:24 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:35.804 18:12:24 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:35.804 { 00:25:35.804 "params": { 00:25:35.804 "name": "Nvme$subsystem", 00:25:35.804 "trtype": "$TEST_TRANSPORT", 00:25:35.804 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:35.804 "adrfam": "ipv4", 00:25:35.804 "trsvcid": "$NVMF_PORT", 00:25:35.804 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:35.804 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:35.804 "hdgst": ${hdgst:-false}, 00:25:35.804 "ddgst": ${ddgst:-false} 00:25:35.804 }, 00:25:35.804 "method": "bdev_nvme_attach_controller" 00:25:35.804 } 00:25:35.804 EOF 00:25:35.804 )") 00:25:35.804 18:12:24 -- nvmf/common.sh@543 -- # cat 00:25:35.804 18:12:24 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:35.804 18:12:24 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:35.804 { 00:25:35.804 "params": { 00:25:35.804 "name": "Nvme$subsystem", 00:25:35.804 "trtype": 
"$TEST_TRANSPORT", 00:25:35.804 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:35.804 "adrfam": "ipv4", 00:25:35.804 "trsvcid": "$NVMF_PORT", 00:25:35.804 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:35.804 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:35.804 "hdgst": ${hdgst:-false}, 00:25:35.804 "ddgst": ${ddgst:-false} 00:25:35.804 }, 00:25:35.804 "method": "bdev_nvme_attach_controller" 00:25:35.804 } 00:25:35.804 EOF 00:25:35.804 )") 00:25:35.804 18:12:24 -- nvmf/common.sh@543 -- # cat 00:25:35.804 18:12:24 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:35.804 18:12:24 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:35.804 { 00:25:35.804 "params": { 00:25:35.804 "name": "Nvme$subsystem", 00:25:35.804 "trtype": "$TEST_TRANSPORT", 00:25:35.804 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:35.804 "adrfam": "ipv4", 00:25:35.804 "trsvcid": "$NVMF_PORT", 00:25:35.804 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:35.804 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:35.804 "hdgst": ${hdgst:-false}, 00:25:35.804 "ddgst": ${ddgst:-false} 00:25:35.804 }, 00:25:35.804 "method": "bdev_nvme_attach_controller" 00:25:35.804 } 00:25:35.804 EOF 00:25:35.804 )") 00:25:35.804 18:12:24 -- nvmf/common.sh@543 -- # cat 00:25:35.804 18:12:24 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:35.804 18:12:24 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:35.804 { 00:25:35.805 "params": { 00:25:35.805 "name": "Nvme$subsystem", 00:25:35.805 "trtype": "$TEST_TRANSPORT", 00:25:35.805 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:35.805 "adrfam": "ipv4", 00:25:35.805 "trsvcid": "$NVMF_PORT", 00:25:35.805 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:35.805 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:35.805 "hdgst": ${hdgst:-false}, 00:25:35.805 "ddgst": ${ddgst:-false} 00:25:35.805 }, 00:25:35.805 "method": "bdev_nvme_attach_controller" 00:25:35.805 } 00:25:35.805 EOF 00:25:35.805 )") 00:25:35.805 18:12:24 -- nvmf/common.sh@543 -- # cat 00:25:35.805 18:12:24 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:35.805 18:12:24 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:35.805 { 00:25:35.805 "params": { 00:25:35.805 "name": "Nvme$subsystem", 00:25:35.805 "trtype": "$TEST_TRANSPORT", 00:25:35.805 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:35.805 "adrfam": "ipv4", 00:25:35.805 "trsvcid": "$NVMF_PORT", 00:25:35.805 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:35.805 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:35.805 "hdgst": ${hdgst:-false}, 00:25:35.805 "ddgst": ${ddgst:-false} 00:25:35.805 }, 00:25:35.805 "method": "bdev_nvme_attach_controller" 00:25:35.805 } 00:25:35.805 EOF 00:25:35.805 )") 00:25:35.805 18:12:24 -- nvmf/common.sh@543 -- # cat 00:25:35.805 18:12:24 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:35.805 18:12:24 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:35.805 { 00:25:35.805 "params": { 00:25:35.805 "name": "Nvme$subsystem", 00:25:35.805 "trtype": "$TEST_TRANSPORT", 00:25:35.805 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:35.805 "adrfam": "ipv4", 00:25:35.805 "trsvcid": "$NVMF_PORT", 00:25:35.805 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:35.805 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:35.805 "hdgst": ${hdgst:-false}, 00:25:35.805 "ddgst": ${ddgst:-false} 00:25:35.805 }, 00:25:35.805 "method": "bdev_nvme_attach_controller" 00:25:35.805 } 00:25:35.805 EOF 00:25:35.805 )") 00:25:35.805 18:12:24 -- nvmf/common.sh@543 -- # cat 00:25:35.805 
18:12:24 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:35.805 18:12:24 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:35.805 { 00:25:35.805 "params": { 00:25:35.805 "name": "Nvme$subsystem", 00:25:35.805 "trtype": "$TEST_TRANSPORT", 00:25:35.805 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:35.805 "adrfam": "ipv4", 00:25:35.805 "trsvcid": "$NVMF_PORT", 00:25:35.805 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:35.805 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:35.805 "hdgst": ${hdgst:-false}, 00:25:35.805 "ddgst": ${ddgst:-false} 00:25:35.805 }, 00:25:35.805 "method": "bdev_nvme_attach_controller" 00:25:35.805 } 00:25:35.805 EOF 00:25:35.805 )") 00:25:35.805 18:12:24 -- nvmf/common.sh@543 -- # cat 00:25:35.805 18:12:24 -- nvmf/common.sh@545 -- # jq . 00:25:35.805 18:12:24 -- nvmf/common.sh@546 -- # IFS=, 00:25:35.805 18:12:24 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:25:35.805 "params": { 00:25:35.805 "name": "Nvme1", 00:25:35.805 "trtype": "tcp", 00:25:35.805 "traddr": "10.0.0.2", 00:25:35.805 "adrfam": "ipv4", 00:25:35.805 "trsvcid": "4420", 00:25:35.805 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:35.805 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:35.805 "hdgst": false, 00:25:35.805 "ddgst": false 00:25:35.805 }, 00:25:35.805 "method": "bdev_nvme_attach_controller" 00:25:35.805 },{ 00:25:35.805 "params": { 00:25:35.805 "name": "Nvme2", 00:25:35.805 "trtype": "tcp", 00:25:35.805 "traddr": "10.0.0.2", 00:25:35.805 "adrfam": "ipv4", 00:25:35.805 "trsvcid": "4420", 00:25:35.805 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:35.805 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:35.805 "hdgst": false, 00:25:35.805 "ddgst": false 00:25:35.805 }, 00:25:35.805 "method": "bdev_nvme_attach_controller" 00:25:35.805 },{ 00:25:35.805 "params": { 00:25:35.805 "name": "Nvme3", 00:25:35.805 "trtype": "tcp", 00:25:35.805 "traddr": "10.0.0.2", 00:25:35.805 "adrfam": "ipv4", 00:25:35.805 "trsvcid": "4420", 00:25:35.805 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:35.805 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:35.805 "hdgst": false, 00:25:35.805 "ddgst": false 00:25:35.805 }, 00:25:35.805 "method": "bdev_nvme_attach_controller" 00:25:35.805 },{ 00:25:35.805 "params": { 00:25:35.805 "name": "Nvme4", 00:25:35.805 "trtype": "tcp", 00:25:35.805 "traddr": "10.0.0.2", 00:25:35.805 "adrfam": "ipv4", 00:25:35.805 "trsvcid": "4420", 00:25:35.805 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:35.805 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:35.805 "hdgst": false, 00:25:35.805 "ddgst": false 00:25:35.805 }, 00:25:35.805 "method": "bdev_nvme_attach_controller" 00:25:35.805 },{ 00:25:35.805 "params": { 00:25:35.805 "name": "Nvme5", 00:25:35.805 "trtype": "tcp", 00:25:35.805 "traddr": "10.0.0.2", 00:25:35.805 "adrfam": "ipv4", 00:25:35.805 "trsvcid": "4420", 00:25:35.805 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:25:35.805 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:35.805 "hdgst": false, 00:25:35.805 "ddgst": false 00:25:35.805 }, 00:25:35.805 "method": "bdev_nvme_attach_controller" 00:25:35.805 },{ 00:25:35.805 "params": { 00:25:35.805 "name": "Nvme6", 00:25:35.805 "trtype": "tcp", 00:25:35.805 "traddr": "10.0.0.2", 00:25:35.805 "adrfam": "ipv4", 00:25:35.805 "trsvcid": "4420", 00:25:35.805 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:35.805 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:35.805 "hdgst": false, 00:25:35.805 "ddgst": false 00:25:35.805 }, 00:25:35.805 "method": "bdev_nvme_attach_controller" 00:25:35.805 },{ 00:25:35.805 "params": { 00:25:35.805 
"name": "Nvme7", 00:25:35.805 "trtype": "tcp", 00:25:35.805 "traddr": "10.0.0.2", 00:25:35.805 "adrfam": "ipv4", 00:25:35.805 "trsvcid": "4420", 00:25:35.805 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:35.805 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:25:35.805 "hdgst": false, 00:25:35.805 "ddgst": false 00:25:35.805 }, 00:25:35.805 "method": "bdev_nvme_attach_controller" 00:25:35.805 },{ 00:25:35.805 "params": { 00:25:35.805 "name": "Nvme8", 00:25:35.805 "trtype": "tcp", 00:25:35.805 "traddr": "10.0.0.2", 00:25:35.805 "adrfam": "ipv4", 00:25:35.805 "trsvcid": "4420", 00:25:35.805 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:25:35.805 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:25:35.805 "hdgst": false, 00:25:35.805 "ddgst": false 00:25:35.805 }, 00:25:35.805 "method": "bdev_nvme_attach_controller" 00:25:35.805 },{ 00:25:35.805 "params": { 00:25:35.805 "name": "Nvme9", 00:25:35.805 "trtype": "tcp", 00:25:35.805 "traddr": "10.0.0.2", 00:25:35.805 "adrfam": "ipv4", 00:25:35.805 "trsvcid": "4420", 00:25:35.805 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:35.805 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:25:35.805 "hdgst": false, 00:25:35.805 "ddgst": false 00:25:35.805 }, 00:25:35.805 "method": "bdev_nvme_attach_controller" 00:25:35.805 },{ 00:25:35.805 "params": { 00:25:35.805 "name": "Nvme10", 00:25:35.805 "trtype": "tcp", 00:25:35.805 "traddr": "10.0.0.2", 00:25:35.805 "adrfam": "ipv4", 00:25:35.805 "trsvcid": "4420", 00:25:35.805 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:35.805 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:35.805 "hdgst": false, 00:25:35.805 "ddgst": false 00:25:35.805 }, 00:25:35.805 "method": "bdev_nvme_attach_controller" 00:25:35.805 }' 00:25:35.805 [2024-04-15 18:12:24.752743] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
00:25:35.805 [2024-04-15 18:12:24.752829] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:25:36.063 EAL: No free 2048 kB hugepages reported on node 1 00:25:36.063 [2024-04-15 18:12:24.823580] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:36.063 [2024-04-15 18:12:24.910804] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:37.961 18:12:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:37.961 18:12:26 -- common/autotest_common.sh@850 -- # return 0 00:25:37.961 18:12:26 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:25:37.961 18:12:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:37.961 18:12:26 -- common/autotest_common.sh@10 -- # set +x 00:25:37.961 18:12:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:37.961 18:12:26 -- target/shutdown.sh@83 -- # kill -9 3393209 00:25:37.961 18:12:26 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:25:37.961 18:12:26 -- target/shutdown.sh@87 -- # sleep 1 00:25:39.334 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 3393209 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:25:39.334 18:12:27 -- target/shutdown.sh@88 -- # kill -0 3393144 00:25:39.335 18:12:27 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:25:39.335 18:12:27 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:25:39.335 18:12:27 -- nvmf/common.sh@521 -- # config=() 00:25:39.335 18:12:27 -- nvmf/common.sh@521 -- # local subsystem config 00:25:39.335 18:12:27 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:39.335 18:12:27 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:39.335 { 00:25:39.335 "params": { 00:25:39.335 "name": "Nvme$subsystem", 00:25:39.335 "trtype": "$TEST_TRANSPORT", 00:25:39.335 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:39.335 "adrfam": "ipv4", 00:25:39.335 "trsvcid": "$NVMF_PORT", 00:25:39.335 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:39.335 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:39.335 "hdgst": ${hdgst:-false}, 00:25:39.335 "ddgst": ${ddgst:-false} 00:25:39.335 }, 00:25:39.335 "method": "bdev_nvme_attach_controller" 00:25:39.335 } 00:25:39.335 EOF 00:25:39.335 )") 00:25:39.335 18:12:27 -- nvmf/common.sh@543 -- # cat 00:25:39.335 18:12:27 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:39.335 18:12:27 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:39.335 { 00:25:39.335 "params": { 00:25:39.335 "name": "Nvme$subsystem", 00:25:39.335 "trtype": "$TEST_TRANSPORT", 00:25:39.335 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:39.335 "adrfam": "ipv4", 00:25:39.335 "trsvcid": "$NVMF_PORT", 00:25:39.335 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:39.335 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:39.335 "hdgst": ${hdgst:-false}, 00:25:39.335 "ddgst": ${ddgst:-false} 00:25:39.335 }, 00:25:39.335 "method": "bdev_nvme_attach_controller" 00:25:39.335 } 00:25:39.335 EOF 00:25:39.335 )") 00:25:39.335 18:12:27 -- nvmf/common.sh@543 -- # cat 00:25:39.335 18:12:27 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:39.335 18:12:27 -- 
nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:39.335 { 00:25:39.335 "params": { 00:25:39.335 "name": "Nvme$subsystem", 00:25:39.335 "trtype": "$TEST_TRANSPORT", 00:25:39.335 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:39.335 "adrfam": "ipv4", 00:25:39.335 "trsvcid": "$NVMF_PORT", 00:25:39.335 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:39.335 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:39.335 "hdgst": ${hdgst:-false}, 00:25:39.335 "ddgst": ${ddgst:-false} 00:25:39.335 }, 00:25:39.335 "method": "bdev_nvme_attach_controller" 00:25:39.335 } 00:25:39.335 EOF 00:25:39.335 )") 00:25:39.335 18:12:27 -- nvmf/common.sh@543 -- # cat 00:25:39.335 18:12:27 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:39.335 18:12:27 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:39.335 { 00:25:39.335 "params": { 00:25:39.335 "name": "Nvme$subsystem", 00:25:39.335 "trtype": "$TEST_TRANSPORT", 00:25:39.335 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:39.335 "adrfam": "ipv4", 00:25:39.335 "trsvcid": "$NVMF_PORT", 00:25:39.335 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:39.335 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:39.335 "hdgst": ${hdgst:-false}, 00:25:39.335 "ddgst": ${ddgst:-false} 00:25:39.335 }, 00:25:39.335 "method": "bdev_nvme_attach_controller" 00:25:39.335 } 00:25:39.335 EOF 00:25:39.335 )") 00:25:39.335 18:12:27 -- nvmf/common.sh@543 -- # cat 00:25:39.335 18:12:27 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:39.335 18:12:27 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:39.335 { 00:25:39.335 "params": { 00:25:39.335 "name": "Nvme$subsystem", 00:25:39.335 "trtype": "$TEST_TRANSPORT", 00:25:39.335 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:39.335 "adrfam": "ipv4", 00:25:39.335 "trsvcid": "$NVMF_PORT", 00:25:39.335 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:39.335 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:39.335 "hdgst": ${hdgst:-false}, 00:25:39.335 "ddgst": ${ddgst:-false} 00:25:39.335 }, 00:25:39.335 "method": "bdev_nvme_attach_controller" 00:25:39.335 } 00:25:39.335 EOF 00:25:39.335 )") 00:25:39.335 18:12:27 -- nvmf/common.sh@543 -- # cat 00:25:39.335 18:12:27 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:39.335 18:12:27 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:39.335 { 00:25:39.335 "params": { 00:25:39.335 "name": "Nvme$subsystem", 00:25:39.335 "trtype": "$TEST_TRANSPORT", 00:25:39.335 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:39.335 "adrfam": "ipv4", 00:25:39.335 "trsvcid": "$NVMF_PORT", 00:25:39.335 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:39.335 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:39.335 "hdgst": ${hdgst:-false}, 00:25:39.335 "ddgst": ${ddgst:-false} 00:25:39.335 }, 00:25:39.335 "method": "bdev_nvme_attach_controller" 00:25:39.335 } 00:25:39.335 EOF 00:25:39.335 )") 00:25:39.335 18:12:27 -- nvmf/common.sh@543 -- # cat 00:25:39.335 18:12:27 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:39.335 18:12:27 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:39.335 { 00:25:39.335 "params": { 00:25:39.335 "name": "Nvme$subsystem", 00:25:39.335 "trtype": "$TEST_TRANSPORT", 00:25:39.335 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:39.335 "adrfam": "ipv4", 00:25:39.335 "trsvcid": "$NVMF_PORT", 00:25:39.335 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:39.335 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:39.335 "hdgst": ${hdgst:-false}, 00:25:39.335 "ddgst": ${ddgst:-false} 00:25:39.335 }, 00:25:39.335 
"method": "bdev_nvme_attach_controller" 00:25:39.335 } 00:25:39.335 EOF 00:25:39.335 )") 00:25:39.335 18:12:27 -- nvmf/common.sh@543 -- # cat 00:25:39.335 18:12:27 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:39.335 18:12:27 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:39.335 { 00:25:39.335 "params": { 00:25:39.335 "name": "Nvme$subsystem", 00:25:39.335 "trtype": "$TEST_TRANSPORT", 00:25:39.335 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:39.335 "adrfam": "ipv4", 00:25:39.335 "trsvcid": "$NVMF_PORT", 00:25:39.335 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:39.335 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:39.335 "hdgst": ${hdgst:-false}, 00:25:39.335 "ddgst": ${ddgst:-false} 00:25:39.335 }, 00:25:39.335 "method": "bdev_nvme_attach_controller" 00:25:39.335 } 00:25:39.335 EOF 00:25:39.335 )") 00:25:39.335 18:12:27 -- nvmf/common.sh@543 -- # cat 00:25:39.335 18:12:27 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:39.335 18:12:27 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:39.335 { 00:25:39.335 "params": { 00:25:39.335 "name": "Nvme$subsystem", 00:25:39.335 "trtype": "$TEST_TRANSPORT", 00:25:39.335 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:39.335 "adrfam": "ipv4", 00:25:39.335 "trsvcid": "$NVMF_PORT", 00:25:39.335 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:39.335 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:39.335 "hdgst": ${hdgst:-false}, 00:25:39.335 "ddgst": ${ddgst:-false} 00:25:39.335 }, 00:25:39.335 "method": "bdev_nvme_attach_controller" 00:25:39.335 } 00:25:39.335 EOF 00:25:39.335 )") 00:25:39.335 18:12:27 -- nvmf/common.sh@543 -- # cat 00:25:39.335 18:12:27 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:39.335 18:12:27 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:39.335 { 00:25:39.335 "params": { 00:25:39.335 "name": "Nvme$subsystem", 00:25:39.335 "trtype": "$TEST_TRANSPORT", 00:25:39.335 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:39.335 "adrfam": "ipv4", 00:25:39.335 "trsvcid": "$NVMF_PORT", 00:25:39.335 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:39.335 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:39.335 "hdgst": ${hdgst:-false}, 00:25:39.335 "ddgst": ${ddgst:-false} 00:25:39.335 }, 00:25:39.335 "method": "bdev_nvme_attach_controller" 00:25:39.335 } 00:25:39.335 EOF 00:25:39.335 )") 00:25:39.335 18:12:27 -- nvmf/common.sh@543 -- # cat 00:25:39.335 18:12:27 -- nvmf/common.sh@545 -- # jq . 
00:25:39.335 18:12:27 -- nvmf/common.sh@546 -- # IFS=, 00:25:39.335 18:12:27 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:25:39.335 "params": { 00:25:39.335 "name": "Nvme1", 00:25:39.335 "trtype": "tcp", 00:25:39.335 "traddr": "10.0.0.2", 00:25:39.335 "adrfam": "ipv4", 00:25:39.335 "trsvcid": "4420", 00:25:39.335 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:39.335 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:39.335 "hdgst": false, 00:25:39.335 "ddgst": false 00:25:39.335 }, 00:25:39.335 "method": "bdev_nvme_attach_controller" 00:25:39.335 },{ 00:25:39.335 "params": { 00:25:39.335 "name": "Nvme2", 00:25:39.335 "trtype": "tcp", 00:25:39.335 "traddr": "10.0.0.2", 00:25:39.335 "adrfam": "ipv4", 00:25:39.335 "trsvcid": "4420", 00:25:39.335 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:39.335 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:39.335 "hdgst": false, 00:25:39.335 "ddgst": false 00:25:39.335 }, 00:25:39.335 "method": "bdev_nvme_attach_controller" 00:25:39.335 },{ 00:25:39.335 "params": { 00:25:39.335 "name": "Nvme3", 00:25:39.335 "trtype": "tcp", 00:25:39.335 "traddr": "10.0.0.2", 00:25:39.335 "adrfam": "ipv4", 00:25:39.335 "trsvcid": "4420", 00:25:39.335 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:39.335 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:39.335 "hdgst": false, 00:25:39.335 "ddgst": false 00:25:39.335 }, 00:25:39.335 "method": "bdev_nvme_attach_controller" 00:25:39.335 },{ 00:25:39.336 "params": { 00:25:39.336 "name": "Nvme4", 00:25:39.336 "trtype": "tcp", 00:25:39.336 "traddr": "10.0.0.2", 00:25:39.336 "adrfam": "ipv4", 00:25:39.336 "trsvcid": "4420", 00:25:39.336 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:39.336 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:39.336 "hdgst": false, 00:25:39.336 "ddgst": false 00:25:39.336 }, 00:25:39.336 "method": "bdev_nvme_attach_controller" 00:25:39.336 },{ 00:25:39.336 "params": { 00:25:39.336 "name": "Nvme5", 00:25:39.336 "trtype": "tcp", 00:25:39.336 "traddr": "10.0.0.2", 00:25:39.336 "adrfam": "ipv4", 00:25:39.336 "trsvcid": "4420", 00:25:39.336 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:25:39.336 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:39.336 "hdgst": false, 00:25:39.336 "ddgst": false 00:25:39.336 }, 00:25:39.336 "method": "bdev_nvme_attach_controller" 00:25:39.336 },{ 00:25:39.336 "params": { 00:25:39.336 "name": "Nvme6", 00:25:39.336 "trtype": "tcp", 00:25:39.336 "traddr": "10.0.0.2", 00:25:39.336 "adrfam": "ipv4", 00:25:39.336 "trsvcid": "4420", 00:25:39.336 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:39.336 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:39.336 "hdgst": false, 00:25:39.336 "ddgst": false 00:25:39.336 }, 00:25:39.336 "method": "bdev_nvme_attach_controller" 00:25:39.336 },{ 00:25:39.336 "params": { 00:25:39.336 "name": "Nvme7", 00:25:39.336 "trtype": "tcp", 00:25:39.336 "traddr": "10.0.0.2", 00:25:39.336 "adrfam": "ipv4", 00:25:39.336 "trsvcid": "4420", 00:25:39.336 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:39.336 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:25:39.336 "hdgst": false, 00:25:39.336 "ddgst": false 00:25:39.336 }, 00:25:39.336 "method": "bdev_nvme_attach_controller" 00:25:39.336 },{ 00:25:39.336 "params": { 00:25:39.336 "name": "Nvme8", 00:25:39.336 "trtype": "tcp", 00:25:39.336 "traddr": "10.0.0.2", 00:25:39.336 "adrfam": "ipv4", 00:25:39.336 "trsvcid": "4420", 00:25:39.336 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:25:39.336 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:25:39.336 "hdgst": false, 00:25:39.336 "ddgst": false 00:25:39.336 }, 00:25:39.336 "method": 
"bdev_nvme_attach_controller" 00:25:39.336 },{ 00:25:39.336 "params": { 00:25:39.336 "name": "Nvme9", 00:25:39.336 "trtype": "tcp", 00:25:39.336 "traddr": "10.0.0.2", 00:25:39.336 "adrfam": "ipv4", 00:25:39.336 "trsvcid": "4420", 00:25:39.336 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:39.336 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:25:39.336 "hdgst": false, 00:25:39.336 "ddgst": false 00:25:39.336 }, 00:25:39.336 "method": "bdev_nvme_attach_controller" 00:25:39.336 },{ 00:25:39.336 "params": { 00:25:39.336 "name": "Nvme10", 00:25:39.336 "trtype": "tcp", 00:25:39.336 "traddr": "10.0.0.2", 00:25:39.336 "adrfam": "ipv4", 00:25:39.336 "trsvcid": "4420", 00:25:39.336 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:39.336 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:39.336 "hdgst": false, 00:25:39.336 "ddgst": false 00:25:39.336 }, 00:25:39.336 "method": "bdev_nvme_attach_controller" 00:25:39.336 }' 00:25:39.336 [2024-04-15 18:12:27.945658] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:25:39.336 [2024-04-15 18:12:27.945757] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3393627 ] 00:25:39.336 EAL: No free 2048 kB hugepages reported on node 1 00:25:39.336 [2024-04-15 18:12:28.023157] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:39.336 [2024-04-15 18:12:28.112672] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:40.708 Running I/O for 1 seconds... 00:25:42.083 00:25:42.083 Latency(us) 00:25:42.083 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:42.083 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:42.083 Verification LBA range: start 0x0 length 0x400 00:25:42.083 Nvme1n1 : 1.03 187.10 11.69 0.00 0.00 338518.47 20971.52 270299.59 00:25:42.083 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:42.083 Verification LBA range: start 0x0 length 0x400 00:25:42.083 Nvme2n1 : 1.11 234.39 14.65 0.00 0.00 260924.21 18447.17 262532.36 00:25:42.083 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:42.083 Verification LBA range: start 0x0 length 0x400 00:25:42.083 Nvme3n1 : 1.14 224.00 14.00 0.00 0.00 273748.57 18738.44 264085.81 00:25:42.083 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:42.083 Verification LBA range: start 0x0 length 0x400 00:25:42.083 Nvme4n1 : 1.15 222.53 13.91 0.00 0.00 271063.80 19806.44 267192.70 00:25:42.083 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:42.083 Verification LBA range: start 0x0 length 0x400 00:25:42.083 Nvme5n1 : 1.17 218.47 13.65 0.00 0.00 271765.43 18058.81 281173.71 00:25:42.083 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:42.083 Verification LBA range: start 0x0 length 0x400 00:25:42.083 Nvme6n1 : 1.16 220.69 13.79 0.00 0.00 264315.45 20291.89 271853.04 00:25:42.083 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:42.083 Verification LBA range: start 0x0 length 0x400 00:25:42.083 Nvme7n1 : 1.16 220.23 13.76 0.00 0.00 260493.46 17185.00 265639.25 00:25:42.083 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:42.083 Verification LBA range: start 0x0 length 0x400 00:25:42.083 Nvme8n1 : 1.18 274.45 17.15 0.00 0.00 205627.29 3106.89 
262532.36 00:25:42.083 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:42.083 Verification LBA range: start 0x0 length 0x400 00:25:42.083 Nvme9n1 : 1.18 270.04 16.88 0.00 0.00 205817.93 17864.63 242337.56 00:25:42.083 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:42.083 Verification LBA range: start 0x0 length 0x400 00:25:42.083 Nvme10n1 : 1.18 217.63 13.60 0.00 0.00 250703.27 20971.52 293601.28 00:25:42.083 =================================================================================================================== 00:25:42.083 Total : 2289.55 143.10 0.00 0.00 255659.52 3106.89 293601.28 00:25:42.083 18:12:31 -- target/shutdown.sh@93 -- # stoptarget 00:25:42.083 18:12:31 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:25:42.083 18:12:31 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:25:42.083 18:12:31 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:42.083 18:12:31 -- target/shutdown.sh@45 -- # nvmftestfini 00:25:42.083 18:12:31 -- nvmf/common.sh@477 -- # nvmfcleanup 00:25:42.083 18:12:31 -- nvmf/common.sh@117 -- # sync 00:25:42.083 18:12:31 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:42.083 18:12:31 -- nvmf/common.sh@120 -- # set +e 00:25:42.083 18:12:31 -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:42.083 18:12:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:42.083 rmmod nvme_tcp 00:25:42.341 rmmod nvme_fabrics 00:25:42.341 rmmod nvme_keyring 00:25:42.341 18:12:31 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:42.341 18:12:31 -- nvmf/common.sh@124 -- # set -e 00:25:42.341 18:12:31 -- nvmf/common.sh@125 -- # return 0 00:25:42.341 18:12:31 -- nvmf/common.sh@478 -- # '[' -n 3393144 ']' 00:25:42.341 18:12:31 -- nvmf/common.sh@479 -- # killprocess 3393144 00:25:42.341 18:12:31 -- common/autotest_common.sh@936 -- # '[' -z 3393144 ']' 00:25:42.341 18:12:31 -- common/autotest_common.sh@940 -- # kill -0 3393144 00:25:42.341 18:12:31 -- common/autotest_common.sh@941 -- # uname 00:25:42.341 18:12:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:42.341 18:12:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3393144 00:25:42.341 18:12:31 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:42.341 18:12:31 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:42.341 18:12:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3393144' 00:25:42.341 killing process with pid 3393144 00:25:42.341 18:12:31 -- common/autotest_common.sh@955 -- # kill 3393144 00:25:42.341 18:12:31 -- common/autotest_common.sh@960 -- # wait 3393144 00:25:42.948 18:12:31 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:25:42.948 18:12:31 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:25:42.948 18:12:31 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:25:42.948 18:12:31 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:42.948 18:12:31 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:42.948 18:12:31 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:42.948 18:12:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:42.948 18:12:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:44.849 18:12:33 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:44.849 00:25:44.849 real 0m12.309s 
00:25:44.849 user 0m35.441s 00:25:44.849 sys 0m3.602s 00:25:44.849 18:12:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:44.849 18:12:33 -- common/autotest_common.sh@10 -- # set +x 00:25:44.849 ************************************ 00:25:44.849 END TEST nvmf_shutdown_tc1 00:25:44.849 ************************************ 00:25:44.849 18:12:33 -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:25:44.849 18:12:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:44.849 18:12:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:44.849 18:12:33 -- common/autotest_common.sh@10 -- # set +x 00:25:44.849 ************************************ 00:25:44.849 START TEST nvmf_shutdown_tc2 00:25:44.849 ************************************ 00:25:44.849 18:12:33 -- common/autotest_common.sh@1111 -- # nvmf_shutdown_tc2 00:25:44.849 18:12:33 -- target/shutdown.sh@98 -- # starttarget 00:25:44.849 18:12:33 -- target/shutdown.sh@15 -- # nvmftestinit 00:25:44.849 18:12:33 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:25:44.849 18:12:33 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:44.849 18:12:33 -- nvmf/common.sh@437 -- # prepare_net_devs 00:25:44.849 18:12:33 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:25:44.849 18:12:33 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:25:44.849 18:12:33 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:44.849 18:12:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:44.849 18:12:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:45.107 18:12:33 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:25:45.107 18:12:33 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:25:45.107 18:12:33 -- nvmf/common.sh@285 -- # xtrace_disable 00:25:45.107 18:12:33 -- common/autotest_common.sh@10 -- # set +x 00:25:45.107 18:12:33 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:45.107 18:12:33 -- nvmf/common.sh@291 -- # pci_devs=() 00:25:45.107 18:12:33 -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:45.107 18:12:33 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:45.107 18:12:33 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:45.107 18:12:33 -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:45.107 18:12:33 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:45.107 18:12:33 -- nvmf/common.sh@295 -- # net_devs=() 00:25:45.107 18:12:33 -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:45.107 18:12:33 -- nvmf/common.sh@296 -- # e810=() 00:25:45.107 18:12:33 -- nvmf/common.sh@296 -- # local -ga e810 00:25:45.107 18:12:33 -- nvmf/common.sh@297 -- # x722=() 00:25:45.107 18:12:33 -- nvmf/common.sh@297 -- # local -ga x722 00:25:45.107 18:12:33 -- nvmf/common.sh@298 -- # mlx=() 00:25:45.107 18:12:33 -- nvmf/common.sh@298 -- # local -ga mlx 00:25:45.107 18:12:33 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:45.107 18:12:33 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:45.107 18:12:33 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:45.107 18:12:33 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:45.107 18:12:33 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:45.107 18:12:33 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:45.107 18:12:33 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:45.107 18:12:33 -- nvmf/common.sh@314 -- 
# mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:45.107 18:12:33 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:45.107 18:12:33 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:45.107 18:12:33 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:45.107 18:12:33 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:45.107 18:12:33 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:45.107 18:12:33 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:45.107 18:12:33 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:45.107 18:12:33 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:45.107 18:12:33 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:45.107 18:12:33 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:45.107 18:12:33 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:25:45.107 Found 0000:84:00.0 (0x8086 - 0x159b) 00:25:45.107 18:12:33 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:45.107 18:12:33 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:45.107 18:12:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:45.107 18:12:33 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:45.107 18:12:33 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:45.107 18:12:33 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:45.107 18:12:33 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:25:45.107 Found 0000:84:00.1 (0x8086 - 0x159b) 00:25:45.107 18:12:33 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:45.107 18:12:33 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:45.107 18:12:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:45.107 18:12:33 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:45.107 18:12:33 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:45.107 18:12:33 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:45.107 18:12:33 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:45.107 18:12:33 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:45.107 18:12:33 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:45.107 18:12:33 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:45.107 18:12:33 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:25:45.107 18:12:33 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:45.107 18:12:33 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:25:45.107 Found net devices under 0000:84:00.0: cvl_0_0 00:25:45.107 18:12:33 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:25:45.107 18:12:33 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:45.107 18:12:33 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:45.107 18:12:33 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:25:45.107 18:12:33 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:45.107 18:12:33 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:25:45.107 Found net devices under 0000:84:00.1: cvl_0_1 00:25:45.107 18:12:33 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:25:45.107 18:12:33 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:25:45.107 18:12:33 -- nvmf/common.sh@403 -- # is_hw=yes 00:25:45.107 18:12:33 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:25:45.107 18:12:33 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:25:45.107 18:12:33 -- nvmf/common.sh@407 -- # nvmf_tcp_init 
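The device scan traced above reduces to a sysfs walk: compare each PCI function's vendor/device pair against the known NIC IDs (0x8086:0x159b is the Intel E810 matched here), then list the net devices registered under each matching function. A standalone sketch of that pattern, with the ID pair hard-coded for illustration:

# Sysfs-based NIC discovery, mirroring the pci_devs/pci_net_devs loop above.
# The 0x8086/0x159b pair is the E810 id this run matched.
for pci in /sys/bus/pci/devices/*; do
  vendor=$(cat "$pci/vendor"); device=$(cat "$pci/device")
  if [[ $vendor == 0x8086 && $device == 0x159b ]]; then
    echo "Found ${pci##*/} ($vendor - $device)"
    pci_net_devs=("$pci"/net/*)            # netdevs bound to this function
    for net in "${pci_net_devs[@]}"; do
      [[ -e $net ]] && echo "Found net devices under ${pci##*/}: ${net##*/}"
    done
  fi
done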
00:25:45.107 18:12:33 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:45.108 18:12:33 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:45.108 18:12:33 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:45.108 18:12:33 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:45.108 18:12:33 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:45.108 18:12:33 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:45.108 18:12:33 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:45.108 18:12:33 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:45.108 18:12:33 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:45.108 18:12:33 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:45.108 18:12:33 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:45.108 18:12:33 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:45.108 18:12:33 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:45.108 18:12:33 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:45.108 18:12:33 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:45.108 18:12:33 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:45.108 18:12:33 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:45.108 18:12:33 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:45.108 18:12:33 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:45.108 18:12:33 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:45.108 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:45.108 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.129 ms 00:25:45.108 00:25:45.108 --- 10.0.0.2 ping statistics --- 00:25:45.108 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:45.108 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:25:45.108 18:12:33 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:45.108 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:45.108 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:25:45.108 00:25:45.108 --- 10.0.0.1 ping statistics --- 00:25:45.108 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:45.108 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:25:45.108 18:12:33 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:45.108 18:12:33 -- nvmf/common.sh@411 -- # return 0 00:25:45.108 18:12:33 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:25:45.108 18:12:33 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:45.108 18:12:33 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:25:45.108 18:12:33 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:25:45.108 18:12:33 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:45.108 18:12:33 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:25:45.108 18:12:33 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:25:45.108 18:12:33 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:25:45.108 18:12:33 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:25:45.108 18:12:33 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:45.108 18:12:33 -- common/autotest_common.sh@10 -- # set +x 00:25:45.108 18:12:33 -- nvmf/common.sh@470 -- # nvmfpid=3394403 00:25:45.108 18:12:33 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:25:45.108 18:12:34 -- nvmf/common.sh@471 -- # waitforlisten 3394403 00:25:45.108 18:12:34 -- common/autotest_common.sh@817 -- # '[' -z 3394403 ']' 00:25:45.108 18:12:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:45.108 18:12:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:45.108 18:12:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:45.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:45.108 18:12:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:45.108 18:12:34 -- common/autotest_common.sh@10 -- # set +x 00:25:45.367 [2024-04-15 18:12:34.093700] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:25:45.367 [2024-04-15 18:12:34.093869] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:45.367 EAL: No free 2048 kB hugepages reported on node 1 00:25:45.367 [2024-04-15 18:12:34.206146] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:45.367 [2024-04-15 18:12:34.305400] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:45.367 [2024-04-15 18:12:34.305470] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:45.367 [2024-04-15 18:12:34.305488] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:45.367 [2024-04-15 18:12:34.305502] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:45.367 [2024-04-15 18:12:34.305514] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
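From nvmf_tcp_init onward the rig is a single machine talking to itself across a network namespace: one NIC port is moved into the namespace for the target, both sides get a 10.0.0.0/24 address, TCP port 4420 is opened, and a ping in each direction verifies the path before nvmf_tgt is launched under ip netns exec. A condensed sketch of the same setup; eth_tgt, eth_ini and nvmf_tgt_ns are placeholder names standing in for the cvl_0_0/cvl_0_1 ports and the cvl_0_0_ns_spdk namespace the test uses:

# Namespace loopback rig, as built by nvmf_tcp_init in the trace above.
ip netns add nvmf_tgt_ns
ip link set eth_tgt netns nvmf_tgt_ns             # target-side port
ip addr add 10.0.0.1/24 dev eth_ini               # initiator side, host ns
ip netns exec nvmf_tgt_ns ip addr add 10.0.0.2/24 dev eth_tgt
ip link set eth_ini up
ip netns exec nvmf_tgt_ns ip link set eth_tgt up
ip netns exec nvmf_tgt_ns ip link set lo up
iptables -I INPUT 1 -i eth_ini -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                # host -> namespace
ip netns exec nvmf_tgt_ns ping -c 1 10.0.0.1      # namespace -> host
# The target then runs inside the namespace, as the nvmfpid launch line
# above shows (full binary path elided here):
ip netns exec nvmf_tgt_ns nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &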
00:25:45.367 [2024-04-15 18:12:34.305608] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:45.367 [2024-04-15 18:12:34.305665] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:45.367 [2024-04-15 18:12:34.305722] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:45.367 [2024-04-15 18:12:34.305719] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:25:46.303 18:12:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:46.303 18:12:35 -- common/autotest_common.sh@850 -- # return 0 00:25:46.303 18:12:35 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:25:46.303 18:12:35 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:46.303 18:12:35 -- common/autotest_common.sh@10 -- # set +x 00:25:46.303 18:12:35 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:46.303 18:12:35 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:46.303 18:12:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:46.303 18:12:35 -- common/autotest_common.sh@10 -- # set +x 00:25:46.303 [2024-04-15 18:12:35.152294] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:46.303 18:12:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:46.303 18:12:35 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:25:46.303 18:12:35 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:25:46.303 18:12:35 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:46.303 18:12:35 -- common/autotest_common.sh@10 -- # set +x 00:25:46.303 18:12:35 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:46.303 18:12:35 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:46.303 18:12:35 -- target/shutdown.sh@28 -- # cat 00:25:46.303 18:12:35 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:46.303 18:12:35 -- target/shutdown.sh@28 -- # cat 00:25:46.303 18:12:35 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:46.303 18:12:35 -- target/shutdown.sh@28 -- # cat 00:25:46.303 18:12:35 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:46.303 18:12:35 -- target/shutdown.sh@28 -- # cat 00:25:46.303 18:12:35 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:46.303 18:12:35 -- target/shutdown.sh@28 -- # cat 00:25:46.303 18:12:35 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:46.303 18:12:35 -- target/shutdown.sh@28 -- # cat 00:25:46.303 18:12:35 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:46.303 18:12:35 -- target/shutdown.sh@28 -- # cat 00:25:46.303 18:12:35 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:46.303 18:12:35 -- target/shutdown.sh@28 -- # cat 00:25:46.303 18:12:35 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:46.303 18:12:35 -- target/shutdown.sh@28 -- # cat 00:25:46.303 18:12:35 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:46.303 18:12:35 -- target/shutdown.sh@28 -- # cat 00:25:46.303 18:12:35 -- target/shutdown.sh@35 -- # rpc_cmd 00:25:46.303 18:12:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:46.303 18:12:35 -- common/autotest_common.sh@10 -- # set +x 00:25:46.303 Malloc1 00:25:46.303 [2024-04-15 18:12:35.236882] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:46.562 Malloc2 
00:25:46.562 Malloc3 00:25:46.562 Malloc4 00:25:46.562 Malloc5 00:25:46.562 Malloc6 00:25:46.562 Malloc7 00:25:46.820 Malloc8 00:25:46.820 Malloc9 00:25:46.820 Malloc10 00:25:46.820 18:12:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:46.820 18:12:35 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:25:46.820 18:12:35 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:46.820 18:12:35 -- common/autotest_common.sh@10 -- # set +x 00:25:46.820 18:12:35 -- target/shutdown.sh@102 -- # perfpid=3394711 00:25:46.820 18:12:35 -- target/shutdown.sh@103 -- # waitforlisten 3394711 /var/tmp/bdevperf.sock 00:25:46.820 18:12:35 -- common/autotest_common.sh@817 -- # '[' -z 3394711 ']' 00:25:46.820 18:12:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:46.820 18:12:35 -- target/shutdown.sh@101 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:25:46.820 18:12:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:46.820 18:12:35 -- target/shutdown.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:25:46.820 18:12:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:46.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:46.820 18:12:35 -- nvmf/common.sh@521 -- # config=() 00:25:46.820 18:12:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:46.820 18:12:35 -- nvmf/common.sh@521 -- # local subsystem config 00:25:46.820 18:12:35 -- common/autotest_common.sh@10 -- # set +x 00:25:46.820 18:12:35 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:46.820 18:12:35 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:46.820 { 00:25:46.820 "params": { 00:25:46.820 "name": "Nvme$subsystem", 00:25:46.820 "trtype": "$TEST_TRANSPORT", 00:25:46.820 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:46.820 "adrfam": "ipv4", 00:25:46.820 "trsvcid": "$NVMF_PORT", 00:25:46.820 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:46.820 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:46.820 "hdgst": ${hdgst:-false}, 00:25:46.820 "ddgst": ${ddgst:-false} 00:25:46.820 }, 00:25:46.820 "method": "bdev_nvme_attach_controller" 00:25:46.820 } 00:25:46.820 EOF 00:25:46.820 )") 00:25:46.820 18:12:35 -- nvmf/common.sh@543 -- # cat 00:25:46.820 18:12:35 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:46.820 18:12:35 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:46.820 { 00:25:46.820 "params": { 00:25:46.820 "name": "Nvme$subsystem", 00:25:46.820 "trtype": "$TEST_TRANSPORT", 00:25:46.820 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:46.820 "adrfam": "ipv4", 00:25:46.820 "trsvcid": "$NVMF_PORT", 00:25:46.820 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:46.820 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:46.820 "hdgst": ${hdgst:-false}, 00:25:46.820 "ddgst": ${ddgst:-false} 00:25:46.820 }, 00:25:46.820 "method": "bdev_nvme_attach_controller" 00:25:46.820 } 00:25:46.820 EOF 00:25:46.820 )") 00:25:46.820 18:12:35 -- nvmf/common.sh@543 -- # cat 00:25:46.820 18:12:35 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:46.820 18:12:35 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:46.820 { 00:25:46.820 "params": { 00:25:46.820 "name": "Nvme$subsystem", 00:25:46.820 "trtype": "$TEST_TRANSPORT", 00:25:46.820 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:25:46.820 "adrfam": "ipv4", 00:25:46.820 "trsvcid": "$NVMF_PORT", 00:25:46.820 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:46.820 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:46.820 "hdgst": ${hdgst:-false}, 00:25:46.820 "ddgst": ${ddgst:-false} 00:25:46.820 }, 00:25:46.820 "method": "bdev_nvme_attach_controller" 00:25:46.820 } 00:25:46.820 EOF 00:25:46.820 )") 00:25:46.820 18:12:35 -- nvmf/common.sh@543 -- # cat 00:25:46.820 18:12:35 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:46.820 18:12:35 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:46.820 { 00:25:46.820 "params": { 00:25:46.820 "name": "Nvme$subsystem", 00:25:46.820 "trtype": "$TEST_TRANSPORT", 00:25:46.820 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:46.820 "adrfam": "ipv4", 00:25:46.820 "trsvcid": "$NVMF_PORT", 00:25:46.820 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:46.820 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:46.820 "hdgst": ${hdgst:-false}, 00:25:46.820 "ddgst": ${ddgst:-false} 00:25:46.820 }, 00:25:46.820 "method": "bdev_nvme_attach_controller" 00:25:46.820 } 00:25:46.820 EOF 00:25:46.820 )") 00:25:46.820 18:12:35 -- nvmf/common.sh@543 -- # cat 00:25:46.820 18:12:35 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:46.820 18:12:35 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:46.820 { 00:25:46.820 "params": { 00:25:46.820 "name": "Nvme$subsystem", 00:25:46.820 "trtype": "$TEST_TRANSPORT", 00:25:46.820 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:46.820 "adrfam": "ipv4", 00:25:46.820 "trsvcid": "$NVMF_PORT", 00:25:46.820 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:46.820 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:46.820 "hdgst": ${hdgst:-false}, 00:25:46.820 "ddgst": ${ddgst:-false} 00:25:46.820 }, 00:25:46.820 "method": "bdev_nvme_attach_controller" 00:25:46.821 } 00:25:46.821 EOF 00:25:46.821 )") 00:25:46.821 18:12:35 -- nvmf/common.sh@543 -- # cat 00:25:46.821 18:12:35 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:46.821 18:12:35 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:46.821 { 00:25:46.821 "params": { 00:25:46.821 "name": "Nvme$subsystem", 00:25:46.821 "trtype": "$TEST_TRANSPORT", 00:25:46.821 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:46.821 "adrfam": "ipv4", 00:25:46.821 "trsvcid": "$NVMF_PORT", 00:25:46.821 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:46.821 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:46.821 "hdgst": ${hdgst:-false}, 00:25:46.821 "ddgst": ${ddgst:-false} 00:25:46.821 }, 00:25:46.821 "method": "bdev_nvme_attach_controller" 00:25:46.821 } 00:25:46.821 EOF 00:25:46.821 )") 00:25:46.821 18:12:35 -- nvmf/common.sh@543 -- # cat 00:25:46.821 18:12:35 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:46.821 18:12:35 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:46.821 { 00:25:46.821 "params": { 00:25:46.821 "name": "Nvme$subsystem", 00:25:46.821 "trtype": "$TEST_TRANSPORT", 00:25:46.821 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:46.821 "adrfam": "ipv4", 00:25:46.821 "trsvcid": "$NVMF_PORT", 00:25:46.821 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:46.821 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:46.821 "hdgst": ${hdgst:-false}, 00:25:46.821 "ddgst": ${ddgst:-false} 00:25:46.821 }, 00:25:46.821 "method": "bdev_nvme_attach_controller" 00:25:46.821 } 00:25:46.821 EOF 00:25:46.821 )") 00:25:46.821 18:12:35 -- nvmf/common.sh@543 -- # cat 00:25:46.821 18:12:35 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 
00:25:46.821 18:12:35 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:46.821 { 00:25:46.821 "params": { 00:25:46.821 "name": "Nvme$subsystem", 00:25:46.821 "trtype": "$TEST_TRANSPORT", 00:25:46.821 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:46.821 "adrfam": "ipv4", 00:25:46.821 "trsvcid": "$NVMF_PORT", 00:25:46.821 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:46.821 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:46.821 "hdgst": ${hdgst:-false}, 00:25:46.821 "ddgst": ${ddgst:-false} 00:25:46.821 }, 00:25:46.821 "method": "bdev_nvme_attach_controller" 00:25:46.821 } 00:25:46.821 EOF 00:25:46.821 )") 00:25:46.821 18:12:35 -- nvmf/common.sh@543 -- # cat 00:25:46.821 18:12:35 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:46.821 18:12:35 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:46.821 { 00:25:46.821 "params": { 00:25:46.821 "name": "Nvme$subsystem", 00:25:46.821 "trtype": "$TEST_TRANSPORT", 00:25:46.821 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:46.821 "adrfam": "ipv4", 00:25:46.821 "trsvcid": "$NVMF_PORT", 00:25:46.821 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:46.821 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:46.821 "hdgst": ${hdgst:-false}, 00:25:46.821 "ddgst": ${ddgst:-false} 00:25:46.821 }, 00:25:46.821 "method": "bdev_nvme_attach_controller" 00:25:46.821 } 00:25:46.821 EOF 00:25:46.821 )") 00:25:46.821 18:12:35 -- nvmf/common.sh@543 -- # cat 00:25:46.821 18:12:35 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:46.821 18:12:35 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:46.821 { 00:25:46.821 "params": { 00:25:46.821 "name": "Nvme$subsystem", 00:25:46.821 "trtype": "$TEST_TRANSPORT", 00:25:46.821 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:46.821 "adrfam": "ipv4", 00:25:46.821 "trsvcid": "$NVMF_PORT", 00:25:46.821 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:46.821 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:46.821 "hdgst": ${hdgst:-false}, 00:25:46.821 "ddgst": ${ddgst:-false} 00:25:46.821 }, 00:25:46.821 "method": "bdev_nvme_attach_controller" 00:25:46.821 } 00:25:46.821 EOF 00:25:46.821 )") 00:25:46.821 18:12:35 -- nvmf/common.sh@543 -- # cat 00:25:46.821 18:12:35 -- nvmf/common.sh@545 -- # jq . 
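The gen_nvmf_target_json helper traced here builds the bdevperf config mechanically: one heredoc fragment per subsystem is appended to a bash array, the fragments are joined with IFS=, and the result is validated and pretty-printed through jq before being handed to bdevperf on /dev/fd/63. A minimal sketch of that assembly pattern; the subsystem count, the addresses, and the plain-array wrapper are illustrative rather than the script's exact values:

# Per-subsystem JSON fragments joined with IFS=, then checked with jq,
# following the config+=("$(cat <<-EOF ...)") pattern in the trace above.
config=()
for subsystem in 1 2; do
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done
IFS=,
printf '[%s]\n' "${config[*]}" | jq .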
00:25:46.821 18:12:35 -- nvmf/common.sh@546 -- # IFS=, 00:25:46.821 18:12:35 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:25:46.821 "params": { 00:25:46.821 "name": "Nvme1", 00:25:46.821 "trtype": "tcp", 00:25:46.821 "traddr": "10.0.0.2", 00:25:46.821 "adrfam": "ipv4", 00:25:46.821 "trsvcid": "4420", 00:25:46.821 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:46.821 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:46.821 "hdgst": false, 00:25:46.821 "ddgst": false 00:25:46.821 }, 00:25:46.821 "method": "bdev_nvme_attach_controller" 00:25:46.821 },{ 00:25:46.821 "params": { 00:25:46.821 "name": "Nvme2", 00:25:46.821 "trtype": "tcp", 00:25:46.821 "traddr": "10.0.0.2", 00:25:46.821 "adrfam": "ipv4", 00:25:46.821 "trsvcid": "4420", 00:25:46.821 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:46.821 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:46.821 "hdgst": false, 00:25:46.821 "ddgst": false 00:25:46.821 }, 00:25:46.821 "method": "bdev_nvme_attach_controller" 00:25:46.821 },{ 00:25:46.821 "params": { 00:25:46.821 "name": "Nvme3", 00:25:46.821 "trtype": "tcp", 00:25:46.821 "traddr": "10.0.0.2", 00:25:46.821 "adrfam": "ipv4", 00:25:46.821 "trsvcid": "4420", 00:25:46.821 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:46.821 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:46.821 "hdgst": false, 00:25:46.821 "ddgst": false 00:25:46.821 }, 00:25:46.821 "method": "bdev_nvme_attach_controller" 00:25:46.821 },{ 00:25:46.821 "params": { 00:25:46.821 "name": "Nvme4", 00:25:46.821 "trtype": "tcp", 00:25:46.821 "traddr": "10.0.0.2", 00:25:46.821 "adrfam": "ipv4", 00:25:46.821 "trsvcid": "4420", 00:25:46.821 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:46.821 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:46.821 "hdgst": false, 00:25:46.821 "ddgst": false 00:25:46.821 }, 00:25:46.821 "method": "bdev_nvme_attach_controller" 00:25:46.821 },{ 00:25:46.821 "params": { 00:25:46.821 "name": "Nvme5", 00:25:46.821 "trtype": "tcp", 00:25:46.821 "traddr": "10.0.0.2", 00:25:46.821 "adrfam": "ipv4", 00:25:46.821 "trsvcid": "4420", 00:25:46.821 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:25:46.821 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:46.821 "hdgst": false, 00:25:46.821 "ddgst": false 00:25:46.821 }, 00:25:46.821 "method": "bdev_nvme_attach_controller" 00:25:46.821 },{ 00:25:46.821 "params": { 00:25:46.821 "name": "Nvme6", 00:25:46.821 "trtype": "tcp", 00:25:46.821 "traddr": "10.0.0.2", 00:25:46.821 "adrfam": "ipv4", 00:25:46.821 "trsvcid": "4420", 00:25:46.821 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:46.821 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:46.821 "hdgst": false, 00:25:46.821 "ddgst": false 00:25:46.821 }, 00:25:46.821 "method": "bdev_nvme_attach_controller" 00:25:46.821 },{ 00:25:46.821 "params": { 00:25:46.821 "name": "Nvme7", 00:25:46.821 "trtype": "tcp", 00:25:46.821 "traddr": "10.0.0.2", 00:25:46.821 "adrfam": "ipv4", 00:25:46.821 "trsvcid": "4420", 00:25:46.821 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:46.821 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:25:46.821 "hdgst": false, 00:25:46.821 "ddgst": false 00:25:46.821 }, 00:25:46.821 "method": "bdev_nvme_attach_controller" 00:25:46.821 },{ 00:25:46.821 "params": { 00:25:46.821 "name": "Nvme8", 00:25:46.821 "trtype": "tcp", 00:25:46.821 "traddr": "10.0.0.2", 00:25:46.821 "adrfam": "ipv4", 00:25:46.821 "trsvcid": "4420", 00:25:46.821 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:25:46.821 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:25:46.821 "hdgst": false, 00:25:46.821 "ddgst": false 00:25:46.821 }, 00:25:46.821 "method": 
"bdev_nvme_attach_controller" 00:25:46.821 },{ 00:25:46.821 "params": { 00:25:46.821 "name": "Nvme9", 00:25:46.821 "trtype": "tcp", 00:25:46.821 "traddr": "10.0.0.2", 00:25:46.821 "adrfam": "ipv4", 00:25:46.821 "trsvcid": "4420", 00:25:46.821 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:46.821 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:25:46.821 "hdgst": false, 00:25:46.821 "ddgst": false 00:25:46.821 }, 00:25:46.821 "method": "bdev_nvme_attach_controller" 00:25:46.821 },{ 00:25:46.821 "params": { 00:25:46.821 "name": "Nvme10", 00:25:46.821 "trtype": "tcp", 00:25:46.821 "traddr": "10.0.0.2", 00:25:46.821 "adrfam": "ipv4", 00:25:46.821 "trsvcid": "4420", 00:25:46.821 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:46.821 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:46.821 "hdgst": false, 00:25:46.821 "ddgst": false 00:25:46.821 }, 00:25:46.821 "method": "bdev_nvme_attach_controller" 00:25:46.821 }' 00:25:46.821 [2024-04-15 18:12:35.753347] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:25:46.821 [2024-04-15 18:12:35.753447] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3394711 ] 00:25:47.080 EAL: No free 2048 kB hugepages reported on node 1 00:25:47.080 [2024-04-15 18:12:35.823666] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:47.080 [2024-04-15 18:12:35.910210] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:48.980 Running I/O for 10 seconds... 00:25:48.980 18:12:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:48.980 18:12:37 -- common/autotest_common.sh@850 -- # return 0 00:25:48.980 18:12:37 -- target/shutdown.sh@104 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:25:48.980 18:12:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:48.980 18:12:37 -- common/autotest_common.sh@10 -- # set +x 00:25:48.980 18:12:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:48.980 18:12:37 -- target/shutdown.sh@106 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:25:48.980 18:12:37 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:25:48.980 18:12:37 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:25:48.980 18:12:37 -- target/shutdown.sh@57 -- # local ret=1 00:25:48.980 18:12:37 -- target/shutdown.sh@58 -- # local i 00:25:48.980 18:12:37 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:25:48.980 18:12:37 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:25:48.980 18:12:37 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:25:48.980 18:12:37 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:48.980 18:12:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:48.980 18:12:37 -- common/autotest_common.sh@10 -- # set +x 00:25:48.980 18:12:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:48.980 18:12:37 -- target/shutdown.sh@60 -- # read_io_count=67 00:25:48.980 18:12:37 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:25:48.980 18:12:37 -- target/shutdown.sh@67 -- # sleep 0.25 00:25:49.238 18:12:38 -- target/shutdown.sh@59 -- # (( i-- )) 00:25:49.238 18:12:38 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:25:49.238 18:12:38 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:25:49.238 18:12:38 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:49.238 18:12:38 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:25:49.238 18:12:38 -- common/autotest_common.sh@10 -- # set +x 00:25:49.496 18:12:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:49.496 18:12:38 -- target/shutdown.sh@60 -- # read_io_count=131 00:25:49.496 18:12:38 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:25:49.496 18:12:38 -- target/shutdown.sh@64 -- # ret=0 00:25:49.496 18:12:38 -- target/shutdown.sh@65 -- # break 00:25:49.496 18:12:38 -- target/shutdown.sh@69 -- # return 0 00:25:49.496 18:12:38 -- target/shutdown.sh@109 -- # killprocess 3394711 00:25:49.496 18:12:38 -- common/autotest_common.sh@936 -- # '[' -z 3394711 ']' 00:25:49.496 18:12:38 -- common/autotest_common.sh@940 -- # kill -0 3394711 00:25:49.496 18:12:38 -- common/autotest_common.sh@941 -- # uname 00:25:49.496 18:12:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:49.496 18:12:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3394711 00:25:49.496 18:12:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:49.496 18:12:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:49.496 18:12:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3394711' 00:25:49.496 killing process with pid 3394711 00:25:49.496 18:12:38 -- common/autotest_common.sh@955 -- # kill 3394711 00:25:49.496 18:12:38 -- common/autotest_common.sh@960 -- # wait 3394711 00:25:49.496 Received shutdown signal, test time was about 0.758966 seconds 00:25:49.496 00:25:49.496 Latency(us) 00:25:49.496 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:49.496 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:49.496 Verification LBA range: start 0x0 length 0x400 00:25:49.496 Nvme1n1 : 0.74 265.88 16.62 0.00 0.00 235570.93 6310.87 257872.02 00:25:49.496 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:49.496 Verification LBA range: start 0x0 length 0x400 00:25:49.496 Nvme2n1 : 0.71 180.80 11.30 0.00 0.00 339564.47 25826.04 245444.46 00:25:49.496 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:49.496 Verification LBA range: start 0x0 length 0x400 00:25:49.496 Nvme3n1 : 0.74 257.81 16.11 0.00 0.00 232313.49 18641.35 264085.81 00:25:49.496 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:49.496 Verification LBA range: start 0x0 length 0x400 00:25:49.496 Nvme4n1 : 0.74 259.66 16.23 0.00 0.00 224256.25 19515.16 240784.12 00:25:49.496 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:49.496 Verification LBA range: start 0x0 length 0x400 00:25:49.496 Nvme5n1 : 0.72 177.55 11.10 0.00 0.00 318556.92 21554.06 279620.27 00:25:49.496 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:49.496 Verification LBA range: start 0x0 length 0x400 00:25:49.496 Nvme6n1 : 0.75 256.08 16.00 0.00 0.00 214984.12 18835.53 256318.58 00:25:49.496 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:49.496 Verification LBA range: start 0x0 length 0x400 00:25:49.496 Nvme7n1 : 0.75 255.09 15.94 0.00 0.00 210779.65 20388.98 236123.78 00:25:49.496 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:49.496 Verification LBA range: start 0x0 length 0x400 00:25:49.496 Nvme8n1 : 0.76 253.24 15.83 0.00 0.00 206987.82 20583.16 262532.36 00:25:49.496 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:49.496 
Verification LBA range: start 0x0 length 0x400 00:25:49.496 Nvme9n1 : 0.71 187.41 11.71 0.00 0.00 262018.80 3956.43 257872.02 00:25:49.496 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:49.496 Verification LBA range: start 0x0 length 0x400 00:25:49.496 Nvme10n1 : 0.73 175.72 10.98 0.00 0.00 277832.44 20777.34 292047.83 00:25:49.496 =================================================================================================================== 00:25:49.496 Total : 2269.26 141.83 0.00 0.00 245046.45 3956.43 292047.83 00:25:49.754 18:12:38 -- target/shutdown.sh@112 -- # sleep 1 00:25:50.685 18:12:39 -- target/shutdown.sh@113 -- # kill -0 3394403 00:25:50.685 18:12:39 -- target/shutdown.sh@115 -- # stoptarget 00:25:50.685 18:12:39 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:25:50.685 18:12:39 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:25:50.685 18:12:39 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:50.685 18:12:39 -- target/shutdown.sh@45 -- # nvmftestfini 00:25:50.685 18:12:39 -- nvmf/common.sh@477 -- # nvmfcleanup 00:25:50.685 18:12:39 -- nvmf/common.sh@117 -- # sync 00:25:50.685 18:12:39 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:50.685 18:12:39 -- nvmf/common.sh@120 -- # set +e 00:25:50.685 18:12:39 -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:50.685 18:12:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:50.685 rmmod nvme_tcp 00:25:50.944 rmmod nvme_fabrics 00:25:50.944 rmmod nvme_keyring 00:25:50.944 18:12:39 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:50.944 18:12:39 -- nvmf/common.sh@124 -- # set -e 00:25:50.944 18:12:39 -- nvmf/common.sh@125 -- # return 0 00:25:50.944 18:12:39 -- nvmf/common.sh@478 -- # '[' -n 3394403 ']' 00:25:50.944 18:12:39 -- nvmf/common.sh@479 -- # killprocess 3394403 00:25:50.944 18:12:39 -- common/autotest_common.sh@936 -- # '[' -z 3394403 ']' 00:25:50.944 18:12:39 -- common/autotest_common.sh@940 -- # kill -0 3394403 00:25:50.944 18:12:39 -- common/autotest_common.sh@941 -- # uname 00:25:50.944 18:12:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:50.944 18:12:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3394403 00:25:50.944 18:12:39 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:50.944 18:12:39 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:50.944 18:12:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3394403' 00:25:50.944 killing process with pid 3394403 00:25:50.944 18:12:39 -- common/autotest_common.sh@955 -- # kill 3394403 00:25:50.944 18:12:39 -- common/autotest_common.sh@960 -- # wait 3394403 00:25:51.510 18:12:40 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:25:51.510 18:12:40 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:25:51.510 18:12:40 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:25:51.510 18:12:40 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:51.510 18:12:40 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:51.510 18:12:40 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:51.510 18:12:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:51.510 18:12:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:53.409 18:12:42 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:53.409 00:25:53.409 
real 0m8.463s 00:25:53.409 user 0m26.453s 00:25:53.409 sys 0m1.608s 00:25:53.409 18:12:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:53.409 18:12:42 -- common/autotest_common.sh@10 -- # set +x 00:25:53.409 ************************************ 00:25:53.409 END TEST nvmf_shutdown_tc2 00:25:53.409 ************************************ 00:25:53.409 18:12:42 -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:25:53.409 18:12:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:53.409 18:12:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:53.409 18:12:42 -- common/autotest_common.sh@10 -- # set +x 00:25:53.668 ************************************ 00:25:53.668 START TEST nvmf_shutdown_tc3 00:25:53.668 ************************************ 00:25:53.668 18:12:42 -- common/autotest_common.sh@1111 -- # nvmf_shutdown_tc3 00:25:53.668 18:12:42 -- target/shutdown.sh@120 -- # starttarget 00:25:53.668 18:12:42 -- target/shutdown.sh@15 -- # nvmftestinit 00:25:53.668 18:12:42 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:25:53.668 18:12:42 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:53.668 18:12:42 -- nvmf/common.sh@437 -- # prepare_net_devs 00:25:53.668 18:12:42 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:25:53.668 18:12:42 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:25:53.668 18:12:42 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:53.668 18:12:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:53.668 18:12:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:53.668 18:12:42 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:25:53.668 18:12:42 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:25:53.668 18:12:42 -- nvmf/common.sh@285 -- # xtrace_disable 00:25:53.668 18:12:42 -- common/autotest_common.sh@10 -- # set +x 00:25:53.668 18:12:42 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:53.668 18:12:42 -- nvmf/common.sh@291 -- # pci_devs=() 00:25:53.668 18:12:42 -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:53.668 18:12:42 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:53.668 18:12:42 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:53.668 18:12:42 -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:53.668 18:12:42 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:53.668 18:12:42 -- nvmf/common.sh@295 -- # net_devs=() 00:25:53.668 18:12:42 -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:53.668 18:12:42 -- nvmf/common.sh@296 -- # e810=() 00:25:53.668 18:12:42 -- nvmf/common.sh@296 -- # local -ga e810 00:25:53.668 18:12:42 -- nvmf/common.sh@297 -- # x722=() 00:25:53.668 18:12:42 -- nvmf/common.sh@297 -- # local -ga x722 00:25:53.668 18:12:42 -- nvmf/common.sh@298 -- # mlx=() 00:25:53.668 18:12:42 -- nvmf/common.sh@298 -- # local -ga mlx 00:25:53.668 18:12:42 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:53.668 18:12:42 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:53.668 18:12:42 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:53.668 18:12:42 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:53.668 18:12:42 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:53.668 18:12:42 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:53.668 18:12:42 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:53.668 18:12:42 -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:53.668 18:12:42 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:53.668 18:12:42 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:53.668 18:12:42 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:53.668 18:12:42 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:53.668 18:12:42 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:53.668 18:12:42 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:53.668 18:12:42 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:53.668 18:12:42 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:53.668 18:12:42 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:53.668 18:12:42 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:53.668 18:12:42 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:25:53.668 Found 0000:84:00.0 (0x8086 - 0x159b) 00:25:53.668 18:12:42 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:53.668 18:12:42 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:53.668 18:12:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:53.668 18:12:42 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:53.668 18:12:42 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:53.668 18:12:42 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:53.668 18:12:42 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:25:53.668 Found 0000:84:00.1 (0x8086 - 0x159b) 00:25:53.668 18:12:42 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:53.668 18:12:42 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:53.668 18:12:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:53.668 18:12:42 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:53.668 18:12:42 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:53.668 18:12:42 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:53.668 18:12:42 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:53.668 18:12:42 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:53.668 18:12:42 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:53.668 18:12:42 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:53.668 18:12:42 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:25:53.668 18:12:42 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:53.668 18:12:42 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:25:53.668 Found net devices under 0000:84:00.0: cvl_0_0 00:25:53.668 18:12:42 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:25:53.668 18:12:42 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:53.668 18:12:42 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:53.668 18:12:42 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:25:53.668 18:12:42 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:53.668 18:12:42 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:25:53.668 Found net devices under 0000:84:00.1: cvl_0_1 00:25:53.668 18:12:42 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:25:53.668 18:12:42 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:25:53.668 18:12:42 -- nvmf/common.sh@403 -- # is_hw=yes 00:25:53.668 18:12:42 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:25:53.668 18:12:42 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:25:53.668 18:12:42 -- 
nvmf/common.sh@407 -- # nvmf_tcp_init 00:25:53.668 18:12:42 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:53.668 18:12:42 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:53.668 18:12:42 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:53.668 18:12:42 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:53.668 18:12:42 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:53.668 18:12:42 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:53.668 18:12:42 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:53.669 18:12:42 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:53.669 18:12:42 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:53.669 18:12:42 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:53.669 18:12:42 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:53.669 18:12:42 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:53.669 18:12:42 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:53.669 18:12:42 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:53.669 18:12:42 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:53.669 18:12:42 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:53.669 18:12:42 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:53.669 18:12:42 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:53.669 18:12:42 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:53.669 18:12:42 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:53.669 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:53.669 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.262 ms 00:25:53.669 00:25:53.669 --- 10.0.0.2 ping statistics --- 00:25:53.669 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:53.669 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:25:53.669 18:12:42 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:53.669 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:53.669 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:25:53.669 00:25:53.669 --- 10.0.0.1 ping statistics --- 00:25:53.669 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:53.669 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:25:53.669 18:12:42 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:53.669 18:12:42 -- nvmf/common.sh@411 -- # return 0 00:25:53.669 18:12:42 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:25:53.669 18:12:42 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:53.669 18:12:42 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:25:53.669 18:12:42 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:25:53.669 18:12:42 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:53.669 18:12:42 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:25:53.669 18:12:42 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:25:53.669 18:12:42 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:25:53.669 18:12:42 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:25:53.669 18:12:42 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:53.669 18:12:42 -- common/autotest_common.sh@10 -- # set +x 00:25:53.669 18:12:42 -- nvmf/common.sh@470 -- # nvmfpid=3395615 00:25:53.669 18:12:42 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:25:53.669 18:12:42 -- nvmf/common.sh@471 -- # waitforlisten 3395615 00:25:53.669 18:12:42 -- common/autotest_common.sh@817 -- # '[' -z 3395615 ']' 00:25:53.669 18:12:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:53.669 18:12:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:53.669 18:12:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:53.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:53.669 18:12:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:53.669 18:12:42 -- common/autotest_common.sh@10 -- # set +x 00:25:53.926 [2024-04-15 18:12:42.630569] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:25:53.926 [2024-04-15 18:12:42.630661] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:53.926 EAL: No free 2048 kB hugepages reported on node 1 00:25:53.926 [2024-04-15 18:12:42.708778] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:53.926 [2024-04-15 18:12:42.806069] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:53.927 [2024-04-15 18:12:42.806136] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:53.927 [2024-04-15 18:12:42.806153] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:53.927 [2024-04-15 18:12:42.806168] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:53.927 [2024-04-15 18:12:42.806180] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
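As with the earlier targets, this one is pinned with -m 0x1E, and the reactor notices that follow confirm the decoded set: cores 1 through 4. The mask is an ordinary bitmap, one bit per core; a throwaway decoder (not part of the test scripts) makes the mapping explicit:

# Decode an SPDK -m core mask; 0x1E has bits 1-4 set -> cores 1 2 3 4.
mask=0x1E
for core in $(seq 0 31); do
  (( (mask >> core) & 1 )) && echo "core $core"
done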
00:25:53.927 [2024-04-15 18:12:42.806278] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:53.927 [2024-04-15 18:12:42.806332] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:53.927 [2024-04-15 18:12:42.806386] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:25:53.927 [2024-04-15 18:12:42.806388] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:54.185 18:12:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:54.185 18:12:42 -- common/autotest_common.sh@850 -- # return 0 00:25:54.185 18:12:42 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:25:54.185 18:12:42 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:54.185 18:12:42 -- common/autotest_common.sh@10 -- # set +x 00:25:54.185 18:12:42 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:54.185 18:12:42 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:54.185 18:12:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:54.185 18:12:42 -- common/autotest_common.sh@10 -- # set +x 00:25:54.185 [2024-04-15 18:12:42.966069] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:54.185 18:12:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:54.185 18:12:42 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:25:54.185 18:12:42 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:25:54.185 18:12:42 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:54.185 18:12:42 -- common/autotest_common.sh@10 -- # set +x 00:25:54.185 18:12:42 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:54.185 18:12:42 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:54.185 18:12:42 -- target/shutdown.sh@28 -- # cat 00:25:54.185 18:12:42 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:54.185 18:12:42 -- target/shutdown.sh@28 -- # cat 00:25:54.185 18:12:42 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:54.185 18:12:42 -- target/shutdown.sh@28 -- # cat 00:25:54.185 18:12:42 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:54.185 18:12:42 -- target/shutdown.sh@28 -- # cat 00:25:54.185 18:12:42 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:54.185 18:12:42 -- target/shutdown.sh@28 -- # cat 00:25:54.185 18:12:42 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:54.185 18:12:42 -- target/shutdown.sh@28 -- # cat 00:25:54.185 18:12:42 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:54.185 18:12:42 -- target/shutdown.sh@28 -- # cat 00:25:54.185 18:12:42 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:54.185 18:12:42 -- target/shutdown.sh@28 -- # cat 00:25:54.185 18:12:43 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:54.185 18:12:43 -- target/shutdown.sh@28 -- # cat 00:25:54.185 18:12:43 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:54.185 18:12:43 -- target/shutdown.sh@28 -- # cat 00:25:54.185 18:12:43 -- target/shutdown.sh@35 -- # rpc_cmd 00:25:54.185 18:12:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:54.185 18:12:43 -- common/autotest_common.sh@10 -- # set +x 00:25:54.185 Malloc1 00:25:54.185 [2024-04-15 18:12:43.052383] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:54.185 Malloc2 
00:25:54.185 Malloc3
00:25:54.444 Malloc4
00:25:54.444 Malloc5
00:25:54.444 Malloc6
00:25:54.444 Malloc7
00:25:54.444 Malloc8
00:25:54.702 Malloc9
00:25:54.702 Malloc10
00:25:54.702 18:12:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:25:54.702 18:12:43 -- target/shutdown.sh@36 -- # timing_exit create_subsystems
00:25:54.702 18:12:43 -- common/autotest_common.sh@716 -- # xtrace_disable
00:25:54.702 18:12:43 -- common/autotest_common.sh@10 -- # set +x
00:25:54.702 18:12:43 -- target/shutdown.sh@124 -- # perfpid=3395795
00:25:54.702 18:12:43 -- target/shutdown.sh@125 -- # waitforlisten 3395795 /var/tmp/bdevperf.sock
00:25:54.702 18:12:43 -- common/autotest_common.sh@817 -- # '[' -z 3395795 ']'
00:25:54.702 18:12:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:25:54.702 18:12:43 -- target/shutdown.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10
00:25:54.702 18:12:43 -- target/shutdown.sh@123 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10
00:25:54.702 18:12:43 -- common/autotest_common.sh@822 -- # local max_retries=100
00:25:54.702 18:12:43 -- nvmf/common.sh@521 -- # config=()
00:25:54.702 18:12:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:25:54.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:25:54.702 18:12:43 -- nvmf/common.sh@521 -- # local subsystem config
00:25:54.702 18:12:43 -- common/autotest_common.sh@826 -- # xtrace_disable
00:25:54.702 18:12:43 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}"
00:25:54.702 18:12:43 -- common/autotest_common.sh@10 -- # set +x
00:25:54.702 18:12:43 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF
00:25:54.702 {
00:25:54.702 "params": {
00:25:54.702 "name": "Nvme$subsystem",
00:25:54.702 "trtype": "$TEST_TRANSPORT",
00:25:54.702 "traddr": "$NVMF_FIRST_TARGET_IP",
00:25:54.702 "adrfam": "ipv4",
00:25:54.702 "trsvcid": "$NVMF_PORT",
00:25:54.702 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:25:54.702 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:25:54.702 "hdgst": ${hdgst:-false},
00:25:54.702 "ddgst": ${ddgst:-false}
00:25:54.702 },
00:25:54.702 "method": "bdev_nvme_attach_controller"
00:25:54.702 }
00:25:54.702 EOF
00:25:54.702 )")
00:25:54.702 18:12:43 -- nvmf/common.sh@543 -- # cat
[the for-subsystem / config+=(heredoc) / cat trace above repeats identically for each of the 10 subsystems]
00:25:54.703 18:12:43 -- nvmf/common.sh@545 -- # jq .
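Condensed, the gen_nvmf_target_json trace above does three things: build one bdev_nvme_attach_controller fragment per subsystem with a quoted heredoc, join the fragments with IFS=',', and validate/pretty-print the result with jq. A simplified sketch of the same pattern; the bare [...] array wrapper is an assumption for brevity, since the real helper in nvmf/common.sh emits the full bdevperf config document:

# Emit a comma-joined, jq-validated list of attach-controller entries (sketch).
gen_attach_json() {
  local config=() subsystem
  for subsystem in "$@"; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
  done
  local IFS=,
  printf '[%s]\n' "${config[*]}" | jq .
}

bdevperf then reads the generated JSON through process substitution, which is why the command line above shows --json /dev/fd/63.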
00:25:54.703 18:12:43 -- nvmf/common.sh@546 -- # IFS=, 00:25:54.703 18:12:43 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:25:54.703 "params": { 00:25:54.703 "name": "Nvme1", 00:25:54.703 "trtype": "tcp", 00:25:54.703 "traddr": "10.0.0.2", 00:25:54.703 "adrfam": "ipv4", 00:25:54.703 "trsvcid": "4420", 00:25:54.703 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:54.703 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:54.703 "hdgst": false, 00:25:54.703 "ddgst": false 00:25:54.703 }, 00:25:54.703 "method": "bdev_nvme_attach_controller" 00:25:54.703 },{ 00:25:54.703 "params": { 00:25:54.703 "name": "Nvme2", 00:25:54.703 "trtype": "tcp", 00:25:54.703 "traddr": "10.0.0.2", 00:25:54.703 "adrfam": "ipv4", 00:25:54.703 "trsvcid": "4420", 00:25:54.703 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:54.703 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:54.703 "hdgst": false, 00:25:54.703 "ddgst": false 00:25:54.703 }, 00:25:54.703 "method": "bdev_nvme_attach_controller" 00:25:54.703 },{ 00:25:54.703 "params": { 00:25:54.703 "name": "Nvme3", 00:25:54.703 "trtype": "tcp", 00:25:54.703 "traddr": "10.0.0.2", 00:25:54.703 "adrfam": "ipv4", 00:25:54.703 "trsvcid": "4420", 00:25:54.703 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:54.703 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:54.703 "hdgst": false, 00:25:54.703 "ddgst": false 00:25:54.703 }, 00:25:54.703 "method": "bdev_nvme_attach_controller" 00:25:54.703 },{ 00:25:54.703 "params": { 00:25:54.703 "name": "Nvme4", 00:25:54.703 "trtype": "tcp", 00:25:54.703 "traddr": "10.0.0.2", 00:25:54.703 "adrfam": "ipv4", 00:25:54.703 "trsvcid": "4420", 00:25:54.703 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:54.703 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:54.703 "hdgst": false, 00:25:54.703 "ddgst": false 00:25:54.703 }, 00:25:54.703 "method": "bdev_nvme_attach_controller" 00:25:54.703 },{ 00:25:54.703 "params": { 00:25:54.703 "name": "Nvme5", 00:25:54.703 "trtype": "tcp", 00:25:54.703 "traddr": "10.0.0.2", 00:25:54.703 "adrfam": "ipv4", 00:25:54.703 "trsvcid": "4420", 00:25:54.703 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:25:54.703 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:54.703 "hdgst": false, 00:25:54.703 "ddgst": false 00:25:54.703 }, 00:25:54.703 "method": "bdev_nvme_attach_controller" 00:25:54.703 },{ 00:25:54.703 "params": { 00:25:54.703 "name": "Nvme6", 00:25:54.703 "trtype": "tcp", 00:25:54.703 "traddr": "10.0.0.2", 00:25:54.703 "adrfam": "ipv4", 00:25:54.703 "trsvcid": "4420", 00:25:54.703 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:54.703 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:54.703 "hdgst": false, 00:25:54.703 "ddgst": false 00:25:54.703 }, 00:25:54.703 "method": "bdev_nvme_attach_controller" 00:25:54.703 },{ 00:25:54.703 "params": { 00:25:54.703 "name": "Nvme7", 00:25:54.703 "trtype": "tcp", 00:25:54.703 "traddr": "10.0.0.2", 00:25:54.703 "adrfam": "ipv4", 00:25:54.703 "trsvcid": "4420", 00:25:54.703 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:54.703 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:25:54.703 "hdgst": false, 00:25:54.703 "ddgst": false 00:25:54.703 }, 00:25:54.703 "method": "bdev_nvme_attach_controller" 00:25:54.703 },{ 00:25:54.703 "params": { 00:25:54.703 "name": "Nvme8", 00:25:54.703 "trtype": "tcp", 00:25:54.703 "traddr": "10.0.0.2", 00:25:54.703 "adrfam": "ipv4", 00:25:54.703 "trsvcid": "4420", 00:25:54.703 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:25:54.703 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:25:54.703 "hdgst": false, 00:25:54.703 "ddgst": false 00:25:54.703 }, 00:25:54.703 "method": 
"bdev_nvme_attach_controller" 00:25:54.703 },{ 00:25:54.703 "params": { 00:25:54.703 "name": "Nvme9", 00:25:54.703 "trtype": "tcp", 00:25:54.703 "traddr": "10.0.0.2", 00:25:54.703 "adrfam": "ipv4", 00:25:54.703 "trsvcid": "4420", 00:25:54.703 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:54.703 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:25:54.703 "hdgst": false, 00:25:54.703 "ddgst": false 00:25:54.703 }, 00:25:54.703 "method": "bdev_nvme_attach_controller" 00:25:54.703 },{ 00:25:54.703 "params": { 00:25:54.703 "name": "Nvme10", 00:25:54.703 "trtype": "tcp", 00:25:54.703 "traddr": "10.0.0.2", 00:25:54.703 "adrfam": "ipv4", 00:25:54.703 "trsvcid": "4420", 00:25:54.703 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:54.703 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:54.703 "hdgst": false, 00:25:54.703 "ddgst": false 00:25:54.703 }, 00:25:54.703 "method": "bdev_nvme_attach_controller" 00:25:54.703 }' 00:25:54.703 [2024-04-15 18:12:43.591161] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:25:54.703 [2024-04-15 18:12:43.591247] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3395795 ] 00:25:54.703 EAL: No free 2048 kB hugepages reported on node 1 00:25:54.961 [2024-04-15 18:12:43.665878] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:54.961 [2024-04-15 18:12:43.757634] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:56.863 Running I/O for 10 seconds... 00:25:57.122 18:12:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:57.122 18:12:45 -- common/autotest_common.sh@850 -- # return 0 00:25:57.122 18:12:45 -- target/shutdown.sh@126 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:25:57.122 18:12:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:57.122 18:12:45 -- common/autotest_common.sh@10 -- # set +x 00:25:57.122 18:12:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:57.122 18:12:45 -- target/shutdown.sh@129 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:57.122 18:12:45 -- target/shutdown.sh@131 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:25:57.122 18:12:45 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:25:57.122 18:12:45 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:25:57.122 18:12:45 -- target/shutdown.sh@57 -- # local ret=1 00:25:57.122 18:12:45 -- target/shutdown.sh@58 -- # local i 00:25:57.122 18:12:45 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:25:57.122 18:12:45 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:25:57.122 18:12:45 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:57.122 18:12:45 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:25:57.122 18:12:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:57.122 18:12:45 -- common/autotest_common.sh@10 -- # set +x 00:25:57.122 18:12:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:57.122 18:12:45 -- target/shutdown.sh@60 -- # read_io_count=3 00:25:57.122 18:12:45 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:25:57.122 18:12:45 -- target/shutdown.sh@67 -- # sleep 0.25 00:25:57.381 18:12:46 -- target/shutdown.sh@59 -- # (( i-- )) 00:25:57.381 18:12:46 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:25:57.381 18:12:46 -- target/shutdown.sh@60 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:25:57.381 18:12:46 -- common/autotest_common.sh@549 -- # xtrace_disable
00:25:57.381 18:12:46 -- common/autotest_common.sh@10 -- # set +x
00:25:57.381 18:12:46 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops'
00:25:57.381 18:12:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:25:57.381 18:12:46 -- target/shutdown.sh@60 -- # read_io_count=72
00:25:57.381 18:12:46 -- target/shutdown.sh@63 -- # '[' 72 -ge 100 ']'
00:25:57.381 18:12:46 -- target/shutdown.sh@67 -- # sleep 0.25
00:25:57.649 18:12:46 -- target/shutdown.sh@59 -- # (( i-- ))
00:25:57.649 18:12:46 -- target/shutdown.sh@59 -- # (( i != 0 ))
00:25:57.649 18:12:46 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:25:57.649 18:12:46 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops'
00:25:57.649 18:12:46 -- common/autotest_common.sh@549 -- # xtrace_disable
00:25:57.649 18:12:46 -- common/autotest_common.sh@10 -- # set +x
00:25:57.649 18:12:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:25:57.649 18:12:46 -- target/shutdown.sh@60 -- # read_io_count=147
00:25:57.649 18:12:46 -- target/shutdown.sh@63 -- # '[' 147 -ge 100 ']'
00:25:57.649 18:12:46 -- target/shutdown.sh@64 -- # ret=0
00:25:57.649 18:12:46 -- target/shutdown.sh@65 -- # break
00:25:57.649 18:12:46 -- target/shutdown.sh@69 -- # return 0
00:25:57.649 18:12:46 -- target/shutdown.sh@134 -- # killprocess 3395615
00:25:57.649 18:12:46 -- common/autotest_common.sh@936 -- # '[' -z 3395615 ']'
00:25:57.649 18:12:46 -- common/autotest_common.sh@940 -- # kill -0 3395615
00:25:57.649 18:12:46 -- common/autotest_common.sh@941 -- # uname
00:25:57.649 18:12:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:25:57.649 18:12:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3395615
00:25:57.649 18:12:46 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:25:57.649 18:12:46 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:25:57.649 18:12:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3395615'
00:25:57.649 killing process with pid 3395615
00:25:57.649 18:12:46 -- common/autotest_common.sh@955 -- # kill 3395615
00:25:57.649 18:12:46 -- common/autotest_common.sh@960 -- # wait 3395615
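The waitforio loop that just completed polls bdevperf's RPC socket until Nvme1n1 shows at least 100 completed reads; here read_io_count went 3, then 72, then 147, at which point ret=0, the loop breaks, and killprocess takes down the target. A standalone sketch of that poll, assuming an SPDK checkout; the direct rpc.py call and the helper name are assumptions, since the test itself goes through the rpc_cmd wrapper:

# Poll bdev iostat until the bdev has served enough reads, or give up (sketch).
waitforio_sketch() {
  local sock=$1 bdev=$2 want=${3:-100} i=10 reads
  while (( i-- > 0 )); do
    reads=$(./scripts/rpc.py -s "$sock" bdev_get_iostat -b "$bdev" | jq -r '.bdevs[0].num_read_ops')
    (( reads >= want )) && return 0
    sleep 0.25  # same cadence as the trace above
  done
  return 1
}

# e.g.: waitforio_sketch /var/tmp/bdevperf.sock Nvme1n1 100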
00:25:57.649 [2024-04-15 18:12:46.554146] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1262180 is same with the state(5) to be set
[message repeated for tqpair=0x1262180 through 18:12:46.554503]
00:25:57.649 [2024-04-15 18:12:46.555370] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264ad0 is same with the state(5) to be set
[message repeated for tqpair=0x1264ad0 through 18:12:46.556247]
00:25:57.649 [2024-04-15 18:12:46.557493] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1262610 is same with the state(5) to be set
[message repeated for tqpair=0x1262610 through 18:12:46.558398]
00:25:57.650 [2024-04-15 18:12:46.559954] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1262aa0 is same with the state(5) to be set
[message repeated for tqpair=0x1262aa0 through 18:12:46.560825]
00:25:57.650 [2024-04-15 18:12:46.561880] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1262f50 is same with the state(5) to be set
[message repeated for tqpair=0x1262f50 through 18:12:46.562767]
00:25:57.651 [2024-04-15 18:12:46.563506] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12633e0 is same with the state(5) to be set
[message repeated for tqpair=0x12633e0 through 18:12:46.563702]
00:25:57.651 [2024-04-15
18:12:46.563715] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12633e0 is same with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.563727] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12633e0 is same with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.563739] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12633e0 is same with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.563752] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12633e0 is same with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.563765] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12633e0 is same with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.563777] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12633e0 is same with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.563790] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12633e0 is same with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.563802] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12633e0 is same with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.563814] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12633e0 is same with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.563827] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12633e0 is same with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.563839] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12633e0 is same with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.563851] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12633e0 is same with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.563864] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12633e0 is same with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.563877] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12633e0 is same with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.563890] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12633e0 is same with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.563902] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12633e0 is same with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.563915] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12633e0 is same with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.563927] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12633e0 is same with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.563944] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12633e0 is same with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.563957] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12633e0 is same with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.563969] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12633e0 is same with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.563982] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12633e0 is same 
with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.563994] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12633e0 is same with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.564007] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12633e0 is same with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.564019] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12633e0 is same with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.564032] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12633e0 is same with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.564044] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12633e0 is same with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.564063] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12633e0 is same with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.564093] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12633e0 is same with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.564106] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12633e0 is same with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.564120] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12633e0 is same with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.564133] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12633e0 is same with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.564145] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12633e0 is same with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.564158] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12633e0 is same with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.564171] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12633e0 is same with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.564183] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12633e0 is same with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.564196] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12633e0 is same with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.564209] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12633e0 is same with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.564221] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12633e0 is same with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.564234] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12633e0 is same with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.564246] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12633e0 is same with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.564259] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12633e0 is same with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.564272] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12633e0 is same with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.564285] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12633e0 is same with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.564297] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12633e0 is same with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.564314] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12633e0 is same with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.564327] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12633e0 is same with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.564340] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12633e0 is same with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.565325] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1263890 is same with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.565348] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1263890 is same with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.565362] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1263890 is same with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.565375] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1263890 is same with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.565388] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1263890 is same with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.565401] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1263890 is same with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.565414] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1263890 is same with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.565427] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1263890 is same with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.565440] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1263890 is same with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.565452] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1263890 is same with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.565464] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1263890 is same with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.565477] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1263890 is same with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.565490] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1263890 is same with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.565503] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1263890 is same with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.565516] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1263890 is same with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.565528] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1263890 is same with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.565541] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1263890 is same with the 
state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.565555] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1263890 is same with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.565568] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1263890 is same with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.565581] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1263890 is same with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.565593] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1263890 is same with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.565622] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1263890 is same with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.565634] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1263890 is same with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.565647] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1263890 is same with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.565664] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1263890 is same with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.565677] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1263890 is same with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.565689] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1263890 is same with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.565702] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1263890 is same with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.565716] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1263890 is same with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.565728] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1263890 is same with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.565741] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1263890 is same with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.565754] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1263890 is same with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.565766] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1263890 is same with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.565779] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1263890 is same with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.565792] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1263890 is same with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.565805] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1263890 is same with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.565818] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1263890 is same with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.565830] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1263890 is same with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.565843] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1263890 is same with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.565856] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1263890 is same with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.565868] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1263890 is same with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.565881] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1263890 is same with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.565893] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1263890 is same with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.565906] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1263890 is same with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.565918] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1263890 is same with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.565931] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1263890 is same with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.565943] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1263890 is same with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.565956] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1263890 is same with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.565969] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1263890 is same with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.565982] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1263890 is same with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.565994] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1263890 is same with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.566015] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1263890 is same with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.566028] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1263890 is same with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.566040] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1263890 is same with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.566053] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1263890 is same with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.566090] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1263890 is same with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.566104] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1263890 is same with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.566117] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1263890 is same with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.566130] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1263890 is same with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.566142] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1263890 is same with the state(5) to be set 00:25:57.651 [2024-04-15 
18:12:46.566155] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1263890 is same with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.566168] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1263890 is same with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.566180] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1263890 is same with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.568451] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264640 is same with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.568478] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264640 is same with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.568492] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264640 is same with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.568505] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264640 is same with the state(5) to be set 00:25:57.651 [2024-04-15 18:12:46.568518] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264640 is same with the state(5) to be set 00:25:57.652 [2024-04-15 18:12:46.568531] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264640 is same with the state(5) to be set 00:25:57.652 [2024-04-15 18:12:46.568556] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264640 is same with the state(5) to be set 00:25:57.652 [2024-04-15 18:12:46.568568] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264640 is same with the state(5) to be set 00:25:57.652 [2024-04-15 18:12:46.568581] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264640 is same with the state(5) to be set 00:25:57.652 [2024-04-15 18:12:46.568594] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264640 is same with the state(5) to be set 00:25:57.652 [2024-04-15 18:12:46.568606] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264640 is same with the state(5) to be set 00:25:57.652 [2024-04-15 18:12:46.568619] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264640 is same with the state(5) to be set 00:25:57.652 [2024-04-15 18:12:46.568631] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264640 is same with the state(5) to be set 00:25:57.652 [2024-04-15 18:12:46.568644] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264640 is same with the state(5) to be set 00:25:57.652 [2024-04-15 18:12:46.568656] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264640 is same with the state(5) to be set 00:25:57.652 [2024-04-15 18:12:46.568673] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264640 is same with the state(5) to be set 00:25:57.652 [2024-04-15 18:12:46.568686] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264640 is same with the state(5) to be set 00:25:57.652 [2024-04-15 18:12:46.568699] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264640 is same with the state(5) to be set 00:25:57.652 [2024-04-15 18:12:46.568712] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264640 is same 
with the state(5) to be set 00:25:57.652 [2024-04-15 18:12:46.568725] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264640 is same with the state(5) to be set 00:25:57.652 [2024-04-15 18:12:46.568738] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264640 is same with the state(5) to be set 00:25:57.652 [2024-04-15 18:12:46.568751] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264640 is same with the state(5) to be set 00:25:57.652 [2024-04-15 18:12:46.568763] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264640 is same with the state(5) to be set 00:25:57.652 [2024-04-15 18:12:46.568776] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264640 is same with the state(5) to be set 00:25:57.652 [2024-04-15 18:12:46.568788] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264640 is same with the state(5) to be set 00:25:57.652 [2024-04-15 18:12:46.568801] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264640 is same with the state(5) to be set 00:25:57.652 [2024-04-15 18:12:46.568814] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264640 is same with the state(5) to be set 00:25:57.652 [2024-04-15 18:12:46.568826] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264640 is same with the state(5) to be set 00:25:57.652 [2024-04-15 18:12:46.568839] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264640 is same with the state(5) to be set 00:25:57.652 [2024-04-15 18:12:46.568851] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264640 is same with the state(5) to be set 00:25:57.652 [2024-04-15 18:12:46.568865] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264640 is same with the state(5) to be set 00:25:57.652 [2024-04-15 18:12:46.568878] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264640 is same with the state(5) to be set 00:25:57.652 [2024-04-15 18:12:46.568890] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264640 is same with the state(5) to be set 00:25:57.652 [2024-04-15 18:12:46.568903] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264640 is same with the state(5) to be set 00:25:57.652 [2024-04-15 18:12:46.568916] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264640 is same with the state(5) to be set 00:25:57.652 [2024-04-15 18:12:46.568929] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264640 is same with the state(5) to be set 00:25:57.652 [2024-04-15 18:12:46.568942] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264640 is same with the state(5) to be set 00:25:57.652 [2024-04-15 18:12:46.568955] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264640 is same with the state(5) to be set 00:25:57.652 [2024-04-15 18:12:46.568968] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264640 is same with the state(5) to be set 00:25:57.652 [2024-04-15 18:12:46.568981] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264640 is same with the state(5) to be set 00:25:57.652 [2024-04-15 18:12:46.568994] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264640 is same with the state(5) to be set 00:25:57.652 [2024-04-15 18:12:46.569007] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264640 is same with the state(5) to be set 00:25:57.652 [2024-04-15 18:12:46.569023] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264640 is same with the state(5) to be set 00:25:57.652 [2024-04-15 18:12:46.569036] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264640 is same with the state(5) to be set 00:25:57.652 [2024-04-15 18:12:46.569072] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264640 is same with the state(5) to be set 00:25:57.652 [2024-04-15 18:12:46.569086] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264640 is same with the state(5) to be set 00:25:57.652 [2024-04-15 18:12:46.569100] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264640 is same with the state(5) to be set 00:25:57.652 [2024-04-15 18:12:46.569113] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264640 is same with the state(5) to be set 00:25:57.652 [2024-04-15 18:12:46.569126] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264640 is same with the state(5) to be set 00:25:57.652 [2024-04-15 18:12:46.569140] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264640 is same with the state(5) to be set 00:25:57.652 [2024-04-15 18:12:46.569153] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264640 is same with the state(5) to be set 00:25:57.652 [2024-04-15 18:12:46.569166] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264640 is same with the state(5) to be set 00:25:57.652 [2024-04-15 18:12:46.569179] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264640 is same with the state(5) to be set 00:25:57.652 [2024-04-15 18:12:46.569192] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264640 is same with the state(5) to be set 00:25:57.652 [2024-04-15 18:12:46.569205] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264640 is same with the state(5) to be set 00:25:57.652 [2024-04-15 18:12:46.569218] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264640 is same with the state(5) to be set 00:25:57.652 [2024-04-15 18:12:46.569232] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264640 is same with the state(5) to be set 00:25:57.652 [2024-04-15 18:12:46.569244] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264640 is same with the state(5) to be set 00:25:57.652 [2024-04-15 18:12:46.569257] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264640 is same with the state(5) to be set 00:25:57.652 [2024-04-15 18:12:46.569270] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264640 is same with the state(5) to be set 00:25:57.652 [2024-04-15 18:12:46.569283] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264640 is same with the state(5) to be set 00:25:57.652 [2024-04-15 18:12:46.569296] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264640 is same with the 
state(5) to be set 00:25:57.652 [2024-04-15 18:12:46.569308] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264640 is same with the state(5) to be set 00:25:57.652 [2024-04-15 18:12:46.571860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.652 [2024-04-15 18:12:46.571908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.652 [2024-04-15 18:12:46.571940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.652 [2024-04-15 18:12:46.571957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.652 [2024-04-15 18:12:46.571974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.652 [2024-04-15 18:12:46.571989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.652 [2024-04-15 18:12:46.572018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.652 [2024-04-15 18:12:46.572034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.652 [2024-04-15 18:12:46.572051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.652 [2024-04-15 18:12:46.572089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.652 [2024-04-15 18:12:46.572107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.652 [2024-04-15 18:12:46.572123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.652 [2024-04-15 18:12:46.572141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.652 [2024-04-15 18:12:46.572157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.652 [2024-04-15 18:12:46.572174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.652 [2024-04-15 18:12:46.572188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.652 [2024-04-15 18:12:46.572204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.652 [2024-04-15 18:12:46.572219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.652 [2024-04-15 18:12:46.572235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:57.652 [2024-04-15 18:12:46.572249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.652 [2024-04-15 18:12:46.572265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.652 [2024-04-15 18:12:46.572279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.652 [2024-04-15 18:12:46.572295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.652 [2024-04-15 18:12:46.572310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.652 [2024-04-15 18:12:46.572326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.652 [2024-04-15 18:12:46.572341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.652 [2024-04-15 18:12:46.572357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.652 [2024-04-15 18:12:46.572385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.652 [2024-04-15 18:12:46.572403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.652 [2024-04-15 18:12:46.572417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.652 [2024-04-15 18:12:46.572433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.652 [2024-04-15 18:12:46.572455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.652 [2024-04-15 18:12:46.572472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.652 [2024-04-15 18:12:46.572487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.652 [2024-04-15 18:12:46.572504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.652 [2024-04-15 18:12:46.572518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.652 [2024-04-15 18:12:46.572534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.652 [2024-04-15 18:12:46.572548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.652 [2024-04-15 18:12:46.572565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.652 
[2024-04-15 18:12:46.572579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.652 [2024-04-15 18:12:46.572595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.652 [2024-04-15 18:12:46.572610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.652 [2024-04-15 18:12:46.572627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.652 [2024-04-15 18:12:46.572642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.652 [2024-04-15 18:12:46.572658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.652 [2024-04-15 18:12:46.572672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.652 [2024-04-15 18:12:46.572689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.652 [2024-04-15 18:12:46.572704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.652 [2024-04-15 18:12:46.572720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.652 [2024-04-15 18:12:46.572734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.652 [2024-04-15 18:12:46.572750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.652 [2024-04-15 18:12:46.572763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.652 [2024-04-15 18:12:46.572779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.652 [2024-04-15 18:12:46.572793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.652 [2024-04-15 18:12:46.572809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.652 [2024-04-15 18:12:46.572823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.652 [2024-04-15 18:12:46.572842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.652 [2024-04-15 18:12:46.572857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.652 [2024-04-15 18:12:46.572873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.652 [2024-04-15 
18:12:46.572888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.652 [2024-04-15 18:12:46.572904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.652 [2024-04-15 18:12:46.572918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.652 [2024-04-15 18:12:46.572933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.652 [2024-04-15 18:12:46.572947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.652 [2024-04-15 18:12:46.572963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.652 [2024-04-15 18:12:46.572977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.652 [2024-04-15 18:12:46.572993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.652 [2024-04-15 18:12:46.573008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.652 [2024-04-15 18:12:46.573023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.652 [2024-04-15 18:12:46.573055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.652 [2024-04-15 18:12:46.573300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.652 [2024-04-15 18:12:46.573318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.652 [2024-04-15 18:12:46.573334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.652 [2024-04-15 18:12:46.573349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.652 [2024-04-15 18:12:46.573366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.652 [2024-04-15 18:12:46.573381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.652 [2024-04-15 18:12:46.573398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.652 [2024-04-15 18:12:46.573413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.652 [2024-04-15 18:12:46.573430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.652 [2024-04-15 
18:12:46.573445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.652 [2024-04-15 18:12:46.573462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.652 [2024-04-15 18:12:46.573481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.652 [2024-04-15 18:12:46.573499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.652 [2024-04-15 18:12:46.573515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.652 [2024-04-15 18:12:46.573531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.652 [2024-04-15 18:12:46.573547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.652 [2024-04-15 18:12:46.573564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.652 [2024-04-15 18:12:46.573578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.652 [2024-04-15 18:12:46.573595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.652 [2024-04-15 18:12:46.573610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.652 [2024-04-15 18:12:46.573627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.652 [2024-04-15 18:12:46.573641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.652 [2024-04-15 18:12:46.573658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.652 [2024-04-15 18:12:46.573673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.652 [2024-04-15 18:12:46.573690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.652 [2024-04-15 18:12:46.573705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.652 [2024-04-15 18:12:46.573722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.652 [2024-04-15 18:12:46.573736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.652 [2024-04-15 18:12:46.573753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.653 [2024-04-15 
18:12:46.573768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.653 [2024-04-15 18:12:46.573784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.653 [2024-04-15 18:12:46.573799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.653 [2024-04-15 18:12:46.573816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.653 [2024-04-15 18:12:46.573831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.653 [2024-04-15 18:12:46.573847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.653 [2024-04-15 18:12:46.573862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.653 [2024-04-15 18:12:46.573883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.653 [2024-04-15 18:12:46.573898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.653 [2024-04-15 18:12:46.573915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.653 [2024-04-15 18:12:46.573930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.653 [2024-04-15 18:12:46.573947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.653 [2024-04-15 18:12:46.573962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.653 [2024-04-15 18:12:46.573979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.653 [2024-04-15 18:12:46.573995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.653 [2024-04-15 18:12:46.574011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.653 [2024-04-15 18:12:46.574026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.653 [2024-04-15 18:12:46.574043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.653 [2024-04-15 18:12:46.574066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.653 [2024-04-15 18:12:46.574087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.653 [2024-04-15 18:12:46.574102] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.653 [2024-04-15 18:12:46.574119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.653 [2024-04-15 18:12:46.574133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.653 [2024-04-15 18:12:46.574150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.653 [2024-04-15 18:12:46.574165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.653 [2024-04-15 18:12:46.574182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.653 [2024-04-15 18:12:46.574196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.653 [2024-04-15 18:12:46.574212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.653 [2024-04-15 18:12:46.574227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.653 [2024-04-15 18:12:46.574274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:57.653 [2024-04-15 18:12:46.574361] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x218e400 was disconnected and freed. reset controller. 
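All of the condensed tcp.c:1587 lines above come from a single guard in SPDK's NVMe-oF TCP transport: when disconnect paths repeatedly try to move a qpair into the receive state it already holds, the setter logs and returns instead of re-running the transition. Below is a minimal, self-contained sketch of that guard, reconstructed from the message text rather than copied from the SPDK tree; the stand-in struct, the SPDK_ERRLOG stand-in, and the mapping of state 5 to the terminal ERROR recv state in this build are assumptions.

```c
#include <stdio.h>

/* Minimal stand-ins so the sketch compiles on its own; in SPDK these are the
 * transport's private qpair struct and the SPDK_ERRLOG() macro. */
enum nvme_tcp_pdu_recv_state {
	NVME_TCP_PDU_RECV_STATE_AWAIT_PDU_READY = 0,
	/* ... intermediate PDU-receive states elided ... */
	NVME_TCP_PDU_RECV_STATE_ERROR = 5,	/* assumed: the "state(5)" in the log */
};

struct spdk_nvmf_tcp_qpair {
	enum nvme_tcp_pdu_recv_state recv_state;
};

#define SPDK_ERRLOG(...) fprintf(stderr, __VA_ARGS__)

static void
nvmf_tcp_qpair_set_recv_state(struct spdk_nvmf_tcp_qpair *tqpair,
			      enum nvme_tcp_pdu_recv_state state)
{
	if (tqpair->recv_state == state) {
		/* Re-setting the current state is treated as a caller mistake and
		 * only logged; during teardown every path funnels into the ERROR
		 * state, so this fires once per caller per qpair and floods the
		 * log, exactly as condensed above. */
		SPDK_ERRLOG("The recv state of tqpair=%p is same with the state(%d) to be set\n",
			    (void *)tqpair, (int)state);
		return;
	}
	tqpair->recv_state = state;
	/* ... per-state bookkeeping elided ... */
}
```

The surrounding records tell the same story from the host side: every queued WRITE/READ is completed as ABORTED - SQ DELETION because the submission queue went away, spdk_nvme_qpair_process_completions then reports CQ transport error -6 ("No such device or address"), and bdev_nvme frees the qpair and resets the controller.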
00:25:57.653 [2024-04-15 18:12:46.574626] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:57.653 [2024-04-15 18:12:46.574655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.653 [2024-04-15 18:12:46.574671] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:57.653 [2024-04-15 18:12:46.574685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.653 [2024-04-15 18:12:46.574700] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:57.653 [2024-04-15 18:12:46.574715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.653 [2024-04-15 18:12:46.574730] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:57.653 [2024-04-15 18:12:46.574744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.653 [2024-04-15 18:12:46.574758] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2045170 is same with the state(5) to be set 00:25:57.653 [2024-04-15 18:12:46.574813] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:57.653 [2024-04-15 18:12:46.574834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.653 [2024-04-15 18:12:46.574850] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:57.653 [2024-04-15 18:12:46.574865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.653 [2024-04-15 18:12:46.574881] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:57.653 [2024-04-15 18:12:46.574896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.653 [2024-04-15 18:12:46.574911] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:57.653 [2024-04-15 18:12:46.574925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.653 [2024-04-15 18:12:46.574939] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2072920 is same with the state(5) to be set 00:25:57.653 [2024-04-15 18:12:46.574988] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:57.653 [2024-04-15 18:12:46.575009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.653 [2024-04-15 18:12:46.575025] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:57.653 [2024-04-15 18:12:46.575039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.653 [2024-04-15 18:12:46.575054] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:57.653 [2024-04-15 18:12:46.575079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.653 [2024-04-15 18:12:46.575095] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:57.653 [2024-04-15 18:12:46.575109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.653 [2024-04-15 18:12:46.575127] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e550 is same with the state(5) to be set 00:25:57.653 [2024-04-15 18:12:46.575179] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:57.653 [2024-04-15 18:12:46.575200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.653 [2024-04-15 18:12:46.575216] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:57.653 [2024-04-15 18:12:46.575232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.653 [2024-04-15 18:12:46.575247] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:57.653 [2024-04-15 18:12:46.575261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.653 [2024-04-15 18:12:46.575276] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:57.653 [2024-04-15 18:12:46.575290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.653 [2024-04-15 18:12:46.575304] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205d990 is same with the state(5) to be set 00:25:57.653 [2024-04-15 18:12:46.575344] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:57.653 [2024-04-15 18:12:46.575364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.653 [2024-04-15 18:12:46.575380] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:57.653 [2024-04-15 18:12:46.575394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.653 [2024-04-15 18:12:46.575409] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:25:57.653 [2024-04-15 18:12:46.575423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.653 [2024-04-15 18:12:46.575439] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:57.653 [2024-04-15 18:12:46.575453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.653 [2024-04-15 18:12:46.575468] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e6050 is same with the state(5) to be set 00:25:57.653 [2024-04-15 18:12:46.575513] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:57.653 [2024-04-15 18:12:46.575534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.653 [2024-04-15 18:12:46.575550] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:57.653 [2024-04-15 18:12:46.575564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.653 [2024-04-15 18:12:46.575579] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:57.653 [2024-04-15 18:12:46.575593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.653 [2024-04-15 18:12:46.575608] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:57.653 [2024-04-15 18:12:46.575626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.653 [2024-04-15 18:12:46.575640] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203d9b0 is same with the state(5) to be set 00:25:57.653 [2024-04-15 18:12:46.575686] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:57.653 [2024-04-15 18:12:46.575707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.653 [2024-04-15 18:12:46.575723] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:57.653 [2024-04-15 18:12:46.575737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.653 [2024-04-15 18:12:46.575753] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:57.653 [2024-04-15 18:12:46.575767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.653 [2024-04-15 18:12:46.575782] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:57.653 [2024-04-15 18:12:46.575797] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.653 [2024-04-15 18:12:46.575811] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a2500 is same with the state(5) to be set 00:25:57.653 [2024-04-15 18:12:46.575859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:57.653 [2024-04-15 18:12:46.575880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.653 [2024-04-15 18:12:46.575896] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:57.653 [2024-04-15 18:12:46.575910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.653 [2024-04-15 18:12:46.575926] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:57.653 [2024-04-15 18:12:46.575941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.653 [2024-04-15 18:12:46.575955] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:57.653 [2024-04-15 18:12:46.575970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.653 [2024-04-15 18:12:46.575984] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2046590 is same with the state(5) to be set 00:25:57.653 [2024-04-15 18:12:46.576031] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:57.653 [2024-04-15 18:12:46.576051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.653 [2024-04-15 18:12:46.576075] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:57.653 [2024-04-15 18:12:46.576090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.653 [2024-04-15 18:12:46.576106] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:57.653 [2024-04-15 18:12:46.576124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.653 [2024-04-15 18:12:46.576140] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:57.653 [2024-04-15 18:12:46.576154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.653 [2024-04-15 18:12:46.576168] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206a9a0 is same with the state(5) to be set 00:25:57.653 [2024-04-15 18:12:46.576217] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:57.653 
[2024-04-15 18:12:46.576238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.653 [2024-04-15 18:12:46.576254] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:57.653 [2024-04-15 18:12:46.576268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.653 [2024-04-15 18:12:46.576283] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:57.653 [2024-04-15 18:12:46.576297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.653 [2024-04-15 18:12:46.576311] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:57.653 [2024-04-15 18:12:46.576325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.653 [2024-04-15 18:12:46.576339] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0fd70 is same with the state(5) to be set 00:25:57.653 [2024-04-15 18:12:46.576857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.653 [2024-04-15 18:12:46.576881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.653 [2024-04-15 18:12:46.576904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.653 [2024-04-15 18:12:46.576921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.653 [2024-04-15 18:12:46.576938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.653 [2024-04-15 18:12:46.576954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.653 [2024-04-15 18:12:46.576971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.653 [2024-04-15 18:12:46.576987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.653 [2024-04-15 18:12:46.577004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.653 [2024-04-15 18:12:46.577018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.653 [2024-04-15 18:12:46.577035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.653 [2024-04-15 18:12:46.577049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.653 [2024-04-15 18:12:46.577074] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.653 [2024-04-15 18:12:46.577095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.653 [2024-04-15 18:12:46.577113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.653 [2024-04-15 18:12:46.577128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.653 [2024-04-15 18:12:46.577144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.653 [2024-04-15 18:12:46.577159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.653 [2024-04-15 18:12:46.577175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.653 [2024-04-15 18:12:46.577190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.653 [2024-04-15 18:12:46.577207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.653 [2024-04-15 18:12:46.577222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.653 [2024-04-15 18:12:46.577239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.653 [2024-04-15 18:12:46.577254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.653 [2024-04-15 18:12:46.577271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.653 [2024-04-15 18:12:46.577286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.653 [2024-04-15 18:12:46.577303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.654 [2024-04-15 18:12:46.577317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.654 [2024-04-15 18:12:46.577334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.654 [2024-04-15 18:12:46.577349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.654 [2024-04-15 18:12:46.577366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.654 [2024-04-15 18:12:46.577381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.654 [2024-04-15 18:12:46.577399] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.654 [2024-04-15 18:12:46.577414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.654 [2024-04-15 18:12:46.577431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.654 [2024-04-15 18:12:46.577446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.654 [2024-04-15 18:12:46.577462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.654 [2024-04-15 18:12:46.577477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.654 [2024-04-15 18:12:46.577500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.654 [2024-04-15 18:12:46.577516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.654 [2024-04-15 18:12:46.577534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.654 [2024-04-15 18:12:46.577549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.654 [2024-04-15 18:12:46.577565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.654 [2024-04-15 18:12:46.577580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.654 [2024-04-15 18:12:46.577597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.654 [2024-04-15 18:12:46.577612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.654 [2024-04-15 18:12:46.577629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.654 [2024-04-15 18:12:46.577643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.654 [2024-04-15 18:12:46.577660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.654 [2024-04-15 18:12:46.577675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.654 [2024-04-15 18:12:46.577692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.654 [2024-04-15 18:12:46.577706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.654 [2024-04-15 18:12:46.577723] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.654 [2024-04-15 18:12:46.577738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.654 [2024-04-15 18:12:46.577755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.654 [2024-04-15 18:12:46.577769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.654 [2024-04-15 18:12:46.577786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.654 [2024-04-15 18:12:46.577801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.654 [2024-04-15 18:12:46.577818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.654 [2024-04-15 18:12:46.577833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.654 [2024-04-15 18:12:46.577849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.654 [2024-04-15 18:12:46.577864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.654 [2024-04-15 18:12:46.577881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.654 [2024-04-15 18:12:46.577899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.654 [2024-04-15 18:12:46.577917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.654 [2024-04-15 18:12:46.577933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.654 [2024-04-15 18:12:46.577949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.654 [2024-04-15 18:12:46.577964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.654 [2024-04-15 18:12:46.577981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.654 [2024-04-15 18:12:46.577995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.654 [2024-04-15 18:12:46.578012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.654 [2024-04-15 18:12:46.578027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.654 [2024-04-15 18:12:46.578044] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.654 [2024-04-15 18:12:46.578065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.654 [2024-04-15 18:12:46.578083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.654 [2024-04-15 18:12:46.578099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.654 [2024-04-15 18:12:46.578116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.654 [2024-04-15 18:12:46.578131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.654 [2024-04-15 18:12:46.578148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.654 [2024-04-15 18:12:46.578163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.654 [2024-04-15 18:12:46.578180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.654 [2024-04-15 18:12:46.578196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.654 [2024-04-15 18:12:46.578212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.654 [2024-04-15 18:12:46.578227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.654 [2024-04-15 18:12:46.578244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.654 [2024-04-15 18:12:46.578259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.654 [2024-04-15 18:12:46.578276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.654 [2024-04-15 18:12:46.578291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.654 [2024-04-15 18:12:46.578312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.654 [2024-04-15 18:12:46.578328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.654 [2024-04-15 18:12:46.578345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.654 [2024-04-15 18:12:46.578360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.654 [2024-04-15 18:12:46.578377] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.654 [2024-04-15 18:12:46.578392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.654 [2024-04-15 18:12:46.578409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.654 [2024-04-15 18:12:46.578423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.654 [2024-04-15 18:12:46.578441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.654 [2024-04-15 18:12:46.578456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.654 [2024-04-15 18:12:46.578474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.654 [2024-04-15 18:12:46.578489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.654 [2024-04-15 18:12:46.578505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.654 [2024-04-15 18:12:46.578520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.654 [2024-04-15 18:12:46.578537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.654 [2024-04-15 18:12:46.578552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.654 [2024-04-15 18:12:46.578569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.654 [2024-04-15 18:12:46.578584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.654 [2024-04-15 18:12:46.578600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.654 [2024-04-15 18:12:46.578615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.654 [2024-04-15 18:12:46.578632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.654 [2024-04-15 18:12:46.578647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.654 [2024-04-15 18:12:46.578664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.654 [2024-04-15 18:12:46.578679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.654 [2024-04-15 18:12:46.578696] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.654 [2024-04-15 18:12:46.578714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.654 [2024-04-15 18:12:46.578732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.654 [2024-04-15 18:12:46.578747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.654 [2024-04-15 18:12:46.578763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.654 [2024-04-15 18:12:46.578779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.654 [2024-04-15 18:12:46.578796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.654 [2024-04-15 18:12:46.578811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.654 [2024-04-15 18:12:46.578828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.654 [2024-04-15 18:12:46.578843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.654 [2024-04-15 18:12:46.578860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.654 [2024-04-15 18:12:46.578875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.654 [2024-04-15 18:12:46.578891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.654 [2024-04-15 18:12:46.578906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.654 [2024-04-15 18:12:46.578923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.654 [2024-04-15 18:12:46.578938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.654 [2024-04-15 18:12:46.578953] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218cf70 is same with the state(5) to be set 00:25:57.654 [2024-04-15 18:12:46.579032] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x218cf70 was disconnected and freed. reset controller. 
00:25:57.654 [2024-04-15 18:12:46.581965] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:25:57.654 [2024-04-15 18:12:46.582010] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:25:57.654 [2024-04-15 18:12:46.582042] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a2500 (9): Bad file descriptor 00:25:57.654 [2024-04-15 18:12:46.582074] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205d990 (9): Bad file descriptor 00:25:57.654 [2024-04-15 18:12:46.583639] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:57.654 [2024-04-15 18:12:46.583739] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:57.654 [2024-04-15 18:12:46.584001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.654 [2024-04-15 18:12:46.584178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.654 [2024-04-15 18:12:46.584208] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205d990 with addr=10.0.0.2, port=4420 00:25:57.654 [2024-04-15 18:12:46.584228] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205d990 is same with the state(5) to be set 00:25:57.654 [2024-04-15 18:12:46.584385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.654 [2024-04-15 18:12:46.584576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.654 [2024-04-15 18:12:46.584602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a2500 with addr=10.0.0.2, port=4420 00:25:57.654 [2024-04-15 18:12:46.584619] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a2500 is same with the state(5) to be set 00:25:57.654 [2024-04-15 18:12:46.584685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.654 [2024-04-15 18:12:46.584710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.654 [2024-04-15 18:12:46.584740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.654 [2024-04-15 18:12:46.584757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.654 [2024-04-15 18:12:46.584775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.654 [2024-04-15 18:12:46.584791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.654 [2024-04-15 18:12:46.584807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.654 [2024-04-15 18:12:46.584823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.654 [2024-04-15 18:12:46.584840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:57.654 [2024-04-15 18:12:46.584854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.654 [2024-04-15 18:12:46.584871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.654 [2024-04-15 18:12:46.584886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.654 [2024-04-15 18:12:46.584902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.654 [2024-04-15 18:12:46.584917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.654 [2024-04-15 18:12:46.584935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.654 [2024-04-15 18:12:46.584950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.654 [2024-04-15 18:12:46.584966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.654 [2024-04-15 18:12:46.584981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.654 [2024-04-15 18:12:46.584998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.654 [2024-04-15 18:12:46.585013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.654 [2024-04-15 18:12:46.585029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.654 [2024-04-15 18:12:46.585044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.654 [2024-04-15 18:12:46.585081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.654 [2024-04-15 18:12:46.585098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.654 [2024-04-15 18:12:46.585116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.654 [2024-04-15 18:12:46.585131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.654 [2024-04-15 18:12:46.585147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.654 [2024-04-15 18:12:46.585162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.654 [2024-04-15 18:12:46.585179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.654 
[2024-04-15 18:12:46.585194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.654 [2024-04-15 18:12:46.585211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.654 [2024-04-15 18:12:46.585226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.654 [2024-04-15 18:12:46.585242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.655 [2024-04-15 18:12:46.585258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.655 [2024-04-15 18:12:46.585274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.655 [2024-04-15 18:12:46.585289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.655 [2024-04-15 18:12:46.585306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.655 [2024-04-15 18:12:46.585321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.655 [2024-04-15 18:12:46.585338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.655 [2024-04-15 18:12:46.585353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.655 [2024-04-15 18:12:46.585370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.655 [2024-04-15 18:12:46.585385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.655 [2024-04-15 18:12:46.585401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.655 [2024-04-15 18:12:46.585416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.655 [2024-04-15 18:12:46.585434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.655 [2024-04-15 18:12:46.585449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.655 [2024-04-15 18:12:46.585466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.655 [2024-04-15 18:12:46.585485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.655 [2024-04-15 18:12:46.585503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.655 [2024-04-15 
18:12:46.585518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.655 [2024-04-15 18:12:46.585535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.655 [2024-04-15 18:12:46.585549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.655 [2024-04-15 18:12:46.585566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.655 [2024-04-15 18:12:46.585581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.655 [2024-04-15 18:12:46.585598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.655 [2024-04-15 18:12:46.585612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.655 [2024-04-15 18:12:46.585629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.655 [2024-04-15 18:12:46.585644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.655 [2024-04-15 18:12:46.585661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.655 [2024-04-15 18:12:46.585676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.655 [2024-04-15 18:12:46.585693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.655 [2024-04-15 18:12:46.585708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.655 [2024-04-15 18:12:46.585725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.655 [2024-04-15 18:12:46.585739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.655 [2024-04-15 18:12:46.585757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.655 [2024-04-15 18:12:46.585772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.655 [2024-04-15 18:12:46.585788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.655 [2024-04-15 18:12:46.585803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.655 [2024-04-15 18:12:46.585820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.655 [2024-04-15 18:12:46.585835] 
[... log opens mid-record; 29 repeated NOTICE pairs elided (18:12:46.585852 - 18:12:46.586777): nvme_io_qpair_print_command READ sqid:1 cid:28-56 nsid:1 lba:28160-31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:25:57.655 [2024-04-15 18:12:46.586794] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0a090 is same with the state(5) to be set
00:25:57.655 [2024-04-15 18:12:46.586877] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1c0a090 was disconnected and freed. reset controller.
00:25:57.655 [2024-04-15 18:12:46.586953] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
[... 4 further identical "Unexpected PDU type 0x00" errors elided (18:12:46.587030 - 18:12:46.587267) ...]
[... 10 "nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=... (9): Bad file descriptor" errors elided (18:12:46.587375 - 18:12:46.587688), for tqpairs 0x205d990, 0x21a2500, 0x2045170, 0x2072920, 0x205e550, 0x21e6050, 0x203d9b0, 0x2046590, 0x206a9a0 and 0x1c0fd70 ...]
00:25:57.655 [2024-04-15 18:12:46.588961] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:57.655 [2024-04-15 18:12:46.589007] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state
00:25:57.655 [2024-04-15 18:12:46.589026] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed
00:25:57.655 [2024-04-15 18:12:46.589045] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state.
00:25:57.655 [2024-04-15 18:12:46.589073] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state
00:25:57.655 [2024-04-15 18:12:46.589090] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed
00:25:57.655 [2024-04-15 18:12:46.589110] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state.
00:25:57.655 [2024-04-15 18:12:46.589193] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:57.655 [2024-04-15 18:12:46.589217] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:57.655 [2024-04-15 18:12:46.589414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.655 [2024-04-15 18:12:46.589612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.655 [2024-04-15 18:12:46.589649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0fd70 with addr=10.0.0.2, port=4420
00:25:57.655 [2024-04-15 18:12:46.589667] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0fd70 is same with the state(5) to be set
00:25:57.655 [2024-04-15 18:12:46.590014] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0fd70 (9): Bad file descriptor
00:25:57.655 [2024-04-15 18:12:46.590103] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:57.655 [2024-04-15 18:12:46.590126] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:57.655 [2024-04-15 18:12:46.590141] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:57.655 [2024-04-15 18:12:46.590209] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:57.655 [2024-04-15 18:12:46.593053] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:25:57.655 [2024-04-15 18:12:46.593089] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:25:57.655 [2024-04-15 18:12:46.593340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.655 [2024-04-15 18:12:46.593508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.655 [2024-04-15 18:12:46.593535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a2500 with addr=10.0.0.2, port=4420
00:25:57.655 [2024-04-15 18:12:46.593553] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a2500 is same with the state(5) to be set
00:25:57.928 [2024-04-15 18:12:46.593738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.928 [2024-04-15 18:12:46.593905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.928 [2024-04-15 18:12:46.593932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205d990 with addr=10.0.0.2, port=4420
00:25:57.928 [2024-04-15 18:12:46.593949] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205d990 is same with the state(5) to be set
00:25:57.928 [2024-04-15 18:12:46.594008] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a2500 (9): Bad file descriptor
00:25:57.928 [2024-04-15 18:12:46.594033] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205d990 (9): Bad file descriptor
00:25:57.928 [2024-04-15 18:12:46.594097] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state
00:25:57.928 [2024-04-15 18:12:46.594118] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed
00:25:57.928 [2024-04-15 18:12:46.594133] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state.
00:25:57.929 [2024-04-15 18:12:46.594154] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state
00:25:57.929 [2024-04-15 18:12:46.594170] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed
00:25:57.929 [2024-04-15 18:12:46.594184] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state.
00:25:57.929 [2024-04-15 18:12:46.594244] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:57.929 [2024-04-15 18:12:46.594270] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
[... 64 repeated NOTICE pairs elided (18:12:46.597606 - 18:12:46.599711): nvme_io_qpair_print_command READ sqid:1 cid:0-63 nsid:1 lba:16384-24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:25:57.930 [2024-04-15 18:12:46.599726] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0b3d0 is same with the state(5) to be set
[... 64 repeated NOTICE pairs elided (18:12:46.601020 - 18:12:46.603089): READ sqid:1 cid:4-63 nsid:1 lba:25088-32640 len:128 and WRITE sqid:1 cid:0-3 nsid:1 lba:32768-33152 len:128, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:25:57.932 [2024-04-15 18:12:46.603105] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218a950 is same with the state(5) to be set
[... 19 repeated NOTICE pairs elided (18:12:46.604370 - 18:12:46.604975): READ sqid:1 cid:0-18 nsid:1 lba:24576-26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0; the section is cut off mid-record after "READ sqid:1 cid:19 nsid:1" ...]
lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.932 [2024-04-15 18:12:46.605007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.933 [2024-04-15 18:12:46.605023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.933 [2024-04-15 18:12:46.605037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.933 [2024-04-15 18:12:46.605055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.933 [2024-04-15 18:12:46.605092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.933 [2024-04-15 18:12:46.605110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.933 [2024-04-15 18:12:46.605125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.933 [2024-04-15 18:12:46.605142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.933 [2024-04-15 18:12:46.605157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.933 [2024-04-15 18:12:46.605174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.933 [2024-04-15 18:12:46.605189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.933 [2024-04-15 18:12:46.605206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.933 [2024-04-15 18:12:46.605221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.933 [2024-04-15 18:12:46.605237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.933 [2024-04-15 18:12:46.605253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.933 [2024-04-15 18:12:46.605269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.933 [2024-04-15 18:12:46.605285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.933 [2024-04-15 18:12:46.605302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.933 [2024-04-15 18:12:46.605317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.933 [2024-04-15 18:12:46.605334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.933 [2024-04-15 18:12:46.605349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.933 [2024-04-15 18:12:46.605366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.933 [2024-04-15 18:12:46.605381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.933 [2024-04-15 18:12:46.605397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.933 [2024-04-15 18:12:46.605413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.933 [2024-04-15 18:12:46.605430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.933 [2024-04-15 18:12:46.605445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.933 [2024-04-15 18:12:46.605461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.933 [2024-04-15 18:12:46.605476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.933 [2024-04-15 18:12:46.605497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.933 [2024-04-15 18:12:46.605512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.933 [2024-04-15 18:12:46.605529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.933 [2024-04-15 18:12:46.605544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.933 [2024-04-15 18:12:46.605561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.933 [2024-04-15 18:12:46.605575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.933 [2024-04-15 18:12:46.605593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.933 [2024-04-15 18:12:46.605608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.933 [2024-04-15 18:12:46.605625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.933 [2024-04-15 18:12:46.605639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.933 [2024-04-15 18:12:46.605656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:57.933 [2024-04-15 18:12:46.605671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.933 [2024-04-15 18:12:46.605687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.933 [2024-04-15 18:12:46.605702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.933 [2024-04-15 18:12:46.605719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.933 [2024-04-15 18:12:46.605733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.933 [2024-04-15 18:12:46.605750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.933 [2024-04-15 18:12:46.605765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.933 [2024-04-15 18:12:46.605782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.933 [2024-04-15 18:12:46.605797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.933 [2024-04-15 18:12:46.605814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.933 [2024-04-15 18:12:46.605829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.933 [2024-04-15 18:12:46.605847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.933 [2024-04-15 18:12:46.605862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.933 [2024-04-15 18:12:46.605879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.933 [2024-04-15 18:12:46.605898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.933 [2024-04-15 18:12:46.605915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.933 [2024-04-15 18:12:46.605930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.933 [2024-04-15 18:12:46.605947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.933 [2024-04-15 18:12:46.605962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.933 [2024-04-15 18:12:46.605979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:57.933 [2024-04-15 18:12:46.605994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.933 [2024-04-15 18:12:46.606011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.933 [2024-04-15 18:12:46.606026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.934 [2024-04-15 18:12:46.606043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.934 [2024-04-15 18:12:46.606065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.934 [2024-04-15 18:12:46.606084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.934 [2024-04-15 18:12:46.606100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.934 [2024-04-15 18:12:46.606116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.934 [2024-04-15 18:12:46.606131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.934 [2024-04-15 18:12:46.606148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.934 [2024-04-15 18:12:46.606163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.934 [2024-04-15 18:12:46.606180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.934 [2024-04-15 18:12:46.606195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.934 [2024-04-15 18:12:46.606212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.934 [2024-04-15 18:12:46.606228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.934 [2024-04-15 18:12:46.606244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.934 [2024-04-15 18:12:46.606260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.934 [2024-04-15 18:12:46.606277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.934 [2024-04-15 18:12:46.606292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.934 [2024-04-15 18:12:46.606313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.934 [2024-04-15 
18:12:46.606328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.934 [2024-04-15 18:12:46.606345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.934 [2024-04-15 18:12:46.606360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.934 [2024-04-15 18:12:46.606376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.934 [2024-04-15 18:12:46.606391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.934 [2024-04-15 18:12:46.606407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.934 [2024-04-15 18:12:46.606422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.934 [2024-04-15 18:12:46.606438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.934 [2024-04-15 18:12:46.606453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.934 [2024-04-15 18:12:46.606468] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201a0d0 is same with the state(5) to be set 00:25:57.934 [2024-04-15 18:12:46.607701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.934 [2024-04-15 18:12:46.607725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.934 [2024-04-15 18:12:46.607747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.934 [2024-04-15 18:12:46.607763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.934 [2024-04-15 18:12:46.607779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.934 [2024-04-15 18:12:46.607794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.934 [2024-04-15 18:12:46.607812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.934 [2024-04-15 18:12:46.607827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.934 [2024-04-15 18:12:46.607844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.934 [2024-04-15 18:12:46.607858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.934 [2024-04-15 18:12:46.607874] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.934 [2024-04-15 18:12:46.607889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.934 [2024-04-15 18:12:46.607906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.934 [2024-04-15 18:12:46.607920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.934 [2024-04-15 18:12:46.607942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.934 [2024-04-15 18:12:46.607959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.934 [2024-04-15 18:12:46.607977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.934 [2024-04-15 18:12:46.607992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.934 [2024-04-15 18:12:46.608009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.934 [2024-04-15 18:12:46.608023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.934 [2024-04-15 18:12:46.608040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.934 [2024-04-15 18:12:46.608055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.934 [2024-04-15 18:12:46.608080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.934 [2024-04-15 18:12:46.608096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.934 [2024-04-15 18:12:46.608112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.934 [2024-04-15 18:12:46.608127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.934 [2024-04-15 18:12:46.608143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.934 [2024-04-15 18:12:46.608158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.934 [2024-04-15 18:12:46.608175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.934 [2024-04-15 18:12:46.608190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.934 [2024-04-15 18:12:46.608207] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.934 [2024-04-15 18:12:46.608222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.934 [2024-04-15 18:12:46.608238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.934 [2024-04-15 18:12:46.608253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.934 [2024-04-15 18:12:46.608270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.934 [2024-04-15 18:12:46.608284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.934 [2024-04-15 18:12:46.608301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.934 [2024-04-15 18:12:46.608316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.934 [2024-04-15 18:12:46.608333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.934 [2024-04-15 18:12:46.608352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.934 [2024-04-15 18:12:46.608370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.934 [2024-04-15 18:12:46.608385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.934 [2024-04-15 18:12:46.608401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.934 [2024-04-15 18:12:46.608416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.934 [2024-04-15 18:12:46.608433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.934 [2024-04-15 18:12:46.608448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.934 [2024-04-15 18:12:46.608465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.934 [2024-04-15 18:12:46.608479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.934 [2024-04-15 18:12:46.608496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.934 [2024-04-15 18:12:46.608510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.934 [2024-04-15 18:12:46.608527] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.934 [2024-04-15 18:12:46.608542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.934 [2024-04-15 18:12:46.608558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.935 [2024-04-15 18:12:46.608573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.935 [2024-04-15 18:12:46.608589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.935 [2024-04-15 18:12:46.608604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.935 [2024-04-15 18:12:46.608621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.935 [2024-04-15 18:12:46.608636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.935 [2024-04-15 18:12:46.608652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.935 [2024-04-15 18:12:46.608667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.935 [2024-04-15 18:12:46.608683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.935 [2024-04-15 18:12:46.608698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.935 [2024-04-15 18:12:46.608714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.935 [2024-04-15 18:12:46.608728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.935 [2024-04-15 18:12:46.608749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.935 [2024-04-15 18:12:46.608765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.935 [2024-04-15 18:12:46.608781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.935 [2024-04-15 18:12:46.608795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.935 [2024-04-15 18:12:46.608812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.935 [2024-04-15 18:12:46.608828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.935 [2024-04-15 18:12:46.608845] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.935 [2024-04-15 18:12:46.608860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.935 [2024-04-15 18:12:46.608877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.935 [2024-04-15 18:12:46.608892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.935 [2024-04-15 18:12:46.608909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.935 [2024-04-15 18:12:46.608924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.935 [2024-04-15 18:12:46.608941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.935 [2024-04-15 18:12:46.608956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.935 [2024-04-15 18:12:46.608973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.935 [2024-04-15 18:12:46.608988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.935 [2024-04-15 18:12:46.609005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.935 [2024-04-15 18:12:46.609020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.935 [2024-04-15 18:12:46.609037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.935 [2024-04-15 18:12:46.609051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.935 [2024-04-15 18:12:46.609075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.935 [2024-04-15 18:12:46.609091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.935 [2024-04-15 18:12:46.609109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.935 [2024-04-15 18:12:46.609124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.935 [2024-04-15 18:12:46.609141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.935 [2024-04-15 18:12:46.609160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.935 [2024-04-15 18:12:46.609177] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.935 [2024-04-15 18:12:46.609193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.935 [2024-04-15 18:12:46.609209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.935 [2024-04-15 18:12:46.609224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.935 [2024-04-15 18:12:46.609241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.935 [2024-04-15 18:12:46.609256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.935 [2024-04-15 18:12:46.609273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.935 [2024-04-15 18:12:46.609288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.935 [2024-04-15 18:12:46.609305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.935 [2024-04-15 18:12:46.609320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.935 [2024-04-15 18:12:46.609336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.935 [2024-04-15 18:12:46.609351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.935 [2024-04-15 18:12:46.609369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.935 [2024-04-15 18:12:46.609384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.935 [2024-04-15 18:12:46.609400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.935 [2024-04-15 18:12:46.609416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.935 [2024-04-15 18:12:46.609432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.935 [2024-04-15 18:12:46.609447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.935 [2024-04-15 18:12:46.609464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.935 [2024-04-15 18:12:46.609479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.935 [2024-04-15 18:12:46.609496] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.935 [2024-04-15 18:12:46.609511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.935 [2024-04-15 18:12:46.609527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.935 [2024-04-15 18:12:46.609542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.935 [2024-04-15 18:12:46.609563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.935 [2024-04-15 18:12:46.609579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.935 [2024-04-15 18:12:46.609596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.935 [2024-04-15 18:12:46.609611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.935 [2024-04-15 18:12:46.609629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.935 [2024-04-15 18:12:46.609644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.935 [2024-04-15 18:12:46.609661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.935 [2024-04-15 18:12:46.609675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.935 [2024-04-15 18:12:46.609692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.935 [2024-04-15 18:12:46.609707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.935 [2024-04-15 18:12:46.609723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.935 [2024-04-15 18:12:46.609738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.936 [2024-04-15 18:12:46.609755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.936 [2024-04-15 18:12:46.609770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.936 [2024-04-15 18:12:46.609785] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201b530 is same with the state(5) to be set 00:25:57.936 [2024-04-15 18:12:46.611023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.936 [2024-04-15 18:12:46.611048] nvme_qpair.c: 
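Each burst above is one pattern repeated per outstanding command: the driver prints the in-flight READ/WRITE, then its completion with status (00/08), i.e. status code type 0x0 (generic) / status code 0x08 (Command Aborted due to SQ Deletion), which is the expected result when a TCP qpair is torn down with I/O still queued. For triaging logs like this, the following is a minimal sketch of a stand-alone helper (hypothetical, not part of the SPDK repo or this test run) that reads the full console log on stdin and collapses each burst into one summary line per tqpair:

#!/usr/bin/env python3
# Hypothetical triage helper: summarize the nvme_qpair.c NOTICE bursts above.
import re
import sys
from collections import Counter

# Patterns for the three message types seen in this section of the log.
CMD_RE = re.compile(r"nvme_io_qpair_print_command: \*NOTICE\*: (READ|WRITE) "
                    r"sqid:\d+ cid:(\d+) nsid:\d+ lba:(\d+) len:\d+")
CPL_RE = re.compile(r"spdk_nvme_print_completion: \*NOTICE\*: ([A-Z -]+?) "
                    r"\((\w+)/(\w+)\)")
ERR_RE = re.compile(r"nvme_tcp_qpair_set_recv_state: \*ERROR\*: The recv "
                    r"state of tqpair=(0x[0-9a-f]+)")

def bursts(lines):
    """Yield (tqpair, {(opcode, status): count}, min_lba, max_lba) per burst.

    A burst is a run of command/completion pairs terminated by a
    recv-state *ERROR* line, matching the structure of the log above.
    """
    ops, lbas, last_op = Counter(), [], None
    for line in lines:
        m = CMD_RE.search(line)
        if m:
            last_op = m.group(1)          # remember opcode of printed command
            lbas.append(int(m.group(3)))
            continue
        m = CPL_RE.search(line)
        if m and last_op:
            ops[(last_op, m.group(1))] += 1   # pair completion with command
            last_op = None
            continue
        m = ERR_RE.search(line)
        if m and lbas:                    # error line closes the burst
            yield m.group(1), dict(ops), min(lbas), max(lbas)
            ops, lbas, last_op = Counter(), [], None

if __name__ == "__main__":
    for tqpair, ops, lo, hi in bursts(sys.stdin):
        total = sum(ops.values())
        print(f"tqpair={tqpair}: {total} commands aborted (lba {lo}-{hi}): {ops}")

Run as, e.g., python3 nvme_abort_summary.py < console.log (file name illustrative); on the unelided log each ~64-command burst reduces to a single line per tqpair. The log continues below with a fourth burst on another qpair: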
00:25:57.936 [2024-04-15 18:12:46.611023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:57.936 [2024-04-15 18:12:46.611048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
[... 60 identical NOTICE command/completion pairs: READ cid:1-60 (lba:16512-24064), each completed with ABORTED - SQ DELETION (00/08) ...] 
00:25:57.937 [2024-04-15 
18:12:46.613029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.937 [2024-04-15 18:12:46.613044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.937 [2024-04-15 18:12:46.613066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.937 [2024-04-15 18:12:46.613083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.937 [2024-04-15 18:12:46.613100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.937 [2024-04-15 18:12:46.613116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.937 [2024-04-15 18:12:46.613132] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218bae0 is same with the state(5) to be set 00:25:57.937 [2024-04-15 18:12:46.614386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.937 [2024-04-15 18:12:46.614409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.937 [2024-04-15 18:12:46.614432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.937 [2024-04-15 18:12:46.614449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.937 [2024-04-15 18:12:46.614467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.937 [2024-04-15 18:12:46.614482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.937 [2024-04-15 18:12:46.614504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.937 [2024-04-15 18:12:46.614519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.937 [2024-04-15 18:12:46.614536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.937 [2024-04-15 18:12:46.614551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.937 [2024-04-15 18:12:46.614567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.937 [2024-04-15 18:12:46.614582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.938 [2024-04-15 18:12:46.614599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.938 [2024-04-15 18:12:46.614614] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.938 [2024-04-15 18:12:46.614630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.938 [2024-04-15 18:12:46.614644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.938 [2024-04-15 18:12:46.614661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.938 [2024-04-15 18:12:46.614676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.938 [2024-04-15 18:12:46.614693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.938 [2024-04-15 18:12:46.614709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.938 [2024-04-15 18:12:46.614725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.938 [2024-04-15 18:12:46.614740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.938 [2024-04-15 18:12:46.614756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.938 [2024-04-15 18:12:46.614771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.938 [2024-04-15 18:12:46.614788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.938 [2024-04-15 18:12:46.614803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.938 [2024-04-15 18:12:46.614820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.938 [2024-04-15 18:12:46.614834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.938 [2024-04-15 18:12:46.614851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.938 [2024-04-15 18:12:46.614866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.938 [2024-04-15 18:12:46.614884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.938 [2024-04-15 18:12:46.614900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.938 [2024-04-15 18:12:46.614921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.938 [2024-04-15 18:12:46.614937] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.938 [2024-04-15 18:12:46.614953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.938 [2024-04-15 18:12:46.614968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.938 [2024-04-15 18:12:46.614985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.938 [2024-04-15 18:12:46.615000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.938 [2024-04-15 18:12:46.615016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.938 [2024-04-15 18:12:46.615031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.938 [2024-04-15 18:12:46.615048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.938 [2024-04-15 18:12:46.615069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.938 [2024-04-15 18:12:46.615087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.938 [2024-04-15 18:12:46.615102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.938 [2024-04-15 18:12:46.615118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.938 [2024-04-15 18:12:46.615134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.938 [2024-04-15 18:12:46.615150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.938 [2024-04-15 18:12:46.615165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.938 [2024-04-15 18:12:46.615182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.938 [2024-04-15 18:12:46.615197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.938 [2024-04-15 18:12:46.615214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.938 [2024-04-15 18:12:46.615229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.938 [2024-04-15 18:12:46.615246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.938 [2024-04-15 18:12:46.615262] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.938 [2024-04-15 18:12:46.615278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.938 [2024-04-15 18:12:46.615294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.938 [2024-04-15 18:12:46.615312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.938 [2024-04-15 18:12:46.615330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.938 [2024-04-15 18:12:46.615348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.938 [2024-04-15 18:12:46.615363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.938 [2024-04-15 18:12:46.615380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.938 [2024-04-15 18:12:46.615395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.938 [2024-04-15 18:12:46.615412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.938 [2024-04-15 18:12:46.615426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.938 [2024-04-15 18:12:46.615443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.938 [2024-04-15 18:12:46.615458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.938 [2024-04-15 18:12:46.615475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.938 [2024-04-15 18:12:46.615490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.938 [2024-04-15 18:12:46.615506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.938 [2024-04-15 18:12:46.615521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.938 [2024-04-15 18:12:46.615538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.938 [2024-04-15 18:12:46.615553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.938 [2024-04-15 18:12:46.615570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.938 [2024-04-15 18:12:46.615585] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.938 [2024-04-15 18:12:46.615602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.938 [2024-04-15 18:12:46.615617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.939 [2024-04-15 18:12:46.615634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.939 [2024-04-15 18:12:46.615649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.939 [2024-04-15 18:12:46.615665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.939 [2024-04-15 18:12:46.615680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.939 [2024-04-15 18:12:46.615697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.939 [2024-04-15 18:12:46.615712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.939 [2024-04-15 18:12:46.615733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.939 [2024-04-15 18:12:46.615749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.939 [2024-04-15 18:12:46.615765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.939 [2024-04-15 18:12:46.615780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.939 [2024-04-15 18:12:46.615797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.939 [2024-04-15 18:12:46.615812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.939 [2024-04-15 18:12:46.615828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.939 [2024-04-15 18:12:46.615843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.939 [2024-04-15 18:12:46.615860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.939 [2024-04-15 18:12:46.615874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.939 [2024-04-15 18:12:46.615892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.939 [2024-04-15 18:12:46.615907] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.939 [2024-04-15 18:12:46.615923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.939 [2024-04-15 18:12:46.615938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.939 [2024-04-15 18:12:46.615955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.939 [2024-04-15 18:12:46.615971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.939 [2024-04-15 18:12:46.615987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.939 [2024-04-15 18:12:46.616001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.939 [2024-04-15 18:12:46.616018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.939 [2024-04-15 18:12:46.616033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.939 [2024-04-15 18:12:46.616050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.939 [2024-04-15 18:12:46.616079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.939 [2024-04-15 18:12:46.616099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.939 [2024-04-15 18:12:46.616114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.939 [2024-04-15 18:12:46.616130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.939 [2024-04-15 18:12:46.616150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.939 [2024-04-15 18:12:46.616167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.939 [2024-04-15 18:12:46.616181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.939 [2024-04-15 18:12:46.616198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.939 [2024-04-15 18:12:46.616213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.939 [2024-04-15 18:12:46.616230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.939 [2024-04-15 18:12:46.616245] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.939 [2024-04-15 18:12:46.616262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.939 [2024-04-15 18:12:46.616277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.939 [2024-04-15 18:12:46.616293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.939 [2024-04-15 18:12:46.616308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.939 [2024-04-15 18:12:46.616324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.939 [2024-04-15 18:12:46.616339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.939 [2024-04-15 18:12:46.616356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.939 [2024-04-15 18:12:46.616370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.939 [2024-04-15 18:12:46.616387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.939 [2024-04-15 18:12:46.616401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.939 [2024-04-15 18:12:46.616418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.939 [2024-04-15 18:12:46.616433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.939 [2024-04-15 18:12:46.616449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.939 [2024-04-15 18:12:46.616464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.939 [2024-04-15 18:12:46.616479] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2106170 is same with the state(5) to be set 00:25:57.939 [2024-04-15 18:12:46.618785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.939 [2024-04-15 18:12:46.618811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.939 [2024-04-15 18:12:46.618834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.939 [2024-04-15 18:12:46.618856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.939 [2024-04-15 18:12:46.618874] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.939 [2024-04-15 18:12:46.618890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.939 [2024-04-15 18:12:46.618907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.939 [2024-04-15 18:12:46.618923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.939 [2024-04-15 18:12:46.618940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.939 [2024-04-15 18:12:46.618955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.939 [2024-04-15 18:12:46.618973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.939 [2024-04-15 18:12:46.618988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.940 [2024-04-15 18:12:46.619004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.940 [2024-04-15 18:12:46.619019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.940 [2024-04-15 18:12:46.619036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.940 [2024-04-15 18:12:46.619051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.940 [2024-04-15 18:12:46.619075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.940 [2024-04-15 18:12:46.619091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.940 [2024-04-15 18:12:46.619108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.940 [2024-04-15 18:12:46.619122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.940 [2024-04-15 18:12:46.619140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.940 [2024-04-15 18:12:46.619155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.940 [2024-04-15 18:12:46.619171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.940 [2024-04-15 18:12:46.619186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.940 [2024-04-15 18:12:46.619202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 
nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.940 [2024-04-15 18:12:46.619217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.940 [2024-04-15 18:12:46.619235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.940 [2024-04-15 18:12:46.619249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.940 [2024-04-15 18:12:46.619270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.940 [2024-04-15 18:12:46.619286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.940 [2024-04-15 18:12:46.619302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.940 [2024-04-15 18:12:46.619317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.940 [2024-04-15 18:12:46.619333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.940 [2024-04-15 18:12:46.619347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.940 [2024-04-15 18:12:46.619363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.940 [2024-04-15 18:12:46.619378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.940 [2024-04-15 18:12:46.619394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.940 [2024-04-15 18:12:46.619410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.940 [2024-04-15 18:12:46.619426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.940 [2024-04-15 18:12:46.619441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.940 [2024-04-15 18:12:46.619457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.940 [2024-04-15 18:12:46.619472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.940 [2024-04-15 18:12:46.619489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.940 [2024-04-15 18:12:46.619503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.940 [2024-04-15 18:12:46.619519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.940 [2024-04-15 18:12:46.619533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.940 [2024-04-15 18:12:46.619550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.940 [2024-04-15 18:12:46.619564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.940 [2024-04-15 18:12:46.619581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.940 [2024-04-15 18:12:46.619595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.940 [2024-04-15 18:12:46.619612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.940 [2024-04-15 18:12:46.619627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.940 [2024-04-15 18:12:46.619643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.940 [2024-04-15 18:12:46.619662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.940 [2024-04-15 18:12:46.619680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.940 [2024-04-15 18:12:46.619694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.940 [2024-04-15 18:12:46.619711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.940 [2024-04-15 18:12:46.619726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.940 [2024-04-15 18:12:46.619743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.940 [2024-04-15 18:12:46.619758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.940 [2024-04-15 18:12:46.619774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.940 [2024-04-15 18:12:46.619789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.940 [2024-04-15 18:12:46.619805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.940 [2024-04-15 18:12:46.619820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.940 [2024-04-15 18:12:46.619837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:57.940 [2024-04-15 18:12:46.619851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.940 [2024-04-15 18:12:46.619868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.940 [2024-04-15 18:12:46.619883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.940 [2024-04-15 18:12:46.619899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.940 [2024-04-15 18:12:46.619914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.940 [2024-04-15 18:12:46.619931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.940 [2024-04-15 18:12:46.619946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.940 [2024-04-15 18:12:46.619962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.940 [2024-04-15 18:12:46.619977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.940 [2024-04-15 18:12:46.619995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.940 [2024-04-15 18:12:46.620011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.940 [2024-04-15 18:12:46.620027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.940 [2024-04-15 18:12:46.620042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.940 [2024-04-15 18:12:46.620073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.940 [2024-04-15 18:12:46.620092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.940 [2024-04-15 18:12:46.620109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.940 [2024-04-15 18:12:46.620125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.940 [2024-04-15 18:12:46.620141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.940 [2024-04-15 18:12:46.620156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.940 [2024-04-15 18:12:46.620173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:57.940 [2024-04-15 18:12:46.620188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.940 [2024-04-15 18:12:46.620204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.940 [2024-04-15 18:12:46.620219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.940 [2024-04-15 18:12:46.620236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.940 [2024-04-15 18:12:46.620250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.940 [2024-04-15 18:12:46.620267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.940 [2024-04-15 18:12:46.620282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.941 [2024-04-15 18:12:46.620298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.941 [2024-04-15 18:12:46.620313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.941 [2024-04-15 18:12:46.620329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.941 [2024-04-15 18:12:46.620344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.941 [2024-04-15 18:12:46.620361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.941 [2024-04-15 18:12:46.620376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.941 [2024-04-15 18:12:46.620392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.941 [2024-04-15 18:12:46.620408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.941 [2024-04-15 18:12:46.620424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.941 [2024-04-15 18:12:46.620439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.941 [2024-04-15 18:12:46.620456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.941 [2024-04-15 18:12:46.620476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.941 [2024-04-15 18:12:46.620493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.941 [2024-04-15 
18:12:46.620509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.941 [2024-04-15 18:12:46.620525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.941 [2024-04-15 18:12:46.620541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.941 [2024-04-15 18:12:46.620558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.941 [2024-04-15 18:12:46.620572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.941 [2024-04-15 18:12:46.620589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.941 [2024-04-15 18:12:46.620604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.941 [2024-04-15 18:12:46.620620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.941 [2024-04-15 18:12:46.620635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.941 [2024-04-15 18:12:46.620652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.941 [2024-04-15 18:12:46.620667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.941 [2024-04-15 18:12:46.620683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.941 [2024-04-15 18:12:46.620698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.941 [2024-04-15 18:12:46.620715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.941 [2024-04-15 18:12:46.620730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.941 [2024-04-15 18:12:46.620747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.941 [2024-04-15 18:12:46.620762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.941 [2024-04-15 18:12:46.620778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.941 [2024-04-15 18:12:46.620793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.941 [2024-04-15 18:12:46.620810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.941 [2024-04-15 18:12:46.620825] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:57.941 [2024-04-15 18:12:46.620842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:57.941 [2024-04-15 18:12:46.620857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:57.941 [2024-04-15 18:12:46.620877] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21075c0 is same with the state(5) to be set
00:25:57.941 [2024-04-15 18:12:46.622542] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:25:57.941 [2024-04-15 18:12:46.622575] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:25:57.941 [2024-04-15 18:12:46.622596] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:25:57.941 [2024-04-15 18:12:46.622614] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:25:57.941 [2024-04-15 18:12:46.622741] bdev_nvme.c:2869:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:25:57.941 [2024-04-15 18:12:46.622767] bdev_nvme.c:2869:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:25:57.941 [2024-04-15 18:12:46.622794] bdev_nvme.c:2869:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:25:57.941 [2024-04-15 18:12:46.622904] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:25:57.941 [2024-04-15 18:12:46.622930] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:25:57.941 task offset: 25216 on job bdev=Nvme8n1 fails
00:25:57.941
00:25:57.941                                                               Latency(us)
00:25:57.941  Device Information          : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:25:57.941  Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:57.941  Job: Nvme1n1 ended in about 0.97 seconds with error
00:25:57.941  Verification LBA range: start 0x0 length 0x400
00:25:57.941  Nvme1n1  :       0.97     198.37      12.40      66.12       0.00  239373.84   19223.89  264085.81
00:25:57.941  Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:57.941  Job: Nvme2n1 ended in about 0.98 seconds with error
00:25:57.941  Verification LBA range: start 0x0 length 0x400
00:25:57.941  Nvme2n1  :       0.98     130.62       8.16      65.31       0.00  317149.99   24758.04  285834.05
00:25:57.941  Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:57.941  Job: Nvme3n1 ended in about 0.98 seconds with error
00:25:57.941  Verification LBA range: start 0x0 length 0x400
00:25:57.941  Nvme3n1  :       0.98     199.34      12.46      65.09       0.00  230477.79   18544.26  245444.46
00:25:57.941  Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:57.941  Job: Nvme4n1 ended in about 0.99 seconds with error
00:25:57.941  Verification LBA range: start 0x0 length 0x400
00:25:57.941  Nvme4n1  :       0.99     194.60      12.16      64.87       0.00  230314.10   18641.35  262532.36
00:25:57.941  Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:57.941  Job: Nvme5n1 ended in about 0.99 seconds with error
00:25:57.941  Verification LBA range: start 0x0 length 0x400
00:25:57.941  Nvme5n1  :       0.99     134.35       8.40      64.65       0.00  294363.22   20291.89  292047.83
00:25:57.941  Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:57.941  Job: Nvme6n1 ended in about 0.99 seconds with error
00:25:57.941  Verification LBA range: start 0x0 length 0x400
00:25:57.941  Nvme6n1  :       0.99     128.87       8.05      64.43       0.00  296948.12   23107.51  265639.25
00:25:57.941  Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:57.941  Job: Nvme7n1 ended in about 0.96 seconds with error
00:25:57.941  Verification LBA range: start 0x0 length 0x400
00:25:57.941  Nvme7n1  :       0.96     199.81      12.49      66.60       0.00  209922.65   10097.40  265639.25
00:25:57.941  Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:57.941  Job: Nvme8n1 ended in about 0.96 seconds with error
00:25:57.941  Verification LBA range: start 0x0 length 0x400
00:25:57.942  Nvme8n1  :       0.96     200.16      12.51      66.72       0.00  204966.31   22233.69  257872.02
00:25:57.942  Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:57.942  Job: Nvme9n1 ended in about 1.00 seconds with error
00:25:57.942  Verification LBA range: start 0x0 length 0x400
00:25:57.942  Nvme9n1  :       1.00     128.44       8.03      64.22       0.00  279834.42   20583.16  301368.51
00:25:57.942  Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:57.942  Job: Nvme10n1 ended in about 1.00 seconds with error
00:25:57.942  Verification LBA range: start 0x0 length 0x400
00:25:57.942  Nvme10n1 :       1.00     127.87       7.99      63.94       0.00  275326.04   21165.70  262532.36
00:25:57.942 ===================================================================================================================
00:25:57.942  Total    :               1642.43     102.65     651.95       0.00  252940.47   10097.40  301368.51
00:25:57.942 [2024-04-15 18:12:46.652013] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:25:57.942 [2024-04-15 18:12:46.652112] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:25:57.942 [2024-04-15 18:12:46.652524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.942 [2024-04-15 18:12:46.652731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.942 [2024-04-15 18:12:46.652759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21e6050 with addr=10.0.0.2, port=4420
00:25:57.942 [2024-04-15 18:12:46.652781] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e6050 is same with the state(5) to be set
00:25:57.942 [2024-04-15 18:12:46.652965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.942 [2024-04-15 18:12:46.653134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.942 [2024-04-15 18:12:46.653182] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203d9b0 with addr=10.0.0.2, port=4420
00:25:57.942 [2024-04-15 18:12:46.653200] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203d9b0 is same with the state(5) to be set
00:25:57.942 [2024-04-15 18:12:46.653393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.942 [2024-04-15 18:12:46.653561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.942 [2024-04-15 18:12:46.653588] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2046590 with addr=10.0.0.2, port=4420
00:25:57.942 [2024-04-15 18:12:46.653605] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2046590 is same with the state(5) to be set
00:25:57.942 [2024-04-15 18:12:46.653740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.942 [2024-04-15 18:12:46.653885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.942 [2024-04-15 18:12:46.653911] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e550 with addr=10.0.0.2, port=4420
00:25:57.942 [2024-04-15 18:12:46.653928] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e550 is same with the state(5) to be set
00:25:57.942 [2024-04-15 18:12:46.655835] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:57.942 [2024-04-15 18:12:46.655868] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:25:57.942 [2024-04-15 18:12:46.656124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.942 [2024-04-15 18:12:46.656298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.942 [2024-04-15 18:12:46.656326] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x206a9a0 with addr=10.0.0.2, port=4420
00:25:57.942 [2024-04-15 18:12:46.656344] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206a9a0 is same with the state(5) to be set
00:25:57.942 [2024-04-15 18:12:46.656467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.942 [2024-04-15 18:12:46.656657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.942 [2024-04-15 18:12:46.656682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2045170 with addr=10.0.0.2, port=4420
00:25:57.942 [2024-04-15 18:12:46.656709] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2045170 is same with the state(5) to be set
00:25:57.942 [2024-04-15 18:12:46.656836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.942 [2024-04-15 18:12:46.657027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.942 [2024-04-15 18:12:46.657053] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2072920 with addr=10.0.0.2, port=4420
00:25:57.942 [2024-04-15 18:12:46.657077] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2072920 is same with the state(5) to be set
00:25:57.942 [2024-04-15 18:12:46.657104] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21e6050 (9): Bad file descriptor
00:25:57.942 [2024-04-15 18:12:46.657130] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203d9b0 (9): Bad file descriptor
00:25:57.942 [2024-04-15 18:12:46.657148] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2046590 (9): Bad file descriptor
00:25:57.942 [2024-04-15 18:12:46.657166] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e550 (9): Bad file descriptor
00:25:57.942 [2024-04-15 18:12:46.657219] bdev_nvme.c:2869:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
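As a quick sanity check on the bdevperf summary above: each job issues 64 KiB (65536-byte) I/Os, so the MiB/s column should equal IOPS * 65536 / 2^20, i.e. IOPS / 16. For Nvme1n1 that is 198.37 / 16 = 12.40 MiB/s, which matches the table. A minimal C sketch of the same arithmetic, with values copied from the log (illustrative only, not SPDK code):

    #include <stdio.h>

    /* bdevperf-style throughput: MiB/s from IOPS and the per-I/O size in bytes. */
    static double mib_per_sec(double iops, unsigned io_size)
    {
        return iops * io_size / (1024.0 * 1024.0);
    }

    int main(void)
    {
        /* Values taken from the Nvme1n1 and Nvme8n1 rows of the summary. */
        printf("Nvme1n1: %.2f MiB/s\n", mib_per_sec(198.37, 65536)); /* ~12.40 */
        printf("Nvme8n1: %.2f MiB/s\n", mib_per_sec(200.16, 65536)); /* ~12.51 */
        return 0;
    }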
00:25:57.942 [2024-04-15 18:12:46.657249] bdev_nvme.c:2869:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:57.942 [2024-04-15 18:12:46.657270] bdev_nvme.c:2869:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:57.942 [2024-04-15 18:12:46.657290] bdev_nvme.c:2869:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:57.942 [2024-04-15 18:12:46.657309] bdev_nvme.c:2869:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:57.942 [2024-04-15 18:12:46.657397] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:25:57.942 [2024-04-15 18:12:46.657616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.942 [2024-04-15 18:12:46.657812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.942 [2024-04-15 18:12:46.657838] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0fd70 with addr=10.0.0.2, port=4420 00:25:57.942 [2024-04-15 18:12:46.657855] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0fd70 is same with the state(5) to be set 00:25:57.942 [2024-04-15 18:12:46.658037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.942 [2024-04-15 18:12:46.658232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.942 [2024-04-15 18:12:46.658259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205d990 with addr=10.0.0.2, port=4420 00:25:57.942 [2024-04-15 18:12:46.658276] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205d990 is same with the state(5) to be set 00:25:57.942 [2024-04-15 18:12:46.658295] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x206a9a0 (9): Bad file descriptor 00:25:57.942 [2024-04-15 18:12:46.658315] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2045170 (9): Bad file descriptor 00:25:57.942 [2024-04-15 18:12:46.658333] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2072920 (9): Bad file descriptor 00:25:57.942 [2024-04-15 18:12:46.658352] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:25:57.942 [2024-04-15 18:12:46.658367] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:25:57.942 [2024-04-15 18:12:46.658384] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:25:57.942 [2024-04-15 18:12:46.658405] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:25:57.942 [2024-04-15 18:12:46.658426] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:25:57.942 [2024-04-15 18:12:46.658440] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 
00:25:57.942 [2024-04-15 18:12:46.658458] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:25:57.942 [2024-04-15 18:12:46.658473] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:25:57.942 [2024-04-15 18:12:46.658486] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:25:57.942 [2024-04-15 18:12:46.658503] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:25:57.942 [2024-04-15 18:12:46.658518] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:25:57.942 [2024-04-15 18:12:46.658532] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:25:57.942 [2024-04-15 18:12:46.658636] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:57.942 [2024-04-15 18:12:46.658658] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:57.942 [2024-04-15 18:12:46.658671] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:57.942 [2024-04-15 18:12:46.658684] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:57.942 [2024-04-15 18:12:46.658867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.942 [2024-04-15 18:12:46.659055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.942 [2024-04-15 18:12:46.659142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a2500 with addr=10.0.0.2, port=4420 00:25:57.942 [2024-04-15 18:12:46.659160] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a2500 is same with the state(5) to be set 00:25:57.942 [2024-04-15 18:12:46.659179] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0fd70 (9): Bad file descriptor 00:25:57.942 [2024-04-15 18:12:46.659198] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205d990 (9): Bad file descriptor 00:25:57.942 [2024-04-15 18:12:46.659215] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:25:57.942 [2024-04-15 18:12:46.659229] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:25:57.942 [2024-04-15 18:12:46.659242] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:25:57.942 [2024-04-15 18:12:46.659261] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:25:57.942 [2024-04-15 18:12:46.659276] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:25:57.942 [2024-04-15 18:12:46.659290] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 
00:25:57.942 [2024-04-15 18:12:46.659306] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:25:57.942 [2024-04-15 18:12:46.659321] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:25:57.942 [2024-04-15 18:12:46.659334] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:25:57.942 [2024-04-15 18:12:46.659377] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:57.942 [2024-04-15 18:12:46.659396] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:57.942 [2024-04-15 18:12:46.659409] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:57.943 [2024-04-15 18:12:46.659426] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a2500 (9): Bad file descriptor 00:25:57.943 [2024-04-15 18:12:46.659465] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:57.943 [2024-04-15 18:12:46.659481] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:57.943 [2024-04-15 18:12:46.659494] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:57.943 [2024-04-15 18:12:46.660459] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:25:57.943 [2024-04-15 18:12:46.660485] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:25:57.943 [2024-04-15 18:12:46.660500] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:25:57.943 [2024-04-15 18:12:46.660548] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:57.943 [2024-04-15 18:12:46.660578] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:57.943 [2024-04-15 18:12:46.660591] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:25:57.943 [2024-04-15 18:12:46.660604] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:25:57.943 [2024-04-15 18:12:46.660618] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:25:57.943 [2024-04-15 18:12:46.660673] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
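[Editor's note] The error cascade above is the expected outcome of shutdown test tc3: the target application is stopped while bdevperf still has ten 64-deep verify jobs in flight, so every reconnect attempt to 10.0.0.2:4420 is refused (connect() errno 111) and each of cnode1 through cnode10 ends up in failed state. Not part of the harness, but a quick way to decode errno 111 and confirm the listener really is gone:

    python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
    # ECONNREFUSED - Connection refused
    nc -z -w 1 10.0.0.2 4420 || echo 'no listener on 10.0.0.2:4420, matching the posix.c errors above'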
00:25:58.224 18:12:47 -- target/shutdown.sh@135 -- # nvmfpid= 00:25:58.224 18:12:47 -- target/shutdown.sh@138 -- # sleep 1 00:25:59.603 18:12:48 -- target/shutdown.sh@141 -- # kill -9 3395795 00:25:59.603 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 141: kill: (3395795) - No such process 00:25:59.603 18:12:48 -- target/shutdown.sh@141 -- # true 00:25:59.603 18:12:48 -- target/shutdown.sh@143 -- # stoptarget 00:25:59.603 18:12:48 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:25:59.603 18:12:48 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:25:59.603 18:12:48 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:59.603 18:12:48 -- target/shutdown.sh@45 -- # nvmftestfini 00:25:59.603 18:12:48 -- nvmf/common.sh@477 -- # nvmfcleanup 00:25:59.603 18:12:48 -- nvmf/common.sh@117 -- # sync 00:25:59.603 18:12:48 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:59.603 18:12:48 -- nvmf/common.sh@120 -- # set +e 00:25:59.603 18:12:48 -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:59.603 18:12:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:59.603 rmmod nvme_tcp 00:25:59.603 rmmod nvme_fabrics 00:25:59.603 rmmod nvme_keyring 00:25:59.603 18:12:48 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:59.603 18:12:48 -- nvmf/common.sh@124 -- # set -e 00:25:59.603 18:12:48 -- nvmf/common.sh@125 -- # return 0 00:25:59.603 18:12:48 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:25:59.603 18:12:48 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:25:59.603 18:12:48 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:25:59.603 18:12:48 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:25:59.603 18:12:48 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:59.603 18:12:48 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:59.603 18:12:48 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:59.603 18:12:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:59.603 18:12:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:01.515 18:12:50 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:01.515 00:26:01.515 real 0m7.850s 00:26:01.515 user 0m20.178s 00:26:01.515 sys 0m1.620s 00:26:01.515 18:12:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:01.515 18:12:50 -- common/autotest_common.sh@10 -- # set +x 00:26:01.515 ************************************ 00:26:01.515 END TEST nvmf_shutdown_tc3 00:26:01.515 ************************************ 00:26:01.515 18:12:50 -- target/shutdown.sh@150 -- # trap - SIGINT SIGTERM EXIT 00:26:01.515 00:26:01.515 real 0m29.116s 00:26:01.515 user 1m22.269s 00:26:01.515 sys 0m7.105s 00:26:01.515 18:12:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:01.515 18:12:50 -- common/autotest_common.sh@10 -- # set +x 00:26:01.515 ************************************ 00:26:01.515 END TEST nvmf_shutdown 00:26:01.515 ************************************ 00:26:01.515 18:12:50 -- nvmf/nvmf.sh@84 -- # timing_exit target 00:26:01.515 18:12:50 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:01.515 18:12:50 -- common/autotest_common.sh@10 -- # set +x 00:26:01.515 18:12:50 -- nvmf/nvmf.sh@86 -- # timing_enter host 00:26:01.515 18:12:50 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:01.515 18:12:50 -- common/autotest_common.sh@10 -- # set +x 00:26:01.515 
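[Editor's note] Condensed, the stoptarget/nvmftestfini teardown traced above amounts to the following sketch. $SPDK stands in for the workspace spdk checkout; the kill on shutdown.sh line 141 is allowed to fail ("No such process") because the target was already killed mid-test:

    kill -9 "$nvmfpid" 2>/dev/null || true      # tolerate an already-dead target
    rm -f ./local-job0-0-verify.state
    rm -rf "$SPDK/test/nvmf/target/bdevperf.conf" "$SPDK/test/nvmf/target/rpcs.txt"
    sync
    modprobe -v -r nvme-tcp                     # rmmod output above: nvme_tcp, nvme_fabrics, nvme_keyring
    ip -4 addr flush cvl_0_1                    # drop the initiator-side test address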
18:12:50 -- nvmf/nvmf.sh@88 -- # [[ 0 -eq 0 ]] 00:26:01.515 18:12:50 -- nvmf/nvmf.sh@89 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:26:01.515 18:12:50 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:26:01.515 18:12:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:01.515 18:12:50 -- common/autotest_common.sh@10 -- # set +x 00:26:01.515 ************************************ 00:26:01.515 START TEST nvmf_multicontroller 00:26:01.515 ************************************ 00:26:01.515 18:12:50 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:26:01.825 * Looking for test storage... 00:26:01.825 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:01.825 18:12:50 -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:01.825 18:12:50 -- nvmf/common.sh@7 -- # uname -s 00:26:01.825 18:12:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:01.825 18:12:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:01.825 18:12:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:01.825 18:12:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:01.825 18:12:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:01.825 18:12:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:01.825 18:12:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:01.825 18:12:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:01.825 18:12:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:01.825 18:12:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:01.825 18:12:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:26:01.825 18:12:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:26:01.825 18:12:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:01.825 18:12:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:01.825 18:12:50 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:01.825 18:12:50 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:01.825 18:12:50 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:01.825 18:12:50 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:01.825 18:12:50 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:01.825 18:12:50 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:01.825 18:12:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:01.825 18:12:50 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:01.825 18:12:50 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:01.825 18:12:50 -- paths/export.sh@5 -- # export PATH 00:26:01.825 18:12:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:01.825 18:12:50 -- nvmf/common.sh@47 -- # : 0 00:26:01.825 18:12:50 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:01.825 18:12:50 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:01.825 18:12:50 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:01.825 18:12:50 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:01.825 18:12:50 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:01.825 18:12:50 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:01.825 18:12:50 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:01.825 18:12:50 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:01.825 18:12:50 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:01.825 18:12:50 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:01.825 18:12:50 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:26:01.825 18:12:50 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:26:01.825 18:12:50 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:01.825 18:12:50 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:26:01.825 18:12:50 -- host/multicontroller.sh@23 -- # nvmftestinit 00:26:01.825 18:12:50 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:26:01.825 18:12:50 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:01.825 18:12:50 -- nvmf/common.sh@437 -- # prepare_net_devs 00:26:01.825 18:12:50 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:26:01.825 18:12:50 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:26:01.826 18:12:50 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:01.826 18:12:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:01.826 18:12:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
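[Editor's note] The NVME_HOSTNQN/NVME_HOSTID pair in the variable dump above is not hard-coded; common.sh generates it through nvme-cli on each run:

    nvme gen-hostnqn
    # prints nqn.2014-08.org.nvmexpress:uuid:<host-derived-or-random uuid>,
    # here the cd6acfbe-4794-e311-a299-001e67a97b02 value also reused as NVME_HOSTID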
00:26:01.826 18:12:50 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:26:01.826 18:12:50 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:26:01.826 18:12:50 -- nvmf/common.sh@285 -- # xtrace_disable 00:26:01.826 18:12:50 -- common/autotest_common.sh@10 -- # set +x 00:26:03.737 18:12:52 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:03.737 18:12:52 -- nvmf/common.sh@291 -- # pci_devs=() 00:26:03.737 18:12:52 -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:03.737 18:12:52 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:03.737 18:12:52 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:03.737 18:12:52 -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:03.737 18:12:52 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:03.737 18:12:52 -- nvmf/common.sh@295 -- # net_devs=() 00:26:03.737 18:12:52 -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:03.737 18:12:52 -- nvmf/common.sh@296 -- # e810=() 00:26:03.737 18:12:52 -- nvmf/common.sh@296 -- # local -ga e810 00:26:03.737 18:12:52 -- nvmf/common.sh@297 -- # x722=() 00:26:03.737 18:12:52 -- nvmf/common.sh@297 -- # local -ga x722 00:26:03.737 18:12:52 -- nvmf/common.sh@298 -- # mlx=() 00:26:03.737 18:12:52 -- nvmf/common.sh@298 -- # local -ga mlx 00:26:03.737 18:12:52 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:03.737 18:12:52 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:03.737 18:12:52 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:03.737 18:12:52 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:03.737 18:12:52 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:03.737 18:12:52 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:03.737 18:12:52 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:03.737 18:12:52 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:03.737 18:12:52 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:03.737 18:12:52 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:03.737 18:12:52 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:03.737 18:12:52 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:03.737 18:12:52 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:03.737 18:12:52 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:03.738 18:12:52 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:03.738 18:12:52 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:03.738 18:12:52 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:03.738 18:12:52 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:03.738 18:12:52 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:26:03.738 Found 0000:84:00.0 (0x8086 - 0x159b) 00:26:03.738 18:12:52 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:03.738 18:12:52 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:03.738 18:12:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:03.738 18:12:52 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:03.738 18:12:52 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:03.738 18:12:52 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:03.738 18:12:52 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:26:03.738 Found 0000:84:00.1 (0x8086 - 0x159b) 00:26:03.738 18:12:52 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:26:03.738 18:12:52 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:03.738 18:12:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:03.738 18:12:52 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:03.738 18:12:52 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:03.738 18:12:52 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:03.738 18:12:52 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:03.738 18:12:52 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:03.738 18:12:52 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:03.738 18:12:52 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:03.738 18:12:52 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:26:03.738 18:12:52 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:03.738 18:12:52 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:26:03.738 Found net devices under 0000:84:00.0: cvl_0_0 00:26:03.738 18:12:52 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:26:03.738 18:12:52 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:03.738 18:12:52 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:03.738 18:12:52 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:26:03.738 18:12:52 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:03.738 18:12:52 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:26:03.738 Found net devices under 0000:84:00.1: cvl_0_1 00:26:03.738 18:12:52 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:26:03.738 18:12:52 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:26:03.738 18:12:52 -- nvmf/common.sh@403 -- # is_hw=yes 00:26:03.738 18:12:52 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:26:03.738 18:12:52 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:26:03.738 18:12:52 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:26:03.738 18:12:52 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:03.738 18:12:52 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:03.738 18:12:52 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:03.738 18:12:52 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:03.738 18:12:52 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:03.738 18:12:52 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:03.738 18:12:52 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:03.738 18:12:52 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:03.738 18:12:52 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:03.738 18:12:52 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:03.738 18:12:52 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:03.738 18:12:52 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:03.738 18:12:52 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:03.996 18:12:52 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:03.996 18:12:52 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:03.996 18:12:52 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:03.996 18:12:52 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:03.996 18:12:52 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:03.996 18:12:52 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp 
--dport 4420 -j ACCEPT 00:26:03.996 18:12:52 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:03.996 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:03.996 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.147 ms 00:26:03.996 00:26:03.996 --- 10.0.0.2 ping statistics --- 00:26:03.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:03.996 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:26:03.996 18:12:52 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:03.996 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:03.996 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:26:03.996 00:26:03.996 --- 10.0.0.1 ping statistics --- 00:26:03.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:03.996 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:26:03.996 18:12:52 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:03.996 18:12:52 -- nvmf/common.sh@411 -- # return 0 00:26:03.996 18:12:52 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:26:03.996 18:12:52 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:03.996 18:12:52 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:26:03.996 18:12:52 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:26:03.996 18:12:52 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:03.996 18:12:52 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:26:03.996 18:12:52 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:26:03.996 18:12:52 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:26:03.996 18:12:52 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:26:03.996 18:12:52 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:03.996 18:12:52 -- common/autotest_common.sh@10 -- # set +x 00:26:03.996 18:12:52 -- nvmf/common.sh@470 -- # nvmfpid=3398344 00:26:03.996 18:12:52 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:03.996 18:12:52 -- nvmf/common.sh@471 -- # waitforlisten 3398344 00:26:03.996 18:12:52 -- common/autotest_common.sh@817 -- # '[' -z 3398344 ']' 00:26:03.996 18:12:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:03.996 18:12:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:03.996 18:12:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:03.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:03.996 18:12:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:03.996 18:12:52 -- common/autotest_common.sh@10 -- # set +x 00:26:03.996 [2024-04-15 18:12:52.876363] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:26:03.996 [2024-04-15 18:12:52.876454] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:03.996 EAL: No free 2048 kB hugepages reported on node 1 00:26:04.254 [2024-04-15 18:12:52.953033] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:04.254 [2024-04-15 18:12:53.046673] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
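[Editor's note] Pulled out of the trace above, this is the loopback topology nvmf_tcp_init builds from the two E810 ports: cvl_0_0 moves into a fresh network namespace as the target side, cvl_0_1 stays in the root namespace as the initiator side, and the two ping runs (0.147 ms and 0.115 ms) prove the path before any NVMe/TCP traffic flows:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                          # initiator -> target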
00:26:04.254 [2024-04-15 18:12:53.046740] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:04.254 [2024-04-15 18:12:53.046758] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:04.254 [2024-04-15 18:12:53.046772] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:04.254 [2024-04-15 18:12:53.046784] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:04.254 [2024-04-15 18:12:53.046875] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:04.254 [2024-04-15 18:12:53.046927] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:04.254 [2024-04-15 18:12:53.046930] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:04.254 18:12:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:04.254 18:12:53 -- common/autotest_common.sh@850 -- # return 0 00:26:04.254 18:12:53 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:26:04.254 18:12:53 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:04.254 18:12:53 -- common/autotest_common.sh@10 -- # set +x 00:26:04.254 18:12:53 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:04.254 18:12:53 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:04.254 18:12:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:04.254 18:12:53 -- common/autotest_common.sh@10 -- # set +x 00:26:04.254 [2024-04-15 18:12:53.188394] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:04.254 18:12:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:04.254 18:12:53 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:04.254 18:12:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:04.254 18:12:53 -- common/autotest_common.sh@10 -- # set +x 00:26:04.512 Malloc0 00:26:04.512 18:12:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:04.512 18:12:53 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:04.512 18:12:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:04.512 18:12:53 -- common/autotest_common.sh@10 -- # set +x 00:26:04.512 18:12:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:04.512 18:12:53 -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:04.512 18:12:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:04.512 18:12:53 -- common/autotest_common.sh@10 -- # set +x 00:26:04.512 18:12:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:04.512 18:12:53 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:04.512 18:12:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:04.512 18:12:53 -- common/autotest_common.sh@10 -- # set +x 00:26:04.512 [2024-04-15 18:12:53.249212] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:04.512 18:12:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:04.512 18:12:53 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:04.512 18:12:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:04.512 18:12:53 
-- common/autotest_common.sh@10 -- # set +x 00:26:04.512 [2024-04-15 18:12:53.257117] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:04.512 18:12:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:04.512 18:12:53 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:04.512 18:12:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:04.512 18:12:53 -- common/autotest_common.sh@10 -- # set +x 00:26:04.512 Malloc1 00:26:04.512 18:12:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:04.512 18:12:53 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:26:04.512 18:12:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:04.512 18:12:53 -- common/autotest_common.sh@10 -- # set +x 00:26:04.512 18:12:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:04.512 18:12:53 -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:26:04.512 18:12:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:04.512 18:12:53 -- common/autotest_common.sh@10 -- # set +x 00:26:04.512 18:12:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:04.512 18:12:53 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:26:04.512 18:12:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:04.512 18:12:53 -- common/autotest_common.sh@10 -- # set +x 00:26:04.512 18:12:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:04.512 18:12:53 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:26:04.512 18:12:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:04.512 18:12:53 -- common/autotest_common.sh@10 -- # set +x 00:26:04.512 18:12:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:04.512 18:12:53 -- host/multicontroller.sh@44 -- # bdevperf_pid=3398366 00:26:04.512 18:12:53 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:04.512 18:12:53 -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:26:04.512 18:12:53 -- host/multicontroller.sh@47 -- # waitforlisten 3398366 /var/tmp/bdevperf.sock 00:26:04.512 18:12:53 -- common/autotest_common.sh@817 -- # '[' -z 3398366 ']' 00:26:04.512 18:12:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:04.512 18:12:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:04.512 18:12:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:04.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
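[Editor's note] Started with -z, bdevperf sits idle and is driven entirely over /var/tmp/bdevperf.sock. The equivalent by hand, mirroring the rpc_cmd calls that follow (the rpc.py path assumes the standard SPDK layout; the bdevperf.py path is the one invoked at multicontroller.sh@95 below):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK/scripts/rpc.py" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests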
00:26:04.512 18:12:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:04.512 18:12:53 -- common/autotest_common.sh@10 -- # set +x 00:26:04.771 18:12:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:04.771 18:12:53 -- common/autotest_common.sh@850 -- # return 0 00:26:04.771 18:12:53 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:26:04.771 18:12:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:04.772 18:12:53 -- common/autotest_common.sh@10 -- # set +x 00:26:05.031 NVMe0n1 00:26:05.031 18:12:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:05.031 18:12:53 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:05.031 18:12:53 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:26:05.031 18:12:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:05.031 18:12:53 -- common/autotest_common.sh@10 -- # set +x 00:26:05.031 18:12:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:05.031 1 00:26:05.031 18:12:53 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:26:05.031 18:12:53 -- common/autotest_common.sh@638 -- # local es=0 00:26:05.031 18:12:53 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:26:05.031 18:12:53 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:26:05.031 18:12:53 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:26:05.031 18:12:53 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:26:05.031 18:12:53 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:26:05.031 18:12:53 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:26:05.031 18:12:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:05.031 18:12:53 -- common/autotest_common.sh@10 -- # set +x 00:26:05.031 request: 00:26:05.031 { 00:26:05.031 "name": "NVMe0", 00:26:05.031 "trtype": "tcp", 00:26:05.031 "traddr": "10.0.0.2", 00:26:05.031 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:26:05.031 "hostaddr": "10.0.0.2", 00:26:05.031 "hostsvcid": "60000", 00:26:05.031 "adrfam": "ipv4", 00:26:05.031 "trsvcid": "4420", 00:26:05.031 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:05.031 "method": "bdev_nvme_attach_controller", 00:26:05.031 "req_id": 1 00:26:05.031 } 00:26:05.031 Got JSON-RPC error response 00:26:05.031 response: 00:26:05.031 { 00:26:05.031 "code": -114, 00:26:05.031 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:26:05.031 } 00:26:05.031 18:12:53 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:26:05.031 18:12:53 -- common/autotest_common.sh@641 -- # es=1 00:26:05.031 18:12:53 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:26:05.031 18:12:53 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:26:05.031 18:12:53 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:26:05.032 18:12:53 -- host/multicontroller.sh@65 -- 
# NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:26:05.032 18:12:53 -- common/autotest_common.sh@638 -- # local es=0 00:26:05.032 18:12:53 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:26:05.032 18:12:53 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:26:05.032 18:12:53 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:26:05.032 18:12:53 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:26:05.032 18:12:53 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:26:05.032 18:12:53 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:26:05.032 18:12:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:05.032 18:12:53 -- common/autotest_common.sh@10 -- # set +x 00:26:05.032 request: 00:26:05.032 { 00:26:05.032 "name": "NVMe0", 00:26:05.032 "trtype": "tcp", 00:26:05.032 "traddr": "10.0.0.2", 00:26:05.032 "hostaddr": "10.0.0.2", 00:26:05.032 "hostsvcid": "60000", 00:26:05.032 "adrfam": "ipv4", 00:26:05.032 "trsvcid": "4420", 00:26:05.032 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:05.032 "method": "bdev_nvme_attach_controller", 00:26:05.032 "req_id": 1 00:26:05.032 } 00:26:05.032 Got JSON-RPC error response 00:26:05.032 response: 00:26:05.032 { 00:26:05.032 "code": -114, 00:26:05.032 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:26:05.032 } 00:26:05.032 18:12:53 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:26:05.032 18:12:53 -- common/autotest_common.sh@641 -- # es=1 00:26:05.032 18:12:53 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:26:05.032 18:12:53 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:26:05.032 18:12:53 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:26:05.032 18:12:53 -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:26:05.032 18:12:53 -- common/autotest_common.sh@638 -- # local es=0 00:26:05.032 18:12:53 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:26:05.032 18:12:53 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:26:05.032 18:12:53 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:26:05.032 18:12:53 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:26:05.032 18:12:53 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:26:05.032 18:12:53 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:26:05.032 18:12:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:05.032 18:12:53 -- common/autotest_common.sh@10 -- # set +x 00:26:05.032 request: 00:26:05.032 { 00:26:05.032 "name": "NVMe0", 00:26:05.032 "trtype": "tcp", 00:26:05.032 "traddr": "10.0.0.2", 00:26:05.032 "hostaddr": 
"10.0.0.2", 00:26:05.032 "hostsvcid": "60000", 00:26:05.032 "adrfam": "ipv4", 00:26:05.032 "trsvcid": "4420", 00:26:05.032 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:05.032 "multipath": "disable", 00:26:05.032 "method": "bdev_nvme_attach_controller", 00:26:05.032 "req_id": 1 00:26:05.032 } 00:26:05.032 Got JSON-RPC error response 00:26:05.032 response: 00:26:05.032 { 00:26:05.032 "code": -114, 00:26:05.032 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:26:05.032 } 00:26:05.032 18:12:53 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:26:05.032 18:12:53 -- common/autotest_common.sh@641 -- # es=1 00:26:05.032 18:12:53 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:26:05.032 18:12:53 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:26:05.032 18:12:53 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:26:05.032 18:12:53 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:26:05.032 18:12:53 -- common/autotest_common.sh@638 -- # local es=0 00:26:05.032 18:12:53 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:26:05.032 18:12:53 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:26:05.032 18:12:53 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:26:05.032 18:12:53 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:26:05.032 18:12:53 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:26:05.032 18:12:53 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:26:05.032 18:12:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:05.032 18:12:53 -- common/autotest_common.sh@10 -- # set +x 00:26:05.032 request: 00:26:05.032 { 00:26:05.032 "name": "NVMe0", 00:26:05.032 "trtype": "tcp", 00:26:05.032 "traddr": "10.0.0.2", 00:26:05.032 "hostaddr": "10.0.0.2", 00:26:05.032 "hostsvcid": "60000", 00:26:05.032 "adrfam": "ipv4", 00:26:05.032 "trsvcid": "4420", 00:26:05.032 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:05.032 "multipath": "failover", 00:26:05.032 "method": "bdev_nvme_attach_controller", 00:26:05.032 "req_id": 1 00:26:05.032 } 00:26:05.032 Got JSON-RPC error response 00:26:05.032 response: 00:26:05.032 { 00:26:05.032 "code": -114, 00:26:05.032 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:26:05.032 } 00:26:05.032 18:12:53 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:26:05.032 18:12:53 -- common/autotest_common.sh@641 -- # es=1 00:26:05.032 18:12:53 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:26:05.032 18:12:53 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:26:05.032 18:12:53 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:26:05.032 18:12:53 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:05.032 18:12:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:05.032 18:12:53 -- common/autotest_common.sh@10 -- # set +x 00:26:05.290 00:26:05.290 18:12:54 -- common/autotest_common.sh@577 -- # 
[[ 0 == 0 ]] 00:26:05.290 18:12:54 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:05.290 18:12:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:05.290 18:12:54 -- common/autotest_common.sh@10 -- # set +x 00:26:05.290 18:12:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:05.290 18:12:54 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:26:05.290 18:12:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:05.290 18:12:54 -- common/autotest_common.sh@10 -- # set +x 00:26:05.290 00:26:05.290 18:12:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:05.290 18:12:54 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:05.290 18:12:54 -- host/multicontroller.sh@90 -- # grep -c NVMe 00:26:05.290 18:12:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:05.290 18:12:54 -- common/autotest_common.sh@10 -- # set +x 00:26:05.290 18:12:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:05.290 18:12:54 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:26:05.290 18:12:54 -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:06.665 0 00:26:06.665 18:12:55 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:26:06.665 18:12:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:06.665 18:12:55 -- common/autotest_common.sh@10 -- # set +x 00:26:06.665 18:12:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:06.665 18:12:55 -- host/multicontroller.sh@100 -- # killprocess 3398366 00:26:06.665 18:12:55 -- common/autotest_common.sh@936 -- # '[' -z 3398366 ']' 00:26:06.665 18:12:55 -- common/autotest_common.sh@940 -- # kill -0 3398366 00:26:06.665 18:12:55 -- common/autotest_common.sh@941 -- # uname 00:26:06.665 18:12:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:06.665 18:12:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3398366 00:26:06.665 18:12:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:06.665 18:12:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:06.665 18:12:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3398366' 00:26:06.665 killing process with pid 3398366 00:26:06.665 18:12:55 -- common/autotest_common.sh@955 -- # kill 3398366 00:26:06.665 18:12:55 -- common/autotest_common.sh@960 -- # wait 3398366 00:26:06.665 18:12:55 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:06.665 18:12:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:06.665 18:12:55 -- common/autotest_common.sh@10 -- # set +x 00:26:06.665 18:12:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:06.665 18:12:55 -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:06.665 18:12:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:06.665 18:12:55 -- common/autotest_common.sh@10 -- # set +x 00:26:06.665 18:12:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:06.665 18:12:55 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 
00:26:06.665 18:12:55 -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:06.665 18:12:55 -- common/autotest_common.sh@1598 -- # read -r file 00:26:06.665 18:12:55 -- common/autotest_common.sh@1597 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:26:06.665 18:12:55 -- common/autotest_common.sh@1597 -- # sort -u 00:26:06.665 18:12:55 -- common/autotest_common.sh@1599 -- # cat 00:26:06.665 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:26:06.665 [2024-04-15 18:12:53.365165] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:26:06.665 [2024-04-15 18:12:53.365273] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3398366 ] 00:26:06.665 EAL: No free 2048 kB hugepages reported on node 1 00:26:06.665 [2024-04-15 18:12:53.438526] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:06.665 [2024-04-15 18:12:53.524446] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:06.665 [2024-04-15 18:12:54.118462] bdev.c:4548:bdev_name_add: *ERROR*: Bdev name 346342a2-485f-4916-a850-e040a309251b already exists 00:26:06.665 [2024-04-15 18:12:54.118504] bdev.c:7651:bdev_register: *ERROR*: Unable to add uuid:346342a2-485f-4916-a850-e040a309251b alias for bdev NVMe1n1 00:26:06.665 [2024-04-15 18:12:54.118524] bdev_nvme.c:4264:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:26:06.665 Running I/O for 1 seconds... 00:26:06.665 00:26:06.665 Latency(us) 00:26:06.665 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:06.665 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:26:06.665 NVMe0n1 : 1.00 18905.53 73.85 0.00 0.00 6753.10 2026.76 11650.84 00:26:06.665 =================================================================================================================== 00:26:06.665 Total : 18905.53 73.85 0.00 0.00 6753.10 2026.76 11650.84 00:26:06.665 Received shutdown signal, test time was about 1.000000 seconds 00:26:06.665 00:26:06.665 Latency(us) 00:26:06.665 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:06.665 =================================================================================================================== 00:26:06.665 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:06.665 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:26:06.665 18:12:55 -- common/autotest_common.sh@1604 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:06.665 18:12:55 -- common/autotest_common.sh@1598 -- # read -r file 00:26:06.665 18:12:55 -- host/multicontroller.sh@108 -- # nvmftestfini 00:26:06.665 18:12:55 -- nvmf/common.sh@477 -- # nvmfcleanup 00:26:06.665 18:12:55 -- nvmf/common.sh@117 -- # sync 00:26:06.665 18:12:55 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:06.665 18:12:55 -- nvmf/common.sh@120 -- # set +e 00:26:06.665 18:12:55 -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:06.665 18:12:55 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:06.665 rmmod nvme_tcp 00:26:06.665 rmmod nvme_fabrics 00:26:06.665 rmmod nvme_keyring 00:26:06.665 18:12:55 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:06.665 18:12:55 -- nvmf/common.sh@124 -- # set -e 
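[Editor's note] Two notes on the try.txt dump above. The "Bdev name 346342a2-... already exists" errors are expected: NVMe1 attaches to the same cnode1 namespace already exposed through NVMe0, so registering a second bdev with the same UUID is refused. And the NVMe0n1 row is internally consistent; IOPS times the 4096-byte IO size reproduces the MiB/s column:

    python3 -c 'print(18905.53 * 4096 / 2**20)'
    # 73.849..., matching the reported 73.85 MiB/s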
00:26:06.665 18:12:55 -- nvmf/common.sh@125 -- # return 0 00:26:06.665 18:12:55 -- nvmf/common.sh@478 -- # '[' -n 3398344 ']' 00:26:06.665 18:12:55 -- nvmf/common.sh@479 -- # killprocess 3398344 00:26:06.665 18:12:55 -- common/autotest_common.sh@936 -- # '[' -z 3398344 ']' 00:26:06.665 18:12:55 -- common/autotest_common.sh@940 -- # kill -0 3398344 00:26:06.665 18:12:55 -- common/autotest_common.sh@941 -- # uname 00:26:06.665 18:12:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:06.665 18:12:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3398344 00:26:06.924 18:12:55 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:26:06.924 18:12:55 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:26:06.924 18:12:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3398344' 00:26:06.924 killing process with pid 3398344 00:26:06.924 18:12:55 -- common/autotest_common.sh@955 -- # kill 3398344 00:26:06.924 18:12:55 -- common/autotest_common.sh@960 -- # wait 3398344 00:26:07.183 18:12:55 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:26:07.183 18:12:55 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:26:07.183 18:12:55 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:26:07.183 18:12:55 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:07.183 18:12:55 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:07.183 18:12:55 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:07.183 18:12:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:07.183 18:12:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:09.091 18:12:57 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:09.091 00:26:09.091 real 0m7.534s 00:26:09.091 user 0m11.528s 00:26:09.092 sys 0m2.529s 00:26:09.092 18:12:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:09.092 18:12:57 -- common/autotest_common.sh@10 -- # set +x 00:26:09.092 ************************************ 00:26:09.092 END TEST nvmf_multicontroller 00:26:09.092 ************************************ 00:26:09.092 18:12:57 -- nvmf/nvmf.sh@90 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:26:09.092 18:12:57 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:26:09.092 18:12:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:09.092 18:12:57 -- common/autotest_common.sh@10 -- # set +x 00:26:09.351 ************************************ 00:26:09.351 START TEST nvmf_aer 00:26:09.351 ************************************ 00:26:09.351 18:12:58 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:26:09.351 * Looking for test storage... 
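[Editor's note] The killprocess calls that closed out the multicontroller test above (pids 3398366 and 3398344) reduce to this pattern; the comm= check makes sure the pid still names one of our SPDK reactors before signalling (pid value taken from this run):

    pid=3398344
    kill -0 "$pid"                              # still alive?
    ps --no-headers -o comm= "$pid"             # reactor_1, so it is ours (and not sudo)
    kill "$pid" && wait "$pid"                  # wait works because the script spawned it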
00:26:09.351 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:09.351 18:12:58 -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:09.351 18:12:58 -- nvmf/common.sh@7 -- # uname -s 00:26:09.351 18:12:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:09.351 18:12:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:09.351 18:12:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:09.351 18:12:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:09.351 18:12:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:09.351 18:12:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:09.351 18:12:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:09.351 18:12:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:09.351 18:12:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:09.351 18:12:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:09.351 18:12:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:26:09.351 18:12:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:26:09.351 18:12:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:09.351 18:12:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:09.351 18:12:58 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:09.351 18:12:58 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:09.351 18:12:58 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:09.351 18:12:58 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:09.351 18:12:58 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:09.351 18:12:58 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:09.351 18:12:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:09.351 18:12:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:09.351 18:12:58 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:09.351 18:12:58 -- paths/export.sh@5 -- # export PATH 00:26:09.351 18:12:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:09.351 18:12:58 -- nvmf/common.sh@47 -- # : 0 00:26:09.351 18:12:58 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:09.351 18:12:58 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:09.351 18:12:58 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:09.351 18:12:58 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:09.351 18:12:58 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:09.351 18:12:58 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:09.351 18:12:58 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:09.351 18:12:58 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:09.351 18:12:58 -- host/aer.sh@11 -- # nvmftestinit 00:26:09.351 18:12:58 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:26:09.351 18:12:58 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:09.351 18:12:58 -- nvmf/common.sh@437 -- # prepare_net_devs 00:26:09.351 18:12:58 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:26:09.351 18:12:58 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:26:09.351 18:12:58 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:09.351 18:12:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:09.351 18:12:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:09.351 18:12:58 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:26:09.351 18:12:58 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:26:09.351 18:12:58 -- nvmf/common.sh@285 -- # xtrace_disable 00:26:09.351 18:12:58 -- common/autotest_common.sh@10 -- # set +x 00:26:11.885 18:13:00 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:11.885 18:13:00 -- nvmf/common.sh@291 -- # pci_devs=() 00:26:11.885 18:13:00 -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:11.885 18:13:00 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:11.885 18:13:00 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:11.885 18:13:00 -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:11.885 18:13:00 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:11.885 18:13:00 -- nvmf/common.sh@295 -- # net_devs=() 00:26:11.885 18:13:00 -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:11.885 18:13:00 -- nvmf/common.sh@296 -- # e810=() 00:26:11.885 18:13:00 -- nvmf/common.sh@296 -- # local -ga e810 00:26:11.885 18:13:00 -- nvmf/common.sh@297 -- # x722=() 00:26:11.885 
18:13:00 -- nvmf/common.sh@297 -- # local -ga x722 00:26:11.885 18:13:00 -- nvmf/common.sh@298 -- # mlx=() 00:26:11.885 18:13:00 -- nvmf/common.sh@298 -- # local -ga mlx 00:26:11.885 18:13:00 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:11.885 18:13:00 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:11.885 18:13:00 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:11.885 18:13:00 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:11.885 18:13:00 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:11.885 18:13:00 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:11.885 18:13:00 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:11.885 18:13:00 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:11.885 18:13:00 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:11.885 18:13:00 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:11.885 18:13:00 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:11.885 18:13:00 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:11.885 18:13:00 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:11.885 18:13:00 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:11.885 18:13:00 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:11.885 18:13:00 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:11.885 18:13:00 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:11.885 18:13:00 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:11.885 18:13:00 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:26:11.885 Found 0000:84:00.0 (0x8086 - 0x159b) 00:26:11.885 18:13:00 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:11.885 18:13:00 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:11.885 18:13:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:11.885 18:13:00 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:11.885 18:13:00 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:11.885 18:13:00 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:11.885 18:13:00 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:26:11.885 Found 0000:84:00.1 (0x8086 - 0x159b) 00:26:11.885 18:13:00 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:11.885 18:13:00 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:11.885 18:13:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:11.885 18:13:00 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:11.885 18:13:00 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:11.885 18:13:00 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:11.885 18:13:00 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:11.885 18:13:00 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:11.885 18:13:00 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:11.885 18:13:00 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:11.885 18:13:00 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:26:11.885 18:13:00 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:11.885 18:13:00 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:26:11.885 Found net devices under 0000:84:00.0: cvl_0_0 00:26:11.885 18:13:00 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:26:11.885 18:13:00 -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:11.885 18:13:00 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:11.885 18:13:00 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:26:11.885 18:13:00 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:11.885 18:13:00 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:26:11.885 Found net devices under 0000:84:00.1: cvl_0_1 00:26:11.885 18:13:00 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:26:11.885 18:13:00 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:26:11.885 18:13:00 -- nvmf/common.sh@403 -- # is_hw=yes 00:26:11.885 18:13:00 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:26:11.885 18:13:00 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:26:11.885 18:13:00 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:26:11.885 18:13:00 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:11.885 18:13:00 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:11.885 18:13:00 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:11.885 18:13:00 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:11.885 18:13:00 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:11.885 18:13:00 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:11.885 18:13:00 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:11.885 18:13:00 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:11.885 18:13:00 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:11.885 18:13:00 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:11.885 18:13:00 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:11.885 18:13:00 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:11.885 18:13:00 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:11.885 18:13:00 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:11.885 18:13:00 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:11.885 18:13:00 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:11.885 18:13:00 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:11.885 18:13:00 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:11.885 18:13:00 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:11.885 18:13:00 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:11.885 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:11.885 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms 00:26:11.885 00:26:11.885 --- 10.0.0.2 ping statistics --- 00:26:11.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:11.885 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:26:11.885 18:13:00 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:11.885 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:11.885 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:26:11.885 00:26:11.885 --- 10.0.0.1 ping statistics --- 00:26:11.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:11.885 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:26:11.885 18:13:00 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:11.885 18:13:00 -- nvmf/common.sh@411 -- # return 0 00:26:11.885 18:13:00 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:26:11.885 18:13:00 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:11.885 18:13:00 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:26:11.885 18:13:00 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:26:11.885 18:13:00 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:11.885 18:13:00 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:26:11.885 18:13:00 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:26:11.885 18:13:00 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:26:11.885 18:13:00 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:26:11.885 18:13:00 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:11.886 18:13:00 -- common/autotest_common.sh@10 -- # set +x 00:26:11.886 18:13:00 -- nvmf/common.sh@470 -- # nvmfpid=3400603 00:26:11.886 18:13:00 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:11.886 18:13:00 -- nvmf/common.sh@471 -- # waitforlisten 3400603 00:26:11.886 18:13:00 -- common/autotest_common.sh@817 -- # '[' -z 3400603 ']' 00:26:11.886 18:13:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:11.886 18:13:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:11.886 18:13:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:11.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:11.886 18:13:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:11.886 18:13:00 -- common/autotest_common.sh@10 -- # set +x 00:26:11.886 [2024-04-15 18:13:00.746260] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:26:11.886 [2024-04-15 18:13:00.746342] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:11.886 EAL: No free 2048 kB hugepages reported on node 1 00:26:11.886 [2024-04-15 18:13:00.823608] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:12.144 [2024-04-15 18:13:00.918896] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:12.144 [2024-04-15 18:13:00.918945] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:12.145 [2024-04-15 18:13:00.918962] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:12.145 [2024-04-15 18:13:00.918977] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:12.145 [2024-04-15 18:13:00.918990] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
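Condensed from the nvmftestinit trace above: the two cvl ports are treated as a wired pair, the target-side port cvl_0_0 is moved into the private namespace cvl_0_0_ns_spdk and addressed 10.0.0.2, the initiator keeps cvl_0_1 at 10.0.0.1, port 4420 is opened in iptables, and the target is launched inside the namespace with core mask 0xF (hence the four reactors that start next). The same setup as a standalone sketch, with commands lifted from the trace (paths relative to the spdk checkout):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF

With the target up, aer.sh creates the TCP transport, exports Malloc0 under cnode1 with a listener, starts test/nvme/aer/aer with -n 2, then hot-adds Malloc1 as nsid 2 to trigger the Namespace Attribute Changed AEN (log page 4) seen further down.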
00:26:12.145 [2024-04-15 18:13:00.919097] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:12.145 [2024-04-15 18:13:00.919170] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:12.145 [2024-04-15 18:13:00.919222] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:12.145 [2024-04-15 18:13:00.919224] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:12.145 18:13:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:12.145 18:13:01 -- common/autotest_common.sh@850 -- # return 0 00:26:12.145 18:13:01 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:26:12.145 18:13:01 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:12.145 18:13:01 -- common/autotest_common.sh@10 -- # set +x 00:26:12.145 18:13:01 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:12.145 18:13:01 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:12.145 18:13:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:12.145 18:13:01 -- common/autotest_common.sh@10 -- # set +x 00:26:12.145 [2024-04-15 18:13:01.077051] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:12.145 18:13:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:12.145 18:13:01 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:26:12.145 18:13:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:12.145 18:13:01 -- common/autotest_common.sh@10 -- # set +x 00:26:12.403 Malloc0 00:26:12.403 18:13:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:12.403 18:13:01 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:26:12.403 18:13:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:12.403 18:13:01 -- common/autotest_common.sh@10 -- # set +x 00:26:12.403 18:13:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:12.403 18:13:01 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:12.403 18:13:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:12.403 18:13:01 -- common/autotest_common.sh@10 -- # set +x 00:26:12.403 18:13:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:12.403 18:13:01 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:12.403 18:13:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:12.403 18:13:01 -- common/autotest_common.sh@10 -- # set +x 00:26:12.403 [2024-04-15 18:13:01.131336] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:12.403 18:13:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:12.403 18:13:01 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:26:12.403 18:13:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:12.403 18:13:01 -- common/autotest_common.sh@10 -- # set +x 00:26:12.403 [2024-04-15 18:13:01.139047] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:26:12.403 [ 00:26:12.403 { 00:26:12.403 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:12.403 "subtype": "Discovery", 00:26:12.403 "listen_addresses": [], 00:26:12.403 "allow_any_host": true, 00:26:12.403 "hosts": [] 00:26:12.403 }, 00:26:12.403 { 00:26:12.403 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:26:12.403 "subtype": "NVMe", 00:26:12.403 "listen_addresses": [ 00:26:12.403 { 00:26:12.403 "transport": "TCP", 00:26:12.403 "trtype": "TCP", 00:26:12.403 "adrfam": "IPv4", 00:26:12.403 "traddr": "10.0.0.2", 00:26:12.403 "trsvcid": "4420" 00:26:12.403 } 00:26:12.403 ], 00:26:12.403 "allow_any_host": true, 00:26:12.403 "hosts": [], 00:26:12.403 "serial_number": "SPDK00000000000001", 00:26:12.403 "model_number": "SPDK bdev Controller", 00:26:12.403 "max_namespaces": 2, 00:26:12.403 "min_cntlid": 1, 00:26:12.403 "max_cntlid": 65519, 00:26:12.403 "namespaces": [ 00:26:12.403 { 00:26:12.403 "nsid": 1, 00:26:12.403 "bdev_name": "Malloc0", 00:26:12.403 "name": "Malloc0", 00:26:12.403 "nguid": "BBF286D8C4794FF7B3A5DF05B696DB54", 00:26:12.403 "uuid": "bbf286d8-c479-4ff7-b3a5-df05b696db54" 00:26:12.403 } 00:26:12.403 ] 00:26:12.403 } 00:26:12.403 ] 00:26:12.403 18:13:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:12.403 18:13:01 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:26:12.403 18:13:01 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:26:12.403 18:13:01 -- host/aer.sh@33 -- # aerpid=3400750 00:26:12.403 18:13:01 -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:26:12.403 18:13:01 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:26:12.403 18:13:01 -- common/autotest_common.sh@1251 -- # local i=0 00:26:12.403 18:13:01 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:12.403 18:13:01 -- common/autotest_common.sh@1253 -- # '[' 0 -lt 200 ']' 00:26:12.403 18:13:01 -- common/autotest_common.sh@1254 -- # i=1 00:26:12.403 18:13:01 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:26:12.403 EAL: No free 2048 kB hugepages reported on node 1 00:26:12.403 18:13:01 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:12.403 18:13:01 -- common/autotest_common.sh@1253 -- # '[' 1 -lt 200 ']' 00:26:12.403 18:13:01 -- common/autotest_common.sh@1254 -- # i=2 00:26:12.403 18:13:01 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:26:12.403 18:13:01 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:12.403 18:13:01 -- common/autotest_common.sh@1253 -- # '[' 2 -lt 200 ']' 00:26:12.403 18:13:01 -- common/autotest_common.sh@1254 -- # i=3 00:26:12.403 18:13:01 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:26:12.662 18:13:01 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:12.662 18:13:01 -- common/autotest_common.sh@1258 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:26:12.662 18:13:01 -- common/autotest_common.sh@1262 -- # return 0 00:26:12.662 18:13:01 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:26:12.662 18:13:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:12.662 18:13:01 -- common/autotest_common.sh@10 -- # set +x 00:26:12.662 Malloc1 00:26:12.662 18:13:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:12.662 18:13:01 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:26:12.662 18:13:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:12.662 18:13:01 -- common/autotest_common.sh@10 -- # set +x 00:26:12.662 18:13:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:12.662 18:13:01 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:26:12.662 18:13:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:12.662 18:13:01 -- common/autotest_common.sh@10 -- # set +x 00:26:12.662 Asynchronous Event Request test 00:26:12.662 Attaching to 10.0.0.2 00:26:12.662 Attached to 10.0.0.2 00:26:12.662 Registering asynchronous event callbacks... 00:26:12.662 Starting namespace attribute notice tests for all controllers... 00:26:12.662 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:26:12.662 aer_cb - Changed Namespace 00:26:12.662 Cleaning up... 00:26:12.662 [ 00:26:12.662 { 00:26:12.662 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:12.662 "subtype": "Discovery", 00:26:12.662 "listen_addresses": [], 00:26:12.662 "allow_any_host": true, 00:26:12.662 "hosts": [] 00:26:12.662 }, 00:26:12.662 { 00:26:12.662 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:12.662 "subtype": "NVMe", 00:26:12.662 "listen_addresses": [ 00:26:12.662 { 00:26:12.662 "transport": "TCP", 00:26:12.662 "trtype": "TCP", 00:26:12.662 "adrfam": "IPv4", 00:26:12.662 "traddr": "10.0.0.2", 00:26:12.662 "trsvcid": "4420" 00:26:12.662 } 00:26:12.662 ], 00:26:12.662 "allow_any_host": true, 00:26:12.662 "hosts": [], 00:26:12.662 "serial_number": "SPDK00000000000001", 00:26:12.662 "model_number": "SPDK bdev Controller", 00:26:12.662 "max_namespaces": 2, 00:26:12.662 "min_cntlid": 1, 00:26:12.662 "max_cntlid": 65519, 00:26:12.662 "namespaces": [ 00:26:12.662 { 00:26:12.662 "nsid": 1, 00:26:12.662 "bdev_name": "Malloc0", 00:26:12.662 "name": "Malloc0", 00:26:12.662 "nguid": "BBF286D8C4794FF7B3A5DF05B696DB54", 00:26:12.662 "uuid": "bbf286d8-c479-4ff7-b3a5-df05b696db54" 00:26:12.662 }, 00:26:12.662 { 00:26:12.662 "nsid": 2, 00:26:12.662 "bdev_name": "Malloc1", 00:26:12.662 "name": "Malloc1", 00:26:12.662 "nguid": "F8D4F81BE60A406B906CDCE9AC62A49E", 00:26:12.662 "uuid": "f8d4f81b-e60a-406b-906c-dce9ac62a49e" 00:26:12.662 } 00:26:12.662 ] 00:26:12.662 } 00:26:12.662 ] 00:26:12.662 18:13:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:12.662 18:13:01 -- host/aer.sh@43 -- # wait 3400750 00:26:12.662 18:13:01 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:26:12.662 18:13:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:12.662 18:13:01 -- common/autotest_common.sh@10 -- # set +x 00:26:12.662 18:13:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:12.662 18:13:01 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:26:12.662 18:13:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:12.662 18:13:01 -- common/autotest_common.sh@10 -- # set +x 00:26:12.662 18:13:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:12.662 18:13:01 -- host/aer.sh@47 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:12.662 18:13:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:12.662 18:13:01 -- common/autotest_common.sh@10 -- # set +x 00:26:12.662 18:13:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:12.662 18:13:01 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:26:12.662 18:13:01 -- host/aer.sh@51 -- # nvmftestfini 00:26:12.662 18:13:01 -- nvmf/common.sh@477 -- # nvmfcleanup 00:26:12.662 18:13:01 -- nvmf/common.sh@117 -- # sync 00:26:12.662 18:13:01 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:12.662 18:13:01 -- nvmf/common.sh@120 -- # set +e 00:26:12.662 18:13:01 -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:12.662 18:13:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:12.921 rmmod nvme_tcp 00:26:12.921 rmmod nvme_fabrics 00:26:12.921 rmmod nvme_keyring 00:26:12.921 18:13:01 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:12.921 18:13:01 -- nvmf/common.sh@124 -- # set -e 00:26:12.921 18:13:01 -- nvmf/common.sh@125 -- # return 0 00:26:12.921 18:13:01 -- nvmf/common.sh@478 -- # '[' -n 3400603 ']' 00:26:12.921 18:13:01 -- nvmf/common.sh@479 -- # killprocess 3400603 00:26:12.921 18:13:01 -- common/autotest_common.sh@936 -- # '[' -z 3400603 ']' 00:26:12.921 18:13:01 -- common/autotest_common.sh@940 -- # kill -0 3400603 00:26:12.921 18:13:01 -- common/autotest_common.sh@941 -- # uname 00:26:12.921 18:13:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:12.921 18:13:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3400603 00:26:12.921 18:13:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:12.921 18:13:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:12.921 18:13:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3400603' 00:26:12.921 killing process with pid 3400603 00:26:12.921 18:13:01 -- common/autotest_common.sh@955 -- # kill 3400603 00:26:12.921 [2024-04-15 18:13:01.717153] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:26:12.921 18:13:01 -- common/autotest_common.sh@960 -- # wait 3400603 00:26:13.180 18:13:01 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:26:13.180 18:13:01 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:26:13.180 18:13:01 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:26:13.180 18:13:01 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:13.180 18:13:01 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:13.180 18:13:01 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:13.180 18:13:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:13.180 18:13:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:15.085 18:13:03 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:15.085 00:26:15.085 real 0m5.881s 00:26:15.085 user 0m4.825s 00:26:15.085 sys 0m2.280s 00:26:15.085 18:13:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:15.085 18:13:03 -- common/autotest_common.sh@10 -- # set +x 00:26:15.085 ************************************ 00:26:15.085 END TEST nvmf_aer 00:26:15.085 ************************************ 00:26:15.085 18:13:04 -- nvmf/nvmf.sh@91 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:26:15.085 18:13:04 -- common/autotest_common.sh@1087 -- # 
'[' 3 -le 1 ']' 00:26:15.085 18:13:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:15.085 18:13:04 -- common/autotest_common.sh@10 -- # set +x 00:26:15.344 ************************************ 00:26:15.344 START TEST nvmf_async_init 00:26:15.344 ************************************ 00:26:15.344 18:13:04 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:26:15.344 * Looking for test storage... 00:26:15.344 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:15.344 18:13:04 -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:15.344 18:13:04 -- nvmf/common.sh@7 -- # uname -s 00:26:15.344 18:13:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:15.344 18:13:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:15.344 18:13:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:15.344 18:13:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:15.344 18:13:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:15.344 18:13:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:15.344 18:13:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:15.344 18:13:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:15.344 18:13:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:15.344 18:13:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:15.344 18:13:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:26:15.344 18:13:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:26:15.344 18:13:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:15.344 18:13:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:15.344 18:13:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:15.344 18:13:04 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:15.344 18:13:04 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:15.344 18:13:04 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:15.344 18:13:04 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:15.344 18:13:04 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:15.344 18:13:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:15.344 18:13:04 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:15.344 18:13:04 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:15.344 18:13:04 -- paths/export.sh@5 -- # export PATH 00:26:15.344 18:13:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:15.344 18:13:04 -- nvmf/common.sh@47 -- # : 0 00:26:15.344 18:13:04 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:15.344 18:13:04 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:15.344 18:13:04 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:15.344 18:13:04 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:15.344 18:13:04 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:15.344 18:13:04 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:15.344 18:13:04 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:15.344 18:13:04 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:15.344 18:13:04 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:26:15.344 18:13:04 -- host/async_init.sh@14 -- # null_block_size=512 00:26:15.344 18:13:04 -- host/async_init.sh@15 -- # null_bdev=null0 00:26:15.344 18:13:04 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:26:15.344 18:13:04 -- host/async_init.sh@20 -- # uuidgen 00:26:15.344 18:13:04 -- host/async_init.sh@20 -- # tr -d - 00:26:15.344 18:13:04 -- host/async_init.sh@20 -- # nguid=edf669df67b34ff2bc6474efd69f053c 00:26:15.344 18:13:04 -- host/async_init.sh@22 -- # nvmftestinit 00:26:15.344 18:13:04 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:26:15.344 18:13:04 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:15.344 18:13:04 -- nvmf/common.sh@437 -- # prepare_net_devs 00:26:15.344 18:13:04 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:26:15.344 18:13:04 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:26:15.344 18:13:04 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:15.344 18:13:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:15.344 18:13:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:15.344 
18:13:04 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:26:15.344 18:13:04 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:26:15.344 18:13:04 -- nvmf/common.sh@285 -- # xtrace_disable 00:26:15.344 18:13:04 -- common/autotest_common.sh@10 -- # set +x 00:26:17.937 18:13:06 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:17.937 18:13:06 -- nvmf/common.sh@291 -- # pci_devs=() 00:26:17.937 18:13:06 -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:17.937 18:13:06 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:17.937 18:13:06 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:17.937 18:13:06 -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:17.937 18:13:06 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:17.937 18:13:06 -- nvmf/common.sh@295 -- # net_devs=() 00:26:17.937 18:13:06 -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:17.937 18:13:06 -- nvmf/common.sh@296 -- # e810=() 00:26:17.937 18:13:06 -- nvmf/common.sh@296 -- # local -ga e810 00:26:17.937 18:13:06 -- nvmf/common.sh@297 -- # x722=() 00:26:17.937 18:13:06 -- nvmf/common.sh@297 -- # local -ga x722 00:26:17.937 18:13:06 -- nvmf/common.sh@298 -- # mlx=() 00:26:17.937 18:13:06 -- nvmf/common.sh@298 -- # local -ga mlx 00:26:17.937 18:13:06 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:17.937 18:13:06 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:17.937 18:13:06 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:17.937 18:13:06 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:17.937 18:13:06 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:17.937 18:13:06 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:17.937 18:13:06 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:17.937 18:13:06 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:17.937 18:13:06 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:17.937 18:13:06 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:17.937 18:13:06 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:17.937 18:13:06 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:17.937 18:13:06 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:17.937 18:13:06 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:17.937 18:13:06 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:17.937 18:13:06 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:17.937 18:13:06 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:17.937 18:13:06 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:17.937 18:13:06 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:26:17.937 Found 0000:84:00.0 (0x8086 - 0x159b) 00:26:17.937 18:13:06 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:17.937 18:13:06 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:17.937 18:13:06 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:17.937 18:13:06 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:17.937 18:13:06 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:17.937 18:13:06 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:17.937 18:13:06 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:26:17.937 Found 0000:84:00.1 (0x8086 - 0x159b) 00:26:17.937 18:13:06 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:17.937 
18:13:06 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:17.937 18:13:06 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:17.937 18:13:06 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:17.937 18:13:06 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:17.937 18:13:06 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:17.937 18:13:06 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:17.937 18:13:06 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:17.937 18:13:06 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:17.937 18:13:06 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:17.937 18:13:06 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:26:17.937 18:13:06 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:17.937 18:13:06 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:26:17.937 Found net devices under 0000:84:00.0: cvl_0_0 00:26:17.937 18:13:06 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:26:17.937 18:13:06 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:17.937 18:13:06 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:17.937 18:13:06 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:26:17.937 18:13:06 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:17.937 18:13:06 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:26:17.937 Found net devices under 0000:84:00.1: cvl_0_1 00:26:17.937 18:13:06 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:26:17.937 18:13:06 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:26:17.937 18:13:06 -- nvmf/common.sh@403 -- # is_hw=yes 00:26:17.937 18:13:06 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:26:17.937 18:13:06 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:26:17.937 18:13:06 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:26:17.937 18:13:06 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:17.937 18:13:06 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:17.937 18:13:06 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:17.937 18:13:06 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:17.937 18:13:06 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:17.937 18:13:06 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:17.937 18:13:06 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:17.937 18:13:06 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:17.937 18:13:06 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:17.937 18:13:06 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:17.937 18:13:06 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:17.937 18:13:06 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:17.937 18:13:06 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:17.937 18:13:06 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:17.937 18:13:06 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:17.937 18:13:06 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:17.937 18:13:06 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:17.937 18:13:06 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:17.937 18:13:06 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:26:17.937 18:13:06 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:17.937 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:17.937 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.347 ms 00:26:17.937 00:26:17.937 --- 10.0.0.2 ping statistics --- 00:26:17.937 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:17.937 rtt min/avg/max/mdev = 0.347/0.347/0.347/0.000 ms 00:26:17.937 18:13:06 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:17.937 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:17.937 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:26:17.937 00:26:17.937 --- 10.0.0.1 ping statistics --- 00:26:17.937 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:17.937 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:26:17.937 18:13:06 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:17.938 18:13:06 -- nvmf/common.sh@411 -- # return 0 00:26:17.938 18:13:06 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:26:17.938 18:13:06 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:17.938 18:13:06 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:26:17.938 18:13:06 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:26:17.938 18:13:06 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:17.938 18:13:06 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:26:17.938 18:13:06 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:26:17.938 18:13:06 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:26:17.938 18:13:06 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:26:17.938 18:13:06 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:17.938 18:13:06 -- common/autotest_common.sh@10 -- # set +x 00:26:17.938 18:13:06 -- nvmf/common.sh@470 -- # nvmfpid=3402828 00:26:17.938 18:13:06 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:26:17.938 18:13:06 -- nvmf/common.sh@471 -- # waitforlisten 3402828 00:26:17.938 18:13:06 -- common/autotest_common.sh@817 -- # '[' -z 3402828 ']' 00:26:17.938 18:13:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:17.938 18:13:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:17.938 18:13:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:17.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:17.938 18:13:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:17.938 18:13:06 -- common/autotest_common.sh@10 -- # set +x 00:26:17.938 [2024-04-15 18:13:06.876353] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:26:17.938 [2024-04-15 18:13:06.876467] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:18.197 EAL: No free 2048 kB hugepages reported on node 1 00:26:18.197 [2024-04-15 18:13:06.958824] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:18.197 [2024-04-15 18:13:07.055316] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:18.197 [2024-04-15 18:13:07.055386] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:26:18.197 [2024-04-15 18:13:07.055404] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:18.197 [2024-04-15 18:13:07.055418] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:18.197 [2024-04-15 18:13:07.055430] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:18.197 [2024-04-15 18:13:07.055473] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:18.457 18:13:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:18.457 18:13:07 -- common/autotest_common.sh@850 -- # return 0 00:26:18.457 18:13:07 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:26:18.457 18:13:07 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:18.457 18:13:07 -- common/autotest_common.sh@10 -- # set +x 00:26:18.457 18:13:07 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:18.457 18:13:07 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:26:18.457 18:13:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:18.457 18:13:07 -- common/autotest_common.sh@10 -- # set +x 00:26:18.457 [2024-04-15 18:13:07.205652] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:18.457 18:13:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:18.457 18:13:07 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:26:18.457 18:13:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:18.457 18:13:07 -- common/autotest_common.sh@10 -- # set +x 00:26:18.457 null0 00:26:18.457 18:13:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:18.457 18:13:07 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:26:18.457 18:13:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:18.457 18:13:07 -- common/autotest_common.sh@10 -- # set +x 00:26:18.457 18:13:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:18.457 18:13:07 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:26:18.457 18:13:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:18.457 18:13:07 -- common/autotest_common.sh@10 -- # set +x 00:26:18.457 18:13:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:18.457 18:13:07 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g edf669df67b34ff2bc6474efd69f053c 00:26:18.457 18:13:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:18.457 18:13:07 -- common/autotest_common.sh@10 -- # set +x 00:26:18.457 18:13:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:18.457 18:13:07 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:18.457 18:13:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:18.457 18:13:07 -- common/autotest_common.sh@10 -- # set +x 00:26:18.457 [2024-04-15 18:13:07.245924] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:18.457 18:13:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:18.457 18:13:07 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:26:18.457 18:13:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:18.457 18:13:07 -- common/autotest_common.sh@10 -- # set +x 00:26:18.718 nvme0n1 00:26:18.718 
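Condensed, the RPC sequence async_init.sh has driven to this point: a 1024-block, 512-byte null bdev is exported as namespace 1 of cnode0 with an explicit NGUID, a TCP listener is added, and the host side attaches, yielding bdev nvme0n1. The commands below are lifted from the rpc_cmd trace above (the harness routes them to the target's RPC socket inside the namespace; plain rpc.py is shown for readability):

./scripts/rpc.py nvmf_create_transport -t tcp -o
./scripts/rpc.py bdev_null_create null0 1024 512
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g edf669df67b34ff2bc6474efd69f053c
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0

The NGUID round-trips: the bdev_get_bdevs JSON that follows reports the bdev uuid as its dashed form, edf669df-67b3-4ff2-bc64-74efd69f053c.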
18:13:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:18.718 18:13:07 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:26:18.718 18:13:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:18.718 18:13:07 -- common/autotest_common.sh@10 -- # set +x 00:26:18.718 [ 00:26:18.718 { 00:26:18.718 "name": "nvme0n1", 00:26:18.718 "aliases": [ 00:26:18.718 "edf669df-67b3-4ff2-bc64-74efd69f053c" 00:26:18.718 ], 00:26:18.718 "product_name": "NVMe disk", 00:26:18.718 "block_size": 512, 00:26:18.718 "num_blocks": 2097152, 00:26:18.718 "uuid": "edf669df-67b3-4ff2-bc64-74efd69f053c", 00:26:18.718 "assigned_rate_limits": { 00:26:18.718 "rw_ios_per_sec": 0, 00:26:18.718 "rw_mbytes_per_sec": 0, 00:26:18.718 "r_mbytes_per_sec": 0, 00:26:18.718 "w_mbytes_per_sec": 0 00:26:18.718 }, 00:26:18.718 "claimed": false, 00:26:18.718 "zoned": false, 00:26:18.718 "supported_io_types": { 00:26:18.718 "read": true, 00:26:18.718 "write": true, 00:26:18.718 "unmap": false, 00:26:18.718 "write_zeroes": true, 00:26:18.718 "flush": true, 00:26:18.718 "reset": true, 00:26:18.718 "compare": true, 00:26:18.718 "compare_and_write": true, 00:26:18.718 "abort": true, 00:26:18.718 "nvme_admin": true, 00:26:18.718 "nvme_io": true 00:26:18.718 }, 00:26:18.718 "memory_domains": [ 00:26:18.718 { 00:26:18.718 "dma_device_id": "system", 00:26:18.718 "dma_device_type": 1 00:26:18.718 } 00:26:18.718 ], 00:26:18.718 "driver_specific": { 00:26:18.718 "nvme": [ 00:26:18.718 { 00:26:18.718 "trid": { 00:26:18.718 "trtype": "TCP", 00:26:18.718 "adrfam": "IPv4", 00:26:18.718 "traddr": "10.0.0.2", 00:26:18.718 "trsvcid": "4420", 00:26:18.718 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:26:18.718 }, 00:26:18.718 "ctrlr_data": { 00:26:18.718 "cntlid": 1, 00:26:18.718 "vendor_id": "0x8086", 00:26:18.718 "model_number": "SPDK bdev Controller", 00:26:18.718 "serial_number": "00000000000000000000", 00:26:18.718 "firmware_revision": "24.05", 00:26:18.718 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:18.718 "oacs": { 00:26:18.718 "security": 0, 00:26:18.718 "format": 0, 00:26:18.718 "firmware": 0, 00:26:18.718 "ns_manage": 0 00:26:18.718 }, 00:26:18.718 "multi_ctrlr": true, 00:26:18.718 "ana_reporting": false 00:26:18.718 }, 00:26:18.718 "vs": { 00:26:18.718 "nvme_version": "1.3" 00:26:18.718 }, 00:26:18.718 "ns_data": { 00:26:18.718 "id": 1, 00:26:18.718 "can_share": true 00:26:18.718 } 00:26:18.718 } 00:26:18.718 ], 00:26:18.718 "mp_policy": "active_passive" 00:26:18.718 } 00:26:18.718 } 00:26:18.718 ] 00:26:18.718 18:13:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:18.718 18:13:07 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:26:18.718 18:13:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:18.718 18:13:07 -- common/autotest_common.sh@10 -- # set +x 00:26:18.718 [2024-04-15 18:13:07.498492] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:18.718 [2024-04-15 18:13:07.498592] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe447b0 (9): Bad file descriptor 00:26:18.718 [2024-04-15 18:13:07.641222] bdev_nvme.c:2050:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
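bdev_nvme_reset_controller tears down and re-establishes the fabric connection, and each reconnect creates a fresh dynamic controller on the target, so the cntlid reported for nvme0n1 advances from 1 (first attach) to 2 in the JSON below, and later to 3 after the detach and TLS re-attach on port 4421. A quick way to watch that field by hand (a sketch; it assumes jq is available, which the harness itself does not use):

./scripts/rpc.py bdev_nvme_reset_controller nvme0
./scripts/rpc.py bdev_get_bdevs -b nvme0n1 | jq '.[0].driver_specific.nvme[0].ctrlr_data.cntlid'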
00:26:18.718 18:13:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:18.718 18:13:07 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:26:18.718 18:13:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:18.718 18:13:07 -- common/autotest_common.sh@10 -- # set +x 00:26:18.718 [ 00:26:18.718 { 00:26:18.718 "name": "nvme0n1", 00:26:18.718 "aliases": [ 00:26:18.718 "edf669df-67b3-4ff2-bc64-74efd69f053c" 00:26:18.718 ], 00:26:18.718 "product_name": "NVMe disk", 00:26:18.718 "block_size": 512, 00:26:18.718 "num_blocks": 2097152, 00:26:18.718 "uuid": "edf669df-67b3-4ff2-bc64-74efd69f053c", 00:26:18.718 "assigned_rate_limits": { 00:26:18.718 "rw_ios_per_sec": 0, 00:26:18.718 "rw_mbytes_per_sec": 0, 00:26:18.718 "r_mbytes_per_sec": 0, 00:26:18.718 "w_mbytes_per_sec": 0 00:26:18.718 }, 00:26:18.718 "claimed": false, 00:26:18.718 "zoned": false, 00:26:18.718 "supported_io_types": { 00:26:18.718 "read": true, 00:26:18.718 "write": true, 00:26:18.718 "unmap": false, 00:26:18.718 "write_zeroes": true, 00:26:18.718 "flush": true, 00:26:18.718 "reset": true, 00:26:18.718 "compare": true, 00:26:18.718 "compare_and_write": true, 00:26:18.718 "abort": true, 00:26:18.718 "nvme_admin": true, 00:26:18.718 "nvme_io": true 00:26:18.718 }, 00:26:18.718 "memory_domains": [ 00:26:18.718 { 00:26:18.718 "dma_device_id": "system", 00:26:18.718 "dma_device_type": 1 00:26:18.718 } 00:26:18.718 ], 00:26:18.718 "driver_specific": { 00:26:18.718 "nvme": [ 00:26:18.718 { 00:26:18.718 "trid": { 00:26:18.718 "trtype": "TCP", 00:26:18.718 "adrfam": "IPv4", 00:26:18.718 "traddr": "10.0.0.2", 00:26:18.718 "trsvcid": "4420", 00:26:18.718 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:26:18.718 }, 00:26:18.718 "ctrlr_data": { 00:26:18.718 "cntlid": 2, 00:26:18.718 "vendor_id": "0x8086", 00:26:18.718 "model_number": "SPDK bdev Controller", 00:26:18.718 "serial_number": "00000000000000000000", 00:26:18.718 "firmware_revision": "24.05", 00:26:18.718 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:18.718 "oacs": { 00:26:18.718 "security": 0, 00:26:18.718 "format": 0, 00:26:18.718 "firmware": 0, 00:26:18.718 "ns_manage": 0 00:26:18.718 }, 00:26:18.718 "multi_ctrlr": true, 00:26:18.718 "ana_reporting": false 00:26:18.718 }, 00:26:18.718 "vs": { 00:26:18.718 "nvme_version": "1.3" 00:26:18.718 }, 00:26:18.718 "ns_data": { 00:26:18.718 "id": 1, 00:26:18.718 "can_share": true 00:26:18.718 } 00:26:18.718 } 00:26:18.718 ], 00:26:18.718 "mp_policy": "active_passive" 00:26:18.718 } 00:26:18.718 } 00:26:18.718 ] 00:26:18.718 18:13:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:18.718 18:13:07 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:18.718 18:13:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:18.718 18:13:07 -- common/autotest_common.sh@10 -- # set +x 00:26:18.979 18:13:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:18.979 18:13:07 -- host/async_init.sh@53 -- # mktemp 00:26:18.979 18:13:07 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.kazH6qEYJQ 00:26:18.979 18:13:07 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:26:18.979 18:13:07 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.kazH6qEYJQ 00:26:18.979 18:13:07 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:26:18.979 18:13:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:18.979 18:13:07 -- common/autotest_common.sh@10 -- # set +x 00:26:18.979 18:13:07 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:18.979 18:13:07 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:26:18.979 18:13:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:18.979 18:13:07 -- common/autotest_common.sh@10 -- # set +x 00:26:18.979 [2024-04-15 18:13:07.691141] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:18.979 [2024-04-15 18:13:07.691294] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:18.979 18:13:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:18.979 18:13:07 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.kazH6qEYJQ 00:26:18.979 18:13:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:18.979 18:13:07 -- common/autotest_common.sh@10 -- # set +x 00:26:18.979 [2024-04-15 18:13:07.699151] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:26:18.979 18:13:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:18.979 18:13:07 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.kazH6qEYJQ 00:26:18.979 18:13:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:18.979 18:13:07 -- common/autotest_common.sh@10 -- # set +x 00:26:18.979 [2024-04-15 18:13:07.707162] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:18.979 [2024-04-15 18:13:07.707227] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:26:18.979 nvme0n1 00:26:18.979 18:13:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:18.979 18:13:07 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:26:18.979 18:13:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:18.979 18:13:07 -- common/autotest_common.sh@10 -- # set +x 00:26:18.979 [ 00:26:18.979 { 00:26:18.979 "name": "nvme0n1", 00:26:18.979 "aliases": [ 00:26:18.979 "edf669df-67b3-4ff2-bc64-74efd69f053c" 00:26:18.979 ], 00:26:18.979 "product_name": "NVMe disk", 00:26:18.979 "block_size": 512, 00:26:18.979 "num_blocks": 2097152, 00:26:18.979 "uuid": "edf669df-67b3-4ff2-bc64-74efd69f053c", 00:26:18.979 "assigned_rate_limits": { 00:26:18.979 "rw_ios_per_sec": 0, 00:26:18.979 "rw_mbytes_per_sec": 0, 00:26:18.979 "r_mbytes_per_sec": 0, 00:26:18.979 "w_mbytes_per_sec": 0 00:26:18.979 }, 00:26:18.979 "claimed": false, 00:26:18.979 "zoned": false, 00:26:18.979 "supported_io_types": { 00:26:18.979 "read": true, 00:26:18.979 "write": true, 00:26:18.979 "unmap": false, 00:26:18.979 "write_zeroes": true, 00:26:18.979 "flush": true, 00:26:18.979 "reset": true, 00:26:18.979 "compare": true, 00:26:18.979 "compare_and_write": true, 00:26:18.979 "abort": true, 00:26:18.980 "nvme_admin": true, 00:26:18.980 "nvme_io": true 00:26:18.980 }, 00:26:18.980 "memory_domains": [ 00:26:18.980 { 00:26:18.980 "dma_device_id": "system", 00:26:18.980 "dma_device_type": 1 00:26:18.980 } 00:26:18.980 ], 00:26:18.980 "driver_specific": { 00:26:18.980 "nvme": [ 00:26:18.980 { 00:26:18.980 "trid": { 00:26:18.980 "trtype": "TCP", 00:26:18.980 "adrfam": "IPv4", 00:26:18.980 "traddr": "10.0.0.2", 
00:26:18.980 "trsvcid": "4421", 00:26:18.980 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:26:18.980 }, 00:26:18.980 "ctrlr_data": { 00:26:18.980 "cntlid": 3, 00:26:18.980 "vendor_id": "0x8086", 00:26:18.980 "model_number": "SPDK bdev Controller", 00:26:18.980 "serial_number": "00000000000000000000", 00:26:18.980 "firmware_revision": "24.05", 00:26:18.980 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:18.980 "oacs": { 00:26:18.980 "security": 0, 00:26:18.980 "format": 0, 00:26:18.980 "firmware": 0, 00:26:18.980 "ns_manage": 0 00:26:18.980 }, 00:26:18.980 "multi_ctrlr": true, 00:26:18.980 "ana_reporting": false 00:26:18.980 }, 00:26:18.980 "vs": { 00:26:18.980 "nvme_version": "1.3" 00:26:18.980 }, 00:26:18.980 "ns_data": { 00:26:18.980 "id": 1, 00:26:18.980 "can_share": true 00:26:18.980 } 00:26:18.980 } 00:26:18.980 ], 00:26:18.980 "mp_policy": "active_passive" 00:26:18.980 } 00:26:18.980 } 00:26:18.980 ] 00:26:18.980 18:13:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:18.980 18:13:07 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:18.980 18:13:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:18.980 18:13:07 -- common/autotest_common.sh@10 -- # set +x 00:26:18.980 18:13:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:18.980 18:13:07 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.kazH6qEYJQ 00:26:18.980 18:13:07 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:26:18.980 18:13:07 -- host/async_init.sh@78 -- # nvmftestfini 00:26:18.980 18:13:07 -- nvmf/common.sh@477 -- # nvmfcleanup 00:26:18.980 18:13:07 -- nvmf/common.sh@117 -- # sync 00:26:18.980 18:13:07 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:18.980 18:13:07 -- nvmf/common.sh@120 -- # set +e 00:26:18.980 18:13:07 -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:18.980 18:13:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:18.980 rmmod nvme_tcp 00:26:18.980 rmmod nvme_fabrics 00:26:18.980 rmmod nvme_keyring 00:26:18.980 18:13:07 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:18.980 18:13:07 -- nvmf/common.sh@124 -- # set -e 00:26:18.980 18:13:07 -- nvmf/common.sh@125 -- # return 0 00:26:18.980 18:13:07 -- nvmf/common.sh@478 -- # '[' -n 3402828 ']' 00:26:18.980 18:13:07 -- nvmf/common.sh@479 -- # killprocess 3402828 00:26:18.980 18:13:07 -- common/autotest_common.sh@936 -- # '[' -z 3402828 ']' 00:26:18.980 18:13:07 -- common/autotest_common.sh@940 -- # kill -0 3402828 00:26:18.980 18:13:07 -- common/autotest_common.sh@941 -- # uname 00:26:18.980 18:13:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:18.980 18:13:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3402828 00:26:18.980 18:13:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:18.980 18:13:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:18.980 18:13:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3402828' 00:26:18.980 killing process with pid 3402828 00:26:18.980 18:13:07 -- common/autotest_common.sh@955 -- # kill 3402828 00:26:18.980 [2024-04-15 18:13:07.906365] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:26:18.980 [2024-04-15 18:13:07.906412] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:26:18.980 18:13:07 -- common/autotest_common.sh@960 -- # wait 3402828 00:26:19.240 18:13:08 -- 
nvmf/common.sh@481 -- # '[' '' == iso ']' 00:26:19.240 18:13:08 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:26:19.240 18:13:08 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:26:19.240 18:13:08 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:19.240 18:13:08 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:19.240 18:13:08 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:19.240 18:13:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:19.240 18:13:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:21.777 18:13:10 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:21.777 00:26:21.777 real 0m6.036s 00:26:21.777 user 0m2.190s 00:26:21.777 sys 0m2.274s 00:26:21.777 18:13:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:21.777 18:13:10 -- common/autotest_common.sh@10 -- # set +x 00:26:21.777 ************************************ 00:26:21.777 END TEST nvmf_async_init 00:26:21.777 ************************************ 00:26:21.777 18:13:10 -- nvmf/nvmf.sh@92 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:26:21.777 18:13:10 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:26:21.777 18:13:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:21.777 18:13:10 -- common/autotest_common.sh@10 -- # set +x 00:26:21.777 ************************************ 00:26:21.777 START TEST dma 00:26:21.777 ************************************ 00:26:21.777 18:13:10 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:26:21.777 * Looking for test storage... 00:26:21.777 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:21.777 18:13:10 -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:21.777 18:13:10 -- nvmf/common.sh@7 -- # uname -s 00:26:21.777 18:13:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:21.777 18:13:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:21.777 18:13:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:21.777 18:13:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:21.777 18:13:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:21.777 18:13:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:21.777 18:13:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:21.777 18:13:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:21.777 18:13:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:21.777 18:13:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:21.777 18:13:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:26:21.777 18:13:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:26:21.777 18:13:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:21.777 18:13:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:21.777 18:13:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:21.777 18:13:10 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:21.777 18:13:10 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:21.777 18:13:10 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:21.777 18:13:10 -- scripts/common.sh@510 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:21.777 18:13:10 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:21.777 18:13:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:21.777 18:13:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:21.777 18:13:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:21.777 18:13:10 -- paths/export.sh@5 -- # export PATH 00:26:21.777 18:13:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:21.777 18:13:10 -- nvmf/common.sh@47 -- # : 0 00:26:21.777 18:13:10 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:21.777 18:13:10 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:21.777 18:13:10 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:21.777 18:13:10 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:21.777 18:13:10 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:21.777 18:13:10 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:21.777 18:13:10 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:21.777 18:13:10 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:21.777 18:13:10 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:26:21.777 18:13:10 -- host/dma.sh@13 -- # exit 0 00:26:21.777 00:26:21.777 real 0m0.069s 00:26:21.777 user 0m0.031s 00:26:21.777 sys 0m0.042s 00:26:21.777 18:13:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:21.777 18:13:10 -- common/autotest_common.sh@10 -- # set +x 00:26:21.777 ************************************ 00:26:21.777 END TEST dma 00:26:21.777 
************************************ 00:26:21.777 18:13:10 -- nvmf/nvmf.sh@95 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:26:21.777 18:13:10 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:26:21.777 18:13:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:21.777 18:13:10 -- common/autotest_common.sh@10 -- # set +x 00:26:21.777 ************************************ 00:26:21.777 START TEST nvmf_identify 00:26:21.777 ************************************ 00:26:21.777 18:13:10 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:26:21.777 * Looking for test storage... 00:26:21.777 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:21.777 18:13:10 -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:21.777 18:13:10 -- nvmf/common.sh@7 -- # uname -s 00:26:21.777 18:13:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:21.777 18:13:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:21.777 18:13:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:21.777 18:13:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:21.777 18:13:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:21.777 18:13:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:21.777 18:13:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:21.777 18:13:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:21.777 18:13:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:21.777 18:13:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:21.777 18:13:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:26:21.777 18:13:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:26:21.777 18:13:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:21.777 18:13:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:21.777 18:13:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:21.777 18:13:10 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:21.777 18:13:10 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:21.777 18:13:10 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:21.777 18:13:10 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:21.777 18:13:10 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:21.777 18:13:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:21.777 18:13:10 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:21.777 18:13:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:21.777 18:13:10 -- paths/export.sh@5 -- # export PATH 00:26:21.777 18:13:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:21.777 18:13:10 -- nvmf/common.sh@47 -- # : 0 00:26:21.777 18:13:10 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:21.777 18:13:10 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:21.777 18:13:10 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:21.777 18:13:10 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:21.777 18:13:10 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:21.777 18:13:10 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:21.777 18:13:10 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:21.777 18:13:10 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:21.777 18:13:10 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:21.777 18:13:10 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:21.777 18:13:10 -- host/identify.sh@14 -- # nvmftestinit 00:26:21.777 18:13:10 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:26:21.777 18:13:10 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:21.777 18:13:10 -- nvmf/common.sh@437 -- # prepare_net_devs 00:26:21.777 18:13:10 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:26:21.777 18:13:10 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:26:21.777 18:13:10 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:21.777 18:13:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:21.777 18:13:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:21.777 18:13:10 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:26:21.777 18:13:10 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:26:21.777 18:13:10 -- nvmf/common.sh@285 -- # xtrace_disable 00:26:21.777 18:13:10 -- common/autotest_common.sh@10 -- # set +x 00:26:24.310 18:13:13 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 
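[Editor's note] The common.sh stretch that follows builds allowlists of NVMe-oF-capable NIC device IDs and intersects them with a PCI ID cache assembled earlier in the run (not shown here). Condensed into an illustrative sketch, with the array contents taken from the trace below:

    e810=(0x1592 0x159b)      # Intel E810 variants
    x722=(0x37d2)             # Intel X722
    mlx=(0xa2dc 0x1021 0xa2d6 0x101d 0x1017 0x1019 0x1015 0x1013)   # Mellanox
    pci_devs=("${e810[@]}")   # e810 rig on TCP: only E810 IDs are kept
    # each matching function (here 0000:84:00.0/.1, device 0x159b) is accepted
    # and its kernel netdev recorded in net_devs for the namespace setup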
00:26:24.310 18:13:13 -- nvmf/common.sh@291 -- # pci_devs=() 00:26:24.310 18:13:13 -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:24.310 18:13:13 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:24.310 18:13:13 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:24.310 18:13:13 -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:24.310 18:13:13 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:24.310 18:13:13 -- nvmf/common.sh@295 -- # net_devs=() 00:26:24.310 18:13:13 -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:24.310 18:13:13 -- nvmf/common.sh@296 -- # e810=() 00:26:24.310 18:13:13 -- nvmf/common.sh@296 -- # local -ga e810 00:26:24.310 18:13:13 -- nvmf/common.sh@297 -- # x722=() 00:26:24.310 18:13:13 -- nvmf/common.sh@297 -- # local -ga x722 00:26:24.310 18:13:13 -- nvmf/common.sh@298 -- # mlx=() 00:26:24.310 18:13:13 -- nvmf/common.sh@298 -- # local -ga mlx 00:26:24.310 18:13:13 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:24.310 18:13:13 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:24.310 18:13:13 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:24.310 18:13:13 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:24.310 18:13:13 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:24.310 18:13:13 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:24.310 18:13:13 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:24.310 18:13:13 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:24.310 18:13:13 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:24.310 18:13:13 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:24.310 18:13:13 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:24.310 18:13:13 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:24.310 18:13:13 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:24.310 18:13:13 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:24.310 18:13:13 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:24.310 18:13:13 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:24.310 18:13:13 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:24.310 18:13:13 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:24.310 18:13:13 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:26:24.310 Found 0000:84:00.0 (0x8086 - 0x159b) 00:26:24.310 18:13:13 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:24.310 18:13:13 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:24.310 18:13:13 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:24.310 18:13:13 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:24.310 18:13:13 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:24.310 18:13:13 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:24.310 18:13:13 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:26:24.310 Found 0000:84:00.1 (0x8086 - 0x159b) 00:26:24.310 18:13:13 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:24.310 18:13:13 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:24.310 18:13:13 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:24.310 18:13:13 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:24.310 18:13:13 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:24.310 18:13:13 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 
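[Editor's note] With both E810 ports found, nvmf_tcp_init (traced next) splits them across network namespaces so one host can drive both ends of the connection. A condensed replay of the ip/iptables commands from the following trace (the initial addr flushes are elided):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port into the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side, host ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                    # cross-namespace sanity check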
00:26:24.310 18:13:13 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:24.310 18:13:13 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:24.310 18:13:13 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:24.310 18:13:13 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:24.310 18:13:13 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:26:24.310 18:13:13 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:24.310 18:13:13 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:26:24.311 Found net devices under 0000:84:00.0: cvl_0_0 00:26:24.311 18:13:13 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:26:24.311 18:13:13 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:24.311 18:13:13 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:24.311 18:13:13 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:26:24.311 18:13:13 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:24.311 18:13:13 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:26:24.311 Found net devices under 0000:84:00.1: cvl_0_1 00:26:24.311 18:13:13 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:26:24.311 18:13:13 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:26:24.311 18:13:13 -- nvmf/common.sh@403 -- # is_hw=yes 00:26:24.311 18:13:13 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:26:24.311 18:13:13 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:26:24.311 18:13:13 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:26:24.311 18:13:13 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:24.311 18:13:13 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:24.311 18:13:13 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:24.311 18:13:13 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:24.311 18:13:13 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:24.311 18:13:13 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:24.311 18:13:13 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:24.311 18:13:13 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:24.311 18:13:13 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:24.311 18:13:13 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:24.311 18:13:13 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:24.311 18:13:13 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:24.311 18:13:13 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:24.311 18:13:13 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:24.311 18:13:13 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:24.311 18:13:13 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:24.311 18:13:13 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:24.311 18:13:13 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:24.311 18:13:13 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:24.311 18:13:13 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:24.311 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:24.311 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.279 ms 00:26:24.311 00:26:24.311 --- 10.0.0.2 ping statistics --- 00:26:24.311 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:24.311 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:26:24.311 18:13:13 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:24.311 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:24.311 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:26:24.311 00:26:24.311 --- 10.0.0.1 ping statistics --- 00:26:24.311 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:24.311 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:26:24.311 18:13:13 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:24.311 18:13:13 -- nvmf/common.sh@411 -- # return 0 00:26:24.311 18:13:13 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:26:24.311 18:13:13 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:24.311 18:13:13 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:26:24.311 18:13:13 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:26:24.311 18:13:13 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:24.311 18:13:13 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:26:24.311 18:13:13 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:26:24.311 18:13:13 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:26:24.311 18:13:13 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:24.311 18:13:13 -- common/autotest_common.sh@10 -- # set +x 00:26:24.311 18:13:13 -- host/identify.sh@19 -- # nvmfpid=3404990 00:26:24.311 18:13:13 -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:24.311 18:13:13 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:24.311 18:13:13 -- host/identify.sh@23 -- # waitforlisten 3404990 00:26:24.311 18:13:13 -- common/autotest_common.sh@817 -- # '[' -z 3404990 ']' 00:26:24.311 18:13:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:24.311 18:13:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:24.311 18:13:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:24.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:24.311 18:13:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:24.311 18:13:13 -- common/autotest_common.sh@10 -- # set +x 00:26:24.569 [2024-04-15 18:13:13.292332] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:26:24.569 [2024-04-15 18:13:13.292425] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:24.569 EAL: No free 2048 kB hugepages reported on node 1 00:26:24.569 [2024-04-15 18:13:13.370230] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:24.569 [2024-04-15 18:13:13.466299] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:24.569 [2024-04-15 18:13:13.466362] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:26:24.569 [2024-04-15 18:13:13.466379] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:24.569 [2024-04-15 18:13:13.466394] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:24.569 [2024-04-15 18:13:13.466406] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:24.569 [2024-04-15 18:13:13.466500] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:24.569 [2024-04-15 18:13:13.466549] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:24.569 [2024-04-15 18:13:13.466570] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:24.569 [2024-04-15 18:13:13.466573] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:24.828 18:13:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:24.828 18:13:13 -- common/autotest_common.sh@850 -- # return 0 00:26:24.828 18:13:13 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:24.828 18:13:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:24.828 18:13:13 -- common/autotest_common.sh@10 -- # set +x 00:26:24.828 [2024-04-15 18:13:13.595992] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:24.828 18:13:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:24.828 18:13:13 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:26:24.828 18:13:13 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:24.828 18:13:13 -- common/autotest_common.sh@10 -- # set +x 00:26:24.828 18:13:13 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:24.828 18:13:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:24.828 18:13:13 -- common/autotest_common.sh@10 -- # set +x 00:26:24.828 Malloc0 00:26:24.828 18:13:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:24.828 18:13:13 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:24.828 18:13:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:24.828 18:13:13 -- common/autotest_common.sh@10 -- # set +x 00:26:24.828 18:13:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:24.828 18:13:13 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:26:24.828 18:13:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:24.828 18:13:13 -- common/autotest_common.sh@10 -- # set +x 00:26:24.828 18:13:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:24.828 18:13:13 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:24.828 18:13:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:24.828 18:13:13 -- common/autotest_common.sh@10 -- # set +x 00:26:24.828 [2024-04-15 18:13:13.678373] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:24.828 18:13:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:24.828 18:13:13 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:24.828 18:13:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:24.828 18:13:13 -- common/autotest_common.sh@10 -- # set +x 00:26:24.828 18:13:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:24.828 18:13:13 -- host/identify.sh@37 -- # 
rpc_cmd nvmf_get_subsystems 00:26:24.828 18:13:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:24.828 18:13:13 -- common/autotest_common.sh@10 -- # set +x 00:26:24.828 [2024-04-15 18:13:13.694110] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:26:24.828 [ 00:26:24.828 { 00:26:24.828 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:24.828 "subtype": "Discovery", 00:26:24.828 "listen_addresses": [ 00:26:24.828 { 00:26:24.828 "transport": "TCP", 00:26:24.828 "trtype": "TCP", 00:26:24.828 "adrfam": "IPv4", 00:26:24.828 "traddr": "10.0.0.2", 00:26:24.828 "trsvcid": "4420" 00:26:24.828 } 00:26:24.828 ], 00:26:24.828 "allow_any_host": true, 00:26:24.828 "hosts": [] 00:26:24.828 }, 00:26:24.828 { 00:26:24.828 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:24.828 "subtype": "NVMe", 00:26:24.828 "listen_addresses": [ 00:26:24.828 { 00:26:24.828 "transport": "TCP", 00:26:24.828 "trtype": "TCP", 00:26:24.828 "adrfam": "IPv4", 00:26:24.828 "traddr": "10.0.0.2", 00:26:24.828 "trsvcid": "4420" 00:26:24.828 } 00:26:24.828 ], 00:26:24.828 "allow_any_host": true, 00:26:24.828 "hosts": [], 00:26:24.828 "serial_number": "SPDK00000000000001", 00:26:24.828 "model_number": "SPDK bdev Controller", 00:26:24.828 "max_namespaces": 32, 00:26:24.828 "min_cntlid": 1, 00:26:24.828 "max_cntlid": 65519, 00:26:24.828 "namespaces": [ 00:26:24.828 { 00:26:24.828 "nsid": 1, 00:26:24.828 "bdev_name": "Malloc0", 00:26:24.828 "name": "Malloc0", 00:26:24.828 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:26:24.828 "eui64": "ABCDEF0123456789", 00:26:24.828 "uuid": "4d2eed9e-9523-4c1c-a653-26dfeb4faf4c" 00:26:24.828 } 00:26:24.828 ] 00:26:24.828 } 00:26:24.828 ] 00:26:24.828 18:13:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:24.828 18:13:13 -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:26:24.828 [2024-04-15 18:13:13.718275] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
00:26:24.828 [2024-04-15 18:13:13.718324] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3405137 ] 00:26:24.828 EAL: No free 2048 kB hugepages reported on node 1 00:26:24.828 [2024-04-15 18:13:13.755809] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:26:24.828 [2024-04-15 18:13:13.755881] nvme_tcp.c:2326:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:26:24.828 [2024-04-15 18:13:13.755893] nvme_tcp.c:2330:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:26:24.828 [2024-04-15 18:13:13.755912] nvme_tcp.c:2348:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:26:24.828 [2024-04-15 18:13:13.755929] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:26:24.828 [2024-04-15 18:13:13.759124] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:26:24.828 [2024-04-15 18:13:13.759191] nvme_tcp.c:1543:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x214b1a0 0 00:26:24.828 [2024-04-15 18:13:13.766074] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:26:24.828 [2024-04-15 18:13:13.766109] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:26:24.828 [2024-04-15 18:13:13.766125] nvme_tcp.c:1589:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:26:24.828 [2024-04-15 18:13:13.766133] nvme_tcp.c:1590:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:26:24.828 [2024-04-15 18:13:13.766192] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:24.828 [2024-04-15 18:13:13.766207] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:24.828 [2024-04-15 18:13:13.766217] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x214b1a0) 00:26:24.828 [2024-04-15 18:13:13.766239] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:26:24.828 [2024-04-15 18:13:13.766270] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a4270, cid 0, qid 0 00:26:24.828 [2024-04-15 18:13:13.773073] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:24.828 [2024-04-15 18:13:13.773102] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:24.828 [2024-04-15 18:13:13.773111] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:24.828 [2024-04-15 18:13:13.773121] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21a4270) on tqpair=0x214b1a0 00:26:24.828 [2024-04-15 18:13:13.773142] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:26:24.828 [2024-04-15 18:13:13.773155] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:26:24.828 [2024-04-15 18:13:13.773166] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:26:24.828 [2024-04-15 18:13:13.773191] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:24.828 [2024-04-15 18:13:13.773202] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:26:24.828 [2024-04-15 18:13:13.773210] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x214b1a0) 00:26:24.828 [2024-04-15 18:13:13.773223] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.828 [2024-04-15 18:13:13.773250] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a4270, cid 0, qid 0 00:26:24.828 [2024-04-15 18:13:13.773418] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:24.828 [2024-04-15 18:13:13.773436] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:24.828 [2024-04-15 18:13:13.773444] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:24.828 [2024-04-15 18:13:13.773453] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21a4270) on tqpair=0x214b1a0 00:26:24.828 [2024-04-15 18:13:13.773465] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:26:24.828 [2024-04-15 18:13:13.773481] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:26:24.828 [2024-04-15 18:13:13.773496] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:24.828 [2024-04-15 18:13:13.773505] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:24.828 [2024-04-15 18:13:13.773513] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x214b1a0) 00:26:24.828 [2024-04-15 18:13:13.773526] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.828 [2024-04-15 18:13:13.773550] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a4270, cid 0, qid 0 00:26:24.828 [2024-04-15 18:13:13.773720] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:24.828 [2024-04-15 18:13:13.773734] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:24.828 [2024-04-15 18:13:13.773742] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:24.828 [2024-04-15 18:13:13.773750] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21a4270) on tqpair=0x214b1a0 00:26:24.828 [2024-04-15 18:13:13.773761] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:26:24.828 [2024-04-15 18:13:13.773783] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:26:24.828 [2024-04-15 18:13:13.773798] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:24.828 [2024-04-15 18:13:13.773807] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:24.828 [2024-04-15 18:13:13.773815] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x214b1a0) 00:26:24.829 [2024-04-15 18:13:13.773827] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.829 [2024-04-15 18:13:13.773851] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a4270, cid 0, qid 0 00:26:24.829 [2024-04-15 18:13:13.774020] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:24.829 [2024-04-15 
18:13:13.774034] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:24.829 [2024-04-15 18:13:13.774042] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:24.829 [2024-04-15 18:13:13.774050] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21a4270) on tqpair=0x214b1a0 00:26:24.829 [2024-04-15 18:13:13.774071] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:26:24.829 [2024-04-15 18:13:13.774102] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:24.829 [2024-04-15 18:13:13.774113] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:24.829 [2024-04-15 18:13:13.774120] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x214b1a0) 00:26:24.829 [2024-04-15 18:13:13.774133] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.829 [2024-04-15 18:13:13.774157] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a4270, cid 0, qid 0 00:26:24.829 [2024-04-15 18:13:13.774330] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:24.829 [2024-04-15 18:13:13.774344] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:24.829 [2024-04-15 18:13:13.774352] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:24.829 [2024-04-15 18:13:13.774360] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21a4270) on tqpair=0x214b1a0 00:26:24.829 [2024-04-15 18:13:13.774372] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:26:24.829 [2024-04-15 18:13:13.774383] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:26:24.829 [2024-04-15 18:13:13.774398] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:26:24.829 [2024-04-15 18:13:13.774510] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:26:24.829 [2024-04-15 18:13:13.774520] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:26:24.829 [2024-04-15 18:13:13.774538] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:24.829 [2024-04-15 18:13:13.774546] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:24.829 [2024-04-15 18:13:13.774554] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x214b1a0) 00:26:24.829 [2024-04-15 18:13:13.774566] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.829 [2024-04-15 18:13:13.774590] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a4270, cid 0, qid 0 00:26:24.829 [2024-04-15 18:13:13.774767] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:24.829 [2024-04-15 18:13:13.774781] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:24.829 [2024-04-15 18:13:13.774789] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:26:24.829 [2024-04-15 18:13:13.774801] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21a4270) on tqpair=0x214b1a0 00:26:24.829 [2024-04-15 18:13:13.774814] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:26:24.829 [2024-04-15 18:13:13.774833] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:24.829 [2024-04-15 18:13:13.774843] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:24.829 [2024-04-15 18:13:13.774851] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x214b1a0) 00:26:24.829 [2024-04-15 18:13:13.774863] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.829 [2024-04-15 18:13:13.774887] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a4270, cid 0, qid 0 00:26:24.829 [2024-04-15 18:13:13.775052] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:24.829 [2024-04-15 18:13:13.775075] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:24.829 [2024-04-15 18:13:13.775084] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:24.829 [2024-04-15 18:13:13.775092] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21a4270) on tqpair=0x214b1a0 00:26:24.829 [2024-04-15 18:13:13.775103] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:26:24.829 [2024-04-15 18:13:13.775113] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:26:24.829 [2024-04-15 18:13:13.775129] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:26:24.829 [2024-04-15 18:13:13.775151] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:26:24.829 [2024-04-15 18:13:13.775172] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:24.829 [2024-04-15 18:13:13.775182] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x214b1a0) 00:26:24.829 [2024-04-15 18:13:13.775194] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.829 [2024-04-15 18:13:13.775218] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a4270, cid 0, qid 0 00:26:24.829 [2024-04-15 18:13:13.775445] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:24.829 [2024-04-15 18:13:13.775462] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:24.829 [2024-04-15 18:13:13.775471] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:24.829 [2024-04-15 18:13:13.775480] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x214b1a0): datao=0, datal=4096, cccid=0 00:26:24.829 [2024-04-15 18:13:13.775489] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21a4270) on tqpair(0x214b1a0): expected_datao=0, payload_size=4096 00:26:24.829 [2024-04-15 18:13:13.775499] nvme_tcp.c: 
766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:24.829 [2024-04-15 18:13:13.775520] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:24.829 [2024-04-15 18:13:13.775532] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:25.091 [2024-04-15 18:13:13.816221] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.091 [2024-04-15 18:13:13.816243] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.091 [2024-04-15 18:13:13.816252] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.091 [2024-04-15 18:13:13.816260] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21a4270) on tqpair=0x214b1a0 00:26:25.091 [2024-04-15 18:13:13.816277] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:26:25.091 [2024-04-15 18:13:13.816288] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:26:25.091 [2024-04-15 18:13:13.816303] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:26:25.091 [2024-04-15 18:13:13.816315] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:26:25.091 [2024-04-15 18:13:13.816325] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:26:25.091 [2024-04-15 18:13:13.816335] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:26:25.091 [2024-04-15 18:13:13.816352] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:26:25.091 [2024-04-15 18:13:13.816367] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.091 [2024-04-15 18:13:13.816376] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.091 [2024-04-15 18:13:13.816384] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x214b1a0) 00:26:25.091 [2024-04-15 18:13:13.816398] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:26:25.091 [2024-04-15 18:13:13.816424] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a4270, cid 0, qid 0 00:26:25.091 [2024-04-15 18:13:13.816568] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.091 [2024-04-15 18:13:13.816586] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.091 [2024-04-15 18:13:13.816594] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.091 [2024-04-15 18:13:13.816602] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21a4270) on tqpair=0x214b1a0 00:26:25.091 [2024-04-15 18:13:13.816619] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.091 [2024-04-15 18:13:13.816627] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.091 [2024-04-15 18:13:13.816635] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x214b1a0) 00:26:25.091 [2024-04-15 18:13:13.816647] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 
cdw11:00000000 00:26:25.091 [2024-04-15 18:13:13.816659] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.091 [2024-04-15 18:13:13.816667] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.091 [2024-04-15 18:13:13.816675] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x214b1a0) 00:26:25.091 [2024-04-15 18:13:13.816685] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:25.091 [2024-04-15 18:13:13.816696] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.091 [2024-04-15 18:13:13.816704] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.091 [2024-04-15 18:13:13.816712] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x214b1a0) 00:26:25.091 [2024-04-15 18:13:13.816722] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:25.091 [2024-04-15 18:13:13.816733] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.091 [2024-04-15 18:13:13.816741] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.091 [2024-04-15 18:13:13.816749] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x214b1a0) 00:26:25.091 [2024-04-15 18:13:13.816759] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:25.091 [2024-04-15 18:13:13.816770] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:26:25.091 [2024-04-15 18:13:13.816793] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:26:25.091 [2024-04-15 18:13:13.816811] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.091 [2024-04-15 18:13:13.816821] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x214b1a0) 00:26:25.091 [2024-04-15 18:13:13.816833] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.091 [2024-04-15 18:13:13.816871] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a4270, cid 0, qid 0 00:26:25.091 [2024-04-15 18:13:13.816884] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a43d0, cid 1, qid 0 00:26:25.091 [2024-04-15 18:13:13.816893] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a4530, cid 2, qid 0 00:26:25.091 [2024-04-15 18:13:13.816902] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a4690, cid 3, qid 0 00:26:25.091 [2024-04-15 18:13:13.816911] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a47f0, cid 4, qid 0 00:26:25.091 [2024-04-15 18:13:13.817122] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.091 [2024-04-15 18:13:13.817138] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.091 [2024-04-15 18:13:13.817146] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.091 [2024-04-15 18:13:13.817155] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21a47f0) on tqpair=0x214b1a0 
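[Editor's note] The debug trace around this point is the discovery controller's init state machine in full: fabric CONNECT, VS/CAP property reads, CC.EN=1, polling until CSTS.RDY=1, IDENTIFY controller, AER and keep-alive setup, then the discovery log pages (the GET LOG PAGE commands with cdw10 0x00ff0070 / 0x02ff0070). To replay it outside the harness, the invocation from host/identify.sh@39 above reduces to this (build path relative to an SPDK checkout, addresses as configured on this rig):

    ./build/bin/spdk_nvme_identify \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
        -L all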
00:26:25.091 [2024-04-15 18:13:13.817167] nvme_ctrlr.c:2902:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:26:25.091 [2024-04-15 18:13:13.817178] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:26:25.091 [2024-04-15 18:13:13.817198] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.091 [2024-04-15 18:13:13.817209] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x214b1a0) 00:26:25.091 [2024-04-15 18:13:13.817221] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.091 [2024-04-15 18:13:13.817245] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a47f0, cid 4, qid 0 00:26:25.091 [2024-04-15 18:13:13.817439] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:25.091 [2024-04-15 18:13:13.817454] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:25.091 [2024-04-15 18:13:13.817462] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:25.091 [2024-04-15 18:13:13.817469] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x214b1a0): datao=0, datal=4096, cccid=4 00:26:25.091 [2024-04-15 18:13:13.817479] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21a47f0) on tqpair(0x214b1a0): expected_datao=0, payload_size=4096 00:26:25.091 [2024-04-15 18:13:13.817488] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.091 [2024-04-15 18:13:13.817500] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:25.091 [2024-04-15 18:13:13.817509] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:25.091 [2024-04-15 18:13:13.817531] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.091 [2024-04-15 18:13:13.817543] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.091 [2024-04-15 18:13:13.817551] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.091 [2024-04-15 18:13:13.817559] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21a47f0) on tqpair=0x214b1a0 00:26:25.091 [2024-04-15 18:13:13.817582] nvme_ctrlr.c:4036:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:26:25.091 [2024-04-15 18:13:13.817618] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.091 [2024-04-15 18:13:13.817628] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x214b1a0) 00:26:25.091 [2024-04-15 18:13:13.817641] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.091 [2024-04-15 18:13:13.817659] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.091 [2024-04-15 18:13:13.817668] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.091 [2024-04-15 18:13:13.817676] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x214b1a0) 00:26:25.091 [2024-04-15 18:13:13.817687] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:26:25.091 [2024-04-15 18:13:13.817718] nvme_tcp.c: 
923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a47f0, cid 4, qid 0 00:26:25.091 [2024-04-15 18:13:13.817732] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a4950, cid 5, qid 0 00:26:25.091 [2024-04-15 18:13:13.817944] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:25.091 [2024-04-15 18:13:13.817961] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:25.091 [2024-04-15 18:13:13.817969] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:25.091 [2024-04-15 18:13:13.817977] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x214b1a0): datao=0, datal=1024, cccid=4 00:26:25.092 [2024-04-15 18:13:13.817986] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21a47f0) on tqpair(0x214b1a0): expected_datao=0, payload_size=1024 00:26:25.092 [2024-04-15 18:13:13.817995] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.092 [2024-04-15 18:13:13.818006] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:25.092 [2024-04-15 18:13:13.818015] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:25.092 [2024-04-15 18:13:13.818025] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.092 [2024-04-15 18:13:13.818036] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.092 [2024-04-15 18:13:13.818044] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.092 [2024-04-15 18:13:13.818052] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21a4950) on tqpair=0x214b1a0 00:26:25.092 [2024-04-15 18:13:13.858235] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.092 [2024-04-15 18:13:13.858254] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.092 [2024-04-15 18:13:13.858263] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.092 [2024-04-15 18:13:13.858271] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21a47f0) on tqpair=0x214b1a0 00:26:25.092 [2024-04-15 18:13:13.858298] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.092 [2024-04-15 18:13:13.858310] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x214b1a0) 00:26:25.092 [2024-04-15 18:13:13.858323] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.092 [2024-04-15 18:13:13.858356] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a47f0, cid 4, qid 0 00:26:25.092 [2024-04-15 18:13:13.858523] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:25.092 [2024-04-15 18:13:13.858537] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:25.092 [2024-04-15 18:13:13.858545] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:25.092 [2024-04-15 18:13:13.858553] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x214b1a0): datao=0, datal=3072, cccid=4 00:26:25.092 [2024-04-15 18:13:13.858562] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21a47f0) on tqpair(0x214b1a0): expected_datao=0, payload_size=3072 00:26:25.092 [2024-04-15 18:13:13.858571] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.092 [2024-04-15 18:13:13.858583] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
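
The C2H data PDUs here (datal=4096, then 1024 and 3072, then 8, all cccid=4) are the discovery log page arriving in pieces: the host issues GET LOG PAGE for page 0x70, learns the record count, fetches the remainder, and finally re-reads just the first 8 bytes (the generation counter) to confirm the log did not change mid-read; the dump that follows reports Generation Counter 2 and Number of Records 2. A hedged sketch of the header fetch through the public API (g_log_done, get_log_cb and read_discovery_header are illustrative names, not part of SPDK):

    #include "spdk/nvme.h"
    #include "spdk/nvmf_spec.h"

    static volatile bool g_log_done;

    static void
    get_log_cb(void *ctx, const struct spdk_nvme_cpl *cpl)
    {
            g_log_done = true;
    }

    /* Read the discovery log page header (page 0x70), as in the
     * GET LOG PAGE (02) capsules above; hdr->genctr and hdr->numrec
     * then say how many entries to fetch (here 2: the discovery
     * subsystem itself plus nqn.2016-06.io.spdk:cnode1). */
    static void
    read_discovery_header(struct spdk_nvme_ctrlr *ctrlr,
                          struct spdk_nvmf_discovery_log_page *hdr)
    {
            g_log_done = false;
            spdk_nvme_ctrlr_cmd_get_log_page(ctrlr,
                SPDK_NVME_LOG_DISCOVERY, SPDK_NVME_GLOBAL_NS_TAG,
                hdr, sizeof(*hdr), 0, get_log_cb, NULL);
            while (!g_log_done) {
                    spdk_nvme_ctrlr_process_admin_completions(ctrlr);
            }
    }
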
00:26:25.092 [2024-04-15 18:13:13.858592] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:25.092 [2024-04-15 18:13:13.858615] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.092 [2024-04-15 18:13:13.858627] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.092 [2024-04-15 18:13:13.858635] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.092 [2024-04-15 18:13:13.858648] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21a47f0) on tqpair=0x214b1a0 00:26:25.092 [2024-04-15 18:13:13.858667] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.092 [2024-04-15 18:13:13.858676] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x214b1a0) 00:26:25.092 [2024-04-15 18:13:13.858689] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.092 [2024-04-15 18:13:13.858720] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a47f0, cid 4, qid 0 00:26:25.092 [2024-04-15 18:13:13.858877] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:25.092 [2024-04-15 18:13:13.858894] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:25.092 [2024-04-15 18:13:13.858902] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:25.092 [2024-04-15 18:13:13.858910] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x214b1a0): datao=0, datal=8, cccid=4 00:26:25.092 [2024-04-15 18:13:13.858919] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21a47f0) on tqpair(0x214b1a0): expected_datao=0, payload_size=8 00:26:25.092 [2024-04-15 18:13:13.858928] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.092 [2024-04-15 18:13:13.858940] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:25.092 [2024-04-15 18:13:13.858948] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:25.092 [2024-04-15 18:13:13.903087] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.092 [2024-04-15 18:13:13.903109] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.092 [2024-04-15 18:13:13.903117] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.092 [2024-04-15 18:13:13.903126] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21a47f0) on tqpair=0x214b1a0 00:26:25.092 ===================================================== 00:26:25.092 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:26:25.092 ===================================================== 00:26:25.092 Controller Capabilities/Features 00:26:25.092 ================================ 00:26:25.092 Vendor ID: 0000 00:26:25.092 Subsystem Vendor ID: 0000 00:26:25.092 Serial Number: .................... 00:26:25.092 Model Number: ........................................ 
00:26:25.092 Firmware Version: 24.05 00:26:25.092 Recommended Arb Burst: 0 00:26:25.092 IEEE OUI Identifier: 00 00 00 00:26:25.092 Multi-path I/O 00:26:25.092 May have multiple subsystem ports: No 00:26:25.092 May have multiple controllers: No 00:26:25.092 Associated with SR-IOV VF: No 00:26:25.092 Max Data Transfer Size: 131072 00:26:25.092 Max Number of Namespaces: 0 00:26:25.092 Max Number of I/O Queues: 1024 00:26:25.092 NVMe Specification Version (VS): 1.3 00:26:25.092 NVMe Specification Version (Identify): 1.3 00:26:25.092 Maximum Queue Entries: 128 00:26:25.092 Contiguous Queues Required: Yes 00:26:25.092 Arbitration Mechanisms Supported 00:26:25.092 Weighted Round Robin: Not Supported 00:26:25.092 Vendor Specific: Not Supported 00:26:25.092 Reset Timeout: 15000 ms 00:26:25.092 Doorbell Stride: 4 bytes 00:26:25.092 NVM Subsystem Reset: Not Supported 00:26:25.092 Command Sets Supported 00:26:25.092 NVM Command Set: Supported 00:26:25.092 Boot Partition: Not Supported 00:26:25.092 Memory Page Size Minimum: 4096 bytes 00:26:25.092 Memory Page Size Maximum: 4096 bytes 00:26:25.092 Persistent Memory Region: Not Supported 00:26:25.092 Optional Asynchronous Events Supported 00:26:25.092 Namespace Attribute Notices: Not Supported 00:26:25.092 Firmware Activation Notices: Not Supported 00:26:25.092 ANA Change Notices: Not Supported 00:26:25.092 PLE Aggregate Log Change Notices: Not Supported 00:26:25.092 LBA Status Info Alert Notices: Not Supported 00:26:25.092 EGE Aggregate Log Change Notices: Not Supported 00:26:25.092 Normal NVM Subsystem Shutdown event: Not Supported 00:26:25.092 Zone Descriptor Change Notices: Not Supported 00:26:25.092 Discovery Log Change Notices: Supported 00:26:25.092 Controller Attributes 00:26:25.092 128-bit Host Identifier: Not Supported 00:26:25.092 Non-Operational Permissive Mode: Not Supported 00:26:25.092 NVM Sets: Not Supported 00:26:25.092 Read Recovery Levels: Not Supported 00:26:25.092 Endurance Groups: Not Supported 00:26:25.092 Predictable Latency Mode: Not Supported 00:26:25.092 Traffic Based Keep ALive: Not Supported 00:26:25.092 Namespace Granularity: Not Supported 00:26:25.092 SQ Associations: Not Supported 00:26:25.092 UUID List: Not Supported 00:26:25.092 Multi-Domain Subsystem: Not Supported 00:26:25.092 Fixed Capacity Management: Not Supported 00:26:25.092 Variable Capacity Management: Not Supported 00:26:25.092 Delete Endurance Group: Not Supported 00:26:25.092 Delete NVM Set: Not Supported 00:26:25.092 Extended LBA Formats Supported: Not Supported 00:26:25.092 Flexible Data Placement Supported: Not Supported 00:26:25.092 00:26:25.092 Controller Memory Buffer Support 00:26:25.092 ================================ 00:26:25.092 Supported: No 00:26:25.092 00:26:25.092 Persistent Memory Region Support 00:26:25.092 ================================ 00:26:25.092 Supported: No 00:26:25.092 00:26:25.092 Admin Command Set Attributes 00:26:25.092 ============================ 00:26:25.092 Security Send/Receive: Not Supported 00:26:25.092 Format NVM: Not Supported 00:26:25.092 Firmware Activate/Download: Not Supported 00:26:25.092 Namespace Management: Not Supported 00:26:25.092 Device Self-Test: Not Supported 00:26:25.092 Directives: Not Supported 00:26:25.092 NVMe-MI: Not Supported 00:26:25.092 Virtualization Management: Not Supported 00:26:25.092 Doorbell Buffer Config: Not Supported 00:26:25.092 Get LBA Status Capability: Not Supported 00:26:25.092 Command & Feature Lockdown Capability: Not Supported 00:26:25.092 Abort Command Limit: 1 00:26:25.092 Async 
Event Request Limit: 4 00:26:25.092 Number of Firmware Slots: N/A 00:26:25.092 Firmware Slot 1 Read-Only: N/A 00:26:25.092 Firmware Activation Without Reset: N/A 00:26:25.092 Multiple Update Detection Support: N/A 00:26:25.092 Firmware Update Granularity: No Information Provided 00:26:25.092 Per-Namespace SMART Log: No 00:26:25.092 Asymmetric Namespace Access Log Page: Not Supported 00:26:25.092 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:26:25.092 Command Effects Log Page: Not Supported 00:26:25.092 Get Log Page Extended Data: Supported 00:26:25.092 Telemetry Log Pages: Not Supported 00:26:25.092 Persistent Event Log Pages: Not Supported 00:26:25.092 Supported Log Pages Log Page: May Support 00:26:25.092 Commands Supported & Effects Log Page: Not Supported 00:26:25.092 Feature Identifiers & Effects Log Page:May Support 00:26:25.092 NVMe-MI Commands & Effects Log Page: May Support 00:26:25.092 Data Area 4 for Telemetry Log: Not Supported 00:26:25.092 Error Log Page Entries Supported: 128 00:26:25.092 Keep Alive: Not Supported 00:26:25.092 00:26:25.092 NVM Command Set Attributes 00:26:25.092 ========================== 00:26:25.092 Submission Queue Entry Size 00:26:25.092 Max: 1 00:26:25.092 Min: 1 00:26:25.092 Completion Queue Entry Size 00:26:25.092 Max: 1 00:26:25.092 Min: 1 00:26:25.092 Number of Namespaces: 0 00:26:25.092 Compare Command: Not Supported 00:26:25.092 Write Uncorrectable Command: Not Supported 00:26:25.093 Dataset Management Command: Not Supported 00:26:25.093 Write Zeroes Command: Not Supported 00:26:25.093 Set Features Save Field: Not Supported 00:26:25.093 Reservations: Not Supported 00:26:25.093 Timestamp: Not Supported 00:26:25.093 Copy: Not Supported 00:26:25.093 Volatile Write Cache: Not Present 00:26:25.093 Atomic Write Unit (Normal): 1 00:26:25.093 Atomic Write Unit (PFail): 1 00:26:25.093 Atomic Compare & Write Unit: 1 00:26:25.093 Fused Compare & Write: Supported 00:26:25.093 Scatter-Gather List 00:26:25.093 SGL Command Set: Supported 00:26:25.093 SGL Keyed: Supported 00:26:25.093 SGL Bit Bucket Descriptor: Not Supported 00:26:25.093 SGL Metadata Pointer: Not Supported 00:26:25.093 Oversized SGL: Not Supported 00:26:25.093 SGL Metadata Address: Not Supported 00:26:25.093 SGL Offset: Supported 00:26:25.093 Transport SGL Data Block: Not Supported 00:26:25.093 Replay Protected Memory Block: Not Supported 00:26:25.093 00:26:25.093 Firmware Slot Information 00:26:25.093 ========================= 00:26:25.093 Active slot: 0 00:26:25.093 00:26:25.093 00:26:25.093 Error Log 00:26:25.093 ========= 00:26:25.093 00:26:25.093 Active Namespaces 00:26:25.093 ================= 00:26:25.093 Discovery Log Page 00:26:25.093 ================== 00:26:25.093 Generation Counter: 2 00:26:25.093 Number of Records: 2 00:26:25.093 Record Format: 0 00:26:25.093 00:26:25.093 Discovery Log Entry 0 00:26:25.093 ---------------------- 00:26:25.093 Transport Type: 3 (TCP) 00:26:25.093 Address Family: 1 (IPv4) 00:26:25.093 Subsystem Type: 3 (Current Discovery Subsystem) 00:26:25.093 Entry Flags: 00:26:25.093 Duplicate Returned Information: 1 00:26:25.093 Explicit Persistent Connection Support for Discovery: 1 00:26:25.093 Transport Requirements: 00:26:25.093 Secure Channel: Not Required 00:26:25.093 Port ID: 0 (0x0000) 00:26:25.093 Controller ID: 65535 (0xffff) 00:26:25.093 Admin Max SQ Size: 128 00:26:25.093 Transport Service Identifier: 4420 00:26:25.093 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:26:25.093 Transport Address: 10.0.0.2 00:26:25.093 
Discovery Log Entry 1 00:26:25.093 ---------------------- 00:26:25.093 Transport Type: 3 (TCP) 00:26:25.093 Address Family: 1 (IPv4) 00:26:25.093 Subsystem Type: 2 (NVM Subsystem) 00:26:25.093 Entry Flags: 00:26:25.093 Duplicate Returned Information: 0 00:26:25.093 Explicit Persistent Connection Support for Discovery: 0 00:26:25.093 Transport Requirements: 00:26:25.093 Secure Channel: Not Required 00:26:25.093 Port ID: 0 (0x0000) 00:26:25.093 Controller ID: 65535 (0xffff) 00:26:25.093 Admin Max SQ Size: 128 00:26:25.093 Transport Service Identifier: 4420 00:26:25.093 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:26:25.093 Transport Address: 10.0.0.2 [2024-04-15 18:13:13.903260] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:26:25.093 [2024-04-15 18:13:13.903290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.093 [2024-04-15 18:13:13.903304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.093 [2024-04-15 18:13:13.903316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.093 [2024-04-15 18:13:13.903327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.093 [2024-04-15 18:13:13.903343] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.093 [2024-04-15 18:13:13.903353] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.093 [2024-04-15 18:13:13.903361] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x214b1a0) 00:26:25.093 [2024-04-15 18:13:13.903374] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.093 [2024-04-15 18:13:13.903403] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a4690, cid 3, qid 0 00:26:25.093 [2024-04-15 18:13:13.903594] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.093 [2024-04-15 18:13:13.903612] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.093 [2024-04-15 18:13:13.903620] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.093 [2024-04-15 18:13:13.903628] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21a4690) on tqpair=0x214b1a0 00:26:25.093 [2024-04-15 18:13:13.903644] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.093 [2024-04-15 18:13:13.903653] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.093 [2024-04-15 18:13:13.903661] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x214b1a0) 00:26:25.093 [2024-04-15 18:13:13.903674] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.093 [2024-04-15 18:13:13.903709] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a4690, cid 3, qid 0 00:26:25.093 [2024-04-15 18:13:13.903893] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.093 [2024-04-15 18:13:13.903907] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.093 [2024-04-15 18:13:13.903915] 
nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.093 [2024-04-15 18:13:13.903923] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21a4690) on tqpair=0x214b1a0 00:26:25.093 [2024-04-15 18:13:13.903935] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:26:25.093 [2024-04-15 18:13:13.903946] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:26:25.093 [2024-04-15 18:13:13.903964] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.093 [2024-04-15 18:13:13.903975] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.093 [2024-04-15 18:13:13.903983] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x214b1a0) 00:26:25.093 [2024-04-15 18:13:13.903995] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.093 [2024-04-15 18:13:13.904018] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a4690, cid 3, qid 0 00:26:25.093 [2024-04-15 18:13:13.904196] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.093 [2024-04-15 18:13:13.904212] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.093 [2024-04-15 18:13:13.904220] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.093 [2024-04-15 18:13:13.904228] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21a4690) on tqpair=0x214b1a0 00:26:25.093 [2024-04-15 18:13:13.904249] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.093 [2024-04-15 18:13:13.904260] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.093 [2024-04-15 18:13:13.904267] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x214b1a0) 00:26:25.093 [2024-04-15 18:13:13.904280] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.093 [2024-04-15 18:13:13.904303] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a4690, cid 3, qid 0 00:26:25.093 [2024-04-15 18:13:13.904468] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.093 [2024-04-15 18:13:13.904485] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.093 [2024-04-15 18:13:13.904493] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.093 [2024-04-15 18:13:13.904501] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21a4690) on tqpair=0x214b1a0 00:26:25.093 [2024-04-15 18:13:13.904522] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.093 [2024-04-15 18:13:13.904533] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.093 [2024-04-15 18:13:13.904541] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x214b1a0) 00:26:25.093 [2024-04-15 18:13:13.904553] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.093 [2024-04-15 18:13:13.904577] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a4690, cid 3, qid 0 00:26:25.093 [2024-04-15 18:13:13.904746] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.093 [2024-04-15 
18:13:13.904760] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.093 [2024-04-15 18:13:13.904768] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.093 [2024-04-15 18:13:13.904776] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21a4690) on tqpair=0x214b1a0 00:26:25.093 [2024-04-15 18:13:13.904796] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.093 [2024-04-15 18:13:13.904806] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.093 [2024-04-15 18:13:13.904818] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x214b1a0) 00:26:25.093 [2024-04-15 18:13:13.904831] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.093 [2024-04-15 18:13:13.904855] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a4690, cid 3, qid 0 00:26:25.093 [2024-04-15 18:13:13.905019] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.093 [2024-04-15 18:13:13.905032] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.093 [2024-04-15 18:13:13.905040] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.093 [2024-04-15 18:13:13.905048] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21a4690) on tqpair=0x214b1a0 00:26:25.093 [2024-04-15 18:13:13.905082] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.093 [2024-04-15 18:13:13.905095] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.093 [2024-04-15 18:13:13.905103] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x214b1a0) 00:26:25.093 [2024-04-15 18:13:13.905115] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.093 [2024-04-15 18:13:13.905139] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a4690, cid 3, qid 0 00:26:25.093 [2024-04-15 18:13:13.905308] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.094 [2024-04-15 18:13:13.905322] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.094 [2024-04-15 18:13:13.905330] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.094 [2024-04-15 18:13:13.905338] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21a4690) on tqpair=0x214b1a0 00:26:25.094 [2024-04-15 18:13:13.905358] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.094 [2024-04-15 18:13:13.905368] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.094 [2024-04-15 18:13:13.905376] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x214b1a0) 00:26:25.094 [2024-04-15 18:13:13.905388] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.094 [2024-04-15 18:13:13.905412] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a4690, cid 3, qid 0 00:26:25.094 [2024-04-15 18:13:13.905583] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.094 [2024-04-15 18:13:13.905597] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.094 [2024-04-15 18:13:13.905605] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:26:25.094 [2024-04-15 18:13:13.905613] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21a4690) on tqpair=0x214b1a0 00:26:25.094 [2024-04-15 18:13:13.905633] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.094 [2024-04-15 18:13:13.905643] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.094 [2024-04-15 18:13:13.905651] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x214b1a0) 00:26:25.094 [2024-04-15 18:13:13.905663] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.094 [2024-04-15 18:13:13.905686] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a4690, cid 3, qid 0 00:26:25.094 [2024-04-15 18:13:13.905855] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.094 [2024-04-15 18:13:13.905873] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.094 [2024-04-15 18:13:13.905881] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.094 [2024-04-15 18:13:13.905889] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21a4690) on tqpair=0x214b1a0 00:26:25.094 [2024-04-15 18:13:13.905909] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.094 [2024-04-15 18:13:13.905920] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.094 [2024-04-15 18:13:13.905928] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x214b1a0) 00:26:25.094 [2024-04-15 18:13:13.905944] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.094 [2024-04-15 18:13:13.905969] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a4690, cid 3, qid 0 00:26:25.094 [2024-04-15 18:13:13.906162] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.094 [2024-04-15 18:13:13.906180] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.094 [2024-04-15 18:13:13.906189] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.094 [2024-04-15 18:13:13.906197] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21a4690) on tqpair=0x214b1a0 00:26:25.094 [2024-04-15 18:13:13.906218] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.094 [2024-04-15 18:13:13.906229] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.094 [2024-04-15 18:13:13.906236] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x214b1a0) 00:26:25.094 [2024-04-15 18:13:13.906249] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.094 [2024-04-15 18:13:13.906273] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a4690, cid 3, qid 0 00:26:25.094 [2024-04-15 18:13:13.906443] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.094 [2024-04-15 18:13:13.906457] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.094 [2024-04-15 18:13:13.906465] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.094 [2024-04-15 18:13:13.906472] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21a4690) on tqpair=0x214b1a0 00:26:25.094 [2024-04-15 18:13:13.906492] 
nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.094 [2024-04-15 18:13:13.906502] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.094 [2024-04-15 18:13:13.906510] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x214b1a0) 00:26:25.094 [2024-04-15 18:13:13.906523] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.094 [2024-04-15 18:13:13.906546] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a4690, cid 3, qid 0 00:26:25.094 [2024-04-15 18:13:13.906705] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.094 [2024-04-15 18:13:13.906723] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.094 [2024-04-15 18:13:13.906731] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.094 [2024-04-15 18:13:13.906739] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21a4690) on tqpair=0x214b1a0 00:26:25.094 [2024-04-15 18:13:13.906759] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.094 [2024-04-15 18:13:13.906770] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.094 [2024-04-15 18:13:13.906777] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x214b1a0) 00:26:25.094 [2024-04-15 18:13:13.906790] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.094 [2024-04-15 18:13:13.906813] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a4690, cid 3, qid 0 00:26:25.094 [2024-04-15 18:13:13.907008] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.094 [2024-04-15 18:13:13.907022] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.094 [2024-04-15 18:13:13.907030] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.094 [2024-04-15 18:13:13.907038] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21a4690) on tqpair=0x214b1a0 00:26:25.094 [2024-04-15 18:13:13.907066] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.094 [2024-04-15 18:13:13.907078] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.094 [2024-04-15 18:13:13.907086] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x214b1a0) 00:26:25.094 [2024-04-15 18:13:13.907103] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.094 [2024-04-15 18:13:13.907128] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a4690, cid 3, qid 0 00:26:25.094 [2024-04-15 18:13:13.907299] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.094 [2024-04-15 18:13:13.907313] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.094 [2024-04-15 18:13:13.907321] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.094 [2024-04-15 18:13:13.907329] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21a4690) on tqpair=0x214b1a0 00:26:25.094 [2024-04-15 18:13:13.907348] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.094 [2024-04-15 18:13:13.907359] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.094 [2024-04-15 
18:13:13.907367] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x214b1a0) 00:26:25.094 [2024-04-15 18:13:13.907379] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.094 [2024-04-15 18:13:13.907402] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a4690, cid 3, qid 0 00:26:25.094 [2024-04-15 18:13:13.907562] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.094 [2024-04-15 18:13:13.907580] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.094 [2024-04-15 18:13:13.907588] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.094 [2024-04-15 18:13:13.907596] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21a4690) on tqpair=0x214b1a0 00:26:25.094 [2024-04-15 18:13:13.907616] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.094 [2024-04-15 18:13:13.907626] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.094 [2024-04-15 18:13:13.907634] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x214b1a0) 00:26:25.094 [2024-04-15 18:13:13.907647] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.094 [2024-04-15 18:13:13.907670] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a4690, cid 3, qid 0 00:26:25.094 [2024-04-15 18:13:13.907865] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.094 [2024-04-15 18:13:13.907879] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.094 [2024-04-15 18:13:13.907887] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.094 [2024-04-15 18:13:13.907895] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21a4690) on tqpair=0x214b1a0 00:26:25.094 [2024-04-15 18:13:13.907914] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.094 [2024-04-15 18:13:13.907925] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.094 [2024-04-15 18:13:13.907932] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x214b1a0) 00:26:25.094 [2024-04-15 18:13:13.907944] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.094 [2024-04-15 18:13:13.907968] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a4690, cid 3, qid 0 00:26:25.094 [2024-04-15 18:13:13.908140] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.094 [2024-04-15 18:13:13.908158] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.094 [2024-04-15 18:13:13.908166] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.094 [2024-04-15 18:13:13.908174] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21a4690) on tqpair=0x214b1a0 00:26:25.094 [2024-04-15 18:13:13.908194] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.094 [2024-04-15 18:13:13.908204] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.094 [2024-04-15 18:13:13.908212] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x214b1a0) 00:26:25.094 [2024-04-15 18:13:13.908225] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.094 [2024-04-15 18:13:13.908254] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a4690, cid 3, qid 0 00:26:25.094 [2024-04-15 18:13:13.908417] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.095 [2024-04-15 18:13:13.908434] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.095 [2024-04-15 18:13:13.908442] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.095 [2024-04-15 18:13:13.908450] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21a4690) on tqpair=0x214b1a0 00:26:25.095 [2024-04-15 18:13:13.908470] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.095 [2024-04-15 18:13:13.908481] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.095 [2024-04-15 18:13:13.908489] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x214b1a0) 00:26:25.095 [2024-04-15 18:13:13.908501] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.095 [2024-04-15 18:13:13.908525] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a4690, cid 3, qid 0 00:26:25.095 [2024-04-15 18:13:13.908691] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.095 [2024-04-15 18:13:13.908709] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.095 [2024-04-15 18:13:13.908717] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.095 [2024-04-15 18:13:13.908725] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21a4690) on tqpair=0x214b1a0 00:26:25.095 [2024-04-15 18:13:13.908745] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.095 [2024-04-15 18:13:13.908756] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.095 [2024-04-15 18:13:13.908764] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x214b1a0) 00:26:25.095 [2024-04-15 18:13:13.908776] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.095 [2024-04-15 18:13:13.908800] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a4690, cid 3, qid 0 00:26:25.095 [2024-04-15 18:13:13.908973] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.095 [2024-04-15 18:13:13.908987] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.095 [2024-04-15 18:13:13.908995] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.095 [2024-04-15 18:13:13.909003] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21a4690) on tqpair=0x214b1a0 00:26:25.095 [2024-04-15 18:13:13.909022] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.095 [2024-04-15 18:13:13.909033] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.095 [2024-04-15 18:13:13.909041] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x214b1a0) 00:26:25.095 [2024-04-15 18:13:13.909053] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.095 [2024-04-15 18:13:13.909086] nvme_tcp.c: 
923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a4690, cid 3, qid 0 00:26:25.095 [2024-04-15 18:13:13.909248] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.095 [2024-04-15 18:13:13.909261] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.095 [2024-04-15 18:13:13.909269] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.095 [2024-04-15 18:13:13.909277] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21a4690) on tqpair=0x214b1a0 00:26:25.095 [2024-04-15 18:13:13.909297] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.095 [2024-04-15 18:13:13.909307] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.095 [2024-04-15 18:13:13.909315] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x214b1a0) 00:26:25.095 [2024-04-15 18:13:13.909328] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.095 [2024-04-15 18:13:13.909355] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a4690, cid 3, qid 0 00:26:25.095 [2024-04-15 18:13:13.909521] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.095 [2024-04-15 18:13:13.909535] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.095 [2024-04-15 18:13:13.909543] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.095 [2024-04-15 18:13:13.909551] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21a4690) on tqpair=0x214b1a0 00:26:25.095 [2024-04-15 18:13:13.909570] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.095 [2024-04-15 18:13:13.909581] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.095 [2024-04-15 18:13:13.909589] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x214b1a0) 00:26:25.095 [2024-04-15 18:13:13.909601] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.095 [2024-04-15 18:13:13.909624] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a4690, cid 3, qid 0 00:26:25.095 [2024-04-15 18:13:13.909798] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.095 [2024-04-15 18:13:13.909815] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.095 [2024-04-15 18:13:13.909823] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.095 [2024-04-15 18:13:13.909831] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21a4690) on tqpair=0x214b1a0 00:26:25.095 [2024-04-15 18:13:13.909850] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.095 [2024-04-15 18:13:13.909861] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.095 [2024-04-15 18:13:13.909868] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x214b1a0) 00:26:25.095 [2024-04-15 18:13:13.909881] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.095 [2024-04-15 18:13:13.909904] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a4690, cid 3, qid 0 00:26:25.095 [2024-04-15 18:13:13.914072] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
00:26:25.095 [2024-04-15 18:13:13.914092] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.095 [2024-04-15 18:13:13.914100] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.095 [2024-04-15 18:13:13.914108] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21a4690) on tqpair=0x214b1a0 00:26:25.095 [2024-04-15 18:13:13.914131] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.095 [2024-04-15 18:13:13.914142] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.095 [2024-04-15 18:13:13.914151] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x214b1a0) 00:26:25.095 [2024-04-15 18:13:13.914163] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.095 [2024-04-15 18:13:13.914196] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a4690, cid 3, qid 0 00:26:25.095 [2024-04-15 18:13:13.914372] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.095 [2024-04-15 18:13:13.914389] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.095 [2024-04-15 18:13:13.914397] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.095 [2024-04-15 18:13:13.914405] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21a4690) on tqpair=0x214b1a0 00:26:25.095 [2024-04-15 18:13:13.914423] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 10 milliseconds 00:26:25.095 00:26:25.095 18:13:13 -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:26:25.095 [2024-04-15 18:13:13.964942] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
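
That closes the first identify pass: the long run of FABRIC PROPERTY GET qid:0 cid:3 capsules above is the orderly shutdown poll, in which the host sets CC.SHN via a Property Set and then reads CSTS until SHST reports completion, here within 10 ms of the 10000 ms budget logged earlier. The script then launches spdk_nvme_identify a second time, now directly against nqn.2016-06.io.spdk:cnode1 with -L all debug tracing. From the application side the whole shutdown loop collapses into one call; roughly, per controller (a sketch, with printf fields chosen to mirror the dump format above):

    #include "spdk/nvme.h"

    /* Roughly what spdk_nvme_identify does per controller: print
     * the cached IDENTIFY data, then detach. The detach is what
     * produced the Property Set (CC.SHN) / Property Get (CSTS)
     * poll loop and the "shutdown complete in 10 milliseconds"
     * line above. */
    static void
    dump_and_detach(struct spdk_nvme_ctrlr *ctrlr)
    {
            const struct spdk_nvme_ctrlr_data *cdata =
                spdk_nvme_ctrlr_get_data(ctrlr);

            printf("Serial Number: %.20s\n", (const char *)cdata->sn);
            printf("Model Number: %.40s\n", (const char *)cdata->mn);
            printf("Firmware Version: %.8s\n", (const char *)cdata->fr);

            spdk_nvme_detach(ctrlr);
    }
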
00:26:25.095 [2024-04-15 18:13:13.965071] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3405140 ] 00:26:25.095 EAL: No free 2048 kB hugepages reported on node 1 00:26:25.095 [2024-04-15 18:13:14.016792] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:26:25.095 [2024-04-15 18:13:14.016840] nvme_tcp.c:2326:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:26:25.095 [2024-04-15 18:13:14.016850] nvme_tcp.c:2330:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:26:25.096 [2024-04-15 18:13:14.016864] nvme_tcp.c:2348:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:26:25.096 [2024-04-15 18:13:14.016876] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:26:25.096 [2024-04-15 18:13:14.017177] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:26:25.096 [2024-04-15 18:13:14.017220] nvme_tcp.c:1543:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x6631a0 0 00:26:25.096 [2024-04-15 18:13:14.024077] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:26:25.096 [2024-04-15 18:13:14.024096] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:26:25.096 [2024-04-15 18:13:14.024105] nvme_tcp.c:1589:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:26:25.096 [2024-04-15 18:13:14.024111] nvme_tcp.c:1590:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:26:25.096 [2024-04-15 18:13:14.024148] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.096 [2024-04-15 18:13:14.024176] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.096 [2024-04-15 18:13:14.024183] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6631a0) 00:26:25.096 [2024-04-15 18:13:14.024198] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:26:25.096 [2024-04-15 18:13:14.024224] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bc270, cid 0, qid 0 00:26:25.096 [2024-04-15 18:13:14.032073] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.096 [2024-04-15 18:13:14.032091] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.096 [2024-04-15 18:13:14.032099] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.096 [2024-04-15 18:13:14.032107] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bc270) on tqpair=0x6631a0 00:26:25.096 [2024-04-15 18:13:14.032122] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:26:25.096 [2024-04-15 18:13:14.032132] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:26:25.096 [2024-04-15 18:13:14.032142] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:26:25.096 [2024-04-15 18:13:14.032163] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.096 [2024-04-15 18:13:14.032173] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.096 [2024-04-15 
18:13:14.032180] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6631a0) 00:26:25.096 [2024-04-15 18:13:14.032192] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.096 [2024-04-15 18:13:14.032217] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bc270, cid 0, qid 0 00:26:25.096 [2024-04-15 18:13:14.032428] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.096 [2024-04-15 18:13:14.032444] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.096 [2024-04-15 18:13:14.032450] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.096 [2024-04-15 18:13:14.032457] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bc270) on tqpair=0x6631a0 00:26:25.096 [2024-04-15 18:13:14.032470] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:26:25.096 [2024-04-15 18:13:14.032485] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:26:25.096 [2024-04-15 18:13:14.032497] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.096 [2024-04-15 18:13:14.032504] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.096 [2024-04-15 18:13:14.032511] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6631a0) 00:26:25.096 [2024-04-15 18:13:14.032521] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.096 [2024-04-15 18:13:14.032543] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bc270, cid 0, qid 0 00:26:25.096 [2024-04-15 18:13:14.032730] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.096 [2024-04-15 18:13:14.032744] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.096 [2024-04-15 18:13:14.032751] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.096 [2024-04-15 18:13:14.032758] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bc270) on tqpair=0x6631a0 00:26:25.096 [2024-04-15 18:13:14.032767] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:26:25.096 [2024-04-15 18:13:14.032781] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:26:25.096 [2024-04-15 18:13:14.032793] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.096 [2024-04-15 18:13:14.032801] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.096 [2024-04-15 18:13:14.032807] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6631a0) 00:26:25.096 [2024-04-15 18:13:14.032818] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.096 [2024-04-15 18:13:14.032847] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bc270, cid 0, qid 0 00:26:25.096 [2024-04-15 18:13:14.032977] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.096 [2024-04-15 18:13:14.032991] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.096 
[2024-04-15 18:13:14.032998] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.096 [2024-04-15 18:13:14.033005] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bc270) on tqpair=0x6631a0 00:26:25.096 [2024-04-15 18:13:14.033014] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:26:25.096 [2024-04-15 18:13:14.033031] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.096 [2024-04-15 18:13:14.033055] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.096 [2024-04-15 18:13:14.033073] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6631a0) 00:26:25.096 [2024-04-15 18:13:14.033085] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.096 [2024-04-15 18:13:14.033108] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bc270, cid 0, qid 0 00:26:25.096 [2024-04-15 18:13:14.033261] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.096 [2024-04-15 18:13:14.033276] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.096 [2024-04-15 18:13:14.033283] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.096 [2024-04-15 18:13:14.033291] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bc270) on tqpair=0x6631a0 00:26:25.096 [2024-04-15 18:13:14.033299] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:26:25.096 [2024-04-15 18:13:14.033308] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:26:25.096 [2024-04-15 18:13:14.033327] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:26:25.096 [2024-04-15 18:13:14.033438] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:26:25.096 [2024-04-15 18:13:14.033446] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:26:25.096 [2024-04-15 18:13:14.033458] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.096 [2024-04-15 18:13:14.033466] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.096 [2024-04-15 18:13:14.033472] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6631a0) 00:26:25.096 [2024-04-15 18:13:14.033483] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.096 [2024-04-15 18:13:14.033504] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bc270, cid 0, qid 0 00:26:25.096 [2024-04-15 18:13:14.033694] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.096 [2024-04-15 18:13:14.033709] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.096 [2024-04-15 18:13:14.033716] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.096 [2024-04-15 18:13:14.033723] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bc270) on tqpair=0x6631a0 00:26:25.096 
[2024-04-15 18:13:14.033731] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:26:25.096 [2024-04-15 18:13:14.033748] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.096 [2024-04-15 18:13:14.033756] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.096 [2024-04-15 18:13:14.033763] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6631a0) 00:26:25.096 [2024-04-15 18:13:14.033774] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.096 [2024-04-15 18:13:14.033804] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bc270, cid 0, qid 0 00:26:25.096 [2024-04-15 18:13:14.033954] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.096 [2024-04-15 18:13:14.033967] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.096 [2024-04-15 18:13:14.033974] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.096 [2024-04-15 18:13:14.033981] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bc270) on tqpair=0x6631a0 00:26:25.096 [2024-04-15 18:13:14.033988] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:26:25.096 [2024-04-15 18:13:14.033996] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:26:25.096 [2024-04-15 18:13:14.034010] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:26:25.096 [2024-04-15 18:13:14.034027] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:26:25.096 [2024-04-15 18:13:14.034065] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.096 [2024-04-15 18:13:14.034076] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6631a0) 00:26:25.096 [2024-04-15 18:13:14.034088] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.096 [2024-04-15 18:13:14.034111] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bc270, cid 0, qid 0 00:26:25.096 [2024-04-15 18:13:14.034366] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:25.096 [2024-04-15 18:13:14.034378] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:25.096 [2024-04-15 18:13:14.034388] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:25.096 [2024-04-15 18:13:14.034395] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x6631a0): datao=0, datal=4096, cccid=0 00:26:25.096 [2024-04-15 18:13:14.034403] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6bc270) on tqpair(0x6631a0): expected_datao=0, payload_size=4096 00:26:25.096 [2024-04-15 18:13:14.034411] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.096 [2024-04-15 18:13:14.034428] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:25.096 [2024-04-15 18:13:14.034437] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 
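
The IDENTIFY (06h) command above with cdw10:00000001 requests CNS 01h, the Identify Controller data structure; the c2h_data PDU that answers it (datal=4096) carries the 4 KiB payload that is decoded into the controller report printed further down in this log. A hedged host-side equivalent, illustrative only (/dev/nvme0 is an assumption):

    # Fetch and decode the same CNS 01h structure with nvme-cli.
    nvme id-ctrl /dev/nvme0
    # Or dump the raw 4096-byte payload that the c2h_data PDU carries.
    nvme id-ctrl /dev/nvme0 --raw-binary | xxd | head
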
00:26:25.358 [2024-04-15 18:13:14.078072] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.358 [2024-04-15 18:13:14.078092] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.358 [2024-04-15 18:13:14.078100] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.358 [2024-04-15 18:13:14.078107] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bc270) on tqpair=0x6631a0 00:26:25.358 [2024-04-15 18:13:14.078119] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:26:25.358 [2024-04-15 18:13:14.078128] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:26:25.358 [2024-04-15 18:13:14.078136] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:26:25.358 [2024-04-15 18:13:14.078144] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:26:25.358 [2024-04-15 18:13:14.078152] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:26:25.358 [2024-04-15 18:13:14.078160] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:26:25.358 [2024-04-15 18:13:14.078176] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:26:25.358 [2024-04-15 18:13:14.078188] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.358 [2024-04-15 18:13:14.078196] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.358 [2024-04-15 18:13:14.078203] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6631a0) 00:26:25.358 [2024-04-15 18:13:14.078215] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:26:25.358 [2024-04-15 18:13:14.078239] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bc270, cid 0, qid 0 00:26:25.358 [2024-04-15 18:13:14.078437] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.358 [2024-04-15 18:13:14.078453] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.358 [2024-04-15 18:13:14.078460] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.358 [2024-04-15 18:13:14.078467] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bc270) on tqpair=0x6631a0 00:26:25.358 [2024-04-15 18:13:14.078477] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.358 [2024-04-15 18:13:14.078485] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.358 [2024-04-15 18:13:14.078492] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6631a0) 00:26:25.358 [2024-04-15 18:13:14.078502] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:25.358 [2024-04-15 18:13:14.078512] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.358 [2024-04-15 18:13:14.078518] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.358 [2024-04-15 18:13:14.078525] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on 
tqpair(0x6631a0) 00:26:25.358 [2024-04-15 18:13:14.078534] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:25.358 [2024-04-15 18:13:14.078550] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.358 [2024-04-15 18:13:14.078557] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.358 [2024-04-15 18:13:14.078564] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x6631a0) 00:26:25.358 [2024-04-15 18:13:14.078573] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:25.358 [2024-04-15 18:13:14.078582] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.358 [2024-04-15 18:13:14.078589] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.358 [2024-04-15 18:13:14.078595] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6631a0) 00:26:25.358 [2024-04-15 18:13:14.078604] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:25.358 [2024-04-15 18:13:14.078613] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:26:25.358 [2024-04-15 18:13:14.078632] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:26:25.358 [2024-04-15 18:13:14.078644] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.358 [2024-04-15 18:13:14.078652] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x6631a0) 00:26:25.358 [2024-04-15 18:13:14.078662] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.358 [2024-04-15 18:13:14.078685] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bc270, cid 0, qid 0 00:26:25.358 [2024-04-15 18:13:14.078695] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bc3d0, cid 1, qid 0 00:26:25.358 [2024-04-15 18:13:14.078703] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bc530, cid 2, qid 0 00:26:25.358 [2024-04-15 18:13:14.078711] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bc690, cid 3, qid 0 00:26:25.358 [2024-04-15 18:13:14.078719] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bc7f0, cid 4, qid 0 00:26:25.358 [2024-04-15 18:13:14.078937] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.358 [2024-04-15 18:13:14.078952] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.358 [2024-04-15 18:13:14.078959] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.358 [2024-04-15 18:13:14.078966] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bc7f0) on tqpair=0x6631a0 00:26:25.358 [2024-04-15 18:13:14.078974] nvme_ctrlr.c:2902:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:26:25.358 [2024-04-15 18:13:14.078983] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:26:25.358 
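
This stretch configures asynchronous events and keep-alive: SET FEATURES ASYNC EVENT CONFIGURATION (cdw10:0000000b, the Async Event Configuration feature) is followed by four outstanding ASYNC EVENT REQUEST commands (cid 0 through 3, matching the Async Event Request Limit of 4 reported below), then GET FEATURES KEEP ALIVE TIMER (cdw10:0000000f), after which the host arms a keep-alive every 5000000 us, apparently half of a 10 s timeout. A hedged host-side sketch of the same two features (illustrative only; the device name and the 10 s value are assumptions):

    nvme get-feature /dev/nvme0 --feature-id=0x0b    # Async Event Configuration
    nvme get-feature /dev/nvme0 --feature-id=0x0f    # Keep Alive Timer, reported in ms
    # A non-default keep-alive timeout can be requested at connect time:
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 --keep-alive-tmo=10
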
[2024-04-15 18:13:14.079005] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:26:25.358 [2024-04-15 18:13:14.079019] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:26:25.358 [2024-04-15 18:13:14.079030] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.358 [2024-04-15 18:13:14.079052] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.358 [2024-04-15 18:13:14.079067] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x6631a0) 00:26:25.359 [2024-04-15 18:13:14.079079] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:26:25.359 [2024-04-15 18:13:14.079113] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bc7f0, cid 4, qid 0 00:26:25.359 [2024-04-15 18:13:14.079270] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.359 [2024-04-15 18:13:14.079289] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.359 [2024-04-15 18:13:14.079297] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.359 [2024-04-15 18:13:14.079304] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bc7f0) on tqpair=0x6631a0 00:26:25.359 [2024-04-15 18:13:14.079370] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:26:25.359 [2024-04-15 18:13:14.079391] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:26:25.359 [2024-04-15 18:13:14.079406] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.359 [2024-04-15 18:13:14.079414] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x6631a0) 00:26:25.359 [2024-04-15 18:13:14.079424] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.359 [2024-04-15 18:13:14.079445] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bc7f0, cid 4, qid 0 00:26:25.359 [2024-04-15 18:13:14.079656] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:25.359 [2024-04-15 18:13:14.079671] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:25.359 [2024-04-15 18:13:14.079678] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:25.359 [2024-04-15 18:13:14.079685] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x6631a0): datao=0, datal=4096, cccid=4 00:26:25.359 [2024-04-15 18:13:14.079693] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6bc7f0) on tqpair(0x6631a0): expected_datao=0, payload_size=4096 00:26:25.359 [2024-04-15 18:13:14.079701] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.359 [2024-04-15 18:13:14.079711] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:25.359 [2024-04-15 18:13:14.079718] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:25.359 [2024-04-15 18:13:14.079753] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.359 [2024-04-15 18:13:14.079764] 
nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.359 [2024-04-15 18:13:14.079770] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.359 [2024-04-15 18:13:14.079777] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bc7f0) on tqpair=0x6631a0 00:26:25.359 [2024-04-15 18:13:14.079793] nvme_ctrlr.c:4557:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:26:25.359 [2024-04-15 18:13:14.079810] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:26:25.359 [2024-04-15 18:13:14.079827] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:26:25.359 [2024-04-15 18:13:14.079840] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.359 [2024-04-15 18:13:14.079847] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x6631a0) 00:26:25.359 [2024-04-15 18:13:14.079858] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.359 [2024-04-15 18:13:14.079879] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bc7f0, cid 4, qid 0 00:26:25.359 [2024-04-15 18:13:14.080150] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:25.359 [2024-04-15 18:13:14.080167] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:25.359 [2024-04-15 18:13:14.080174] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:25.359 [2024-04-15 18:13:14.080181] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x6631a0): datao=0, datal=4096, cccid=4 00:26:25.359 [2024-04-15 18:13:14.080189] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6bc7f0) on tqpair(0x6631a0): expected_datao=0, payload_size=4096 00:26:25.359 [2024-04-15 18:13:14.080197] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.359 [2024-04-15 18:13:14.080211] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:25.359 [2024-04-15 18:13:14.080220] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:25.359 [2024-04-15 18:13:14.080238] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.359 [2024-04-15 18:13:14.080249] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.359 [2024-04-15 18:13:14.080256] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.359 [2024-04-15 18:13:14.080263] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bc7f0) on tqpair=0x6631a0 00:26:25.359 [2024-04-15 18:13:14.080286] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:26:25.359 [2024-04-15 18:13:14.080305] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:26:25.359 [2024-04-15 18:13:14.080319] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.359 [2024-04-15 18:13:14.080326] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x6631a0) 00:26:25.359 [2024-04-15 18:13:14.080351] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: 
IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.359 [2024-04-15 18:13:14.080374] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bc7f0, cid 4, qid 0 00:26:25.359 [2024-04-15 18:13:14.080556] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:25.359 [2024-04-15 18:13:14.080571] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:25.359 [2024-04-15 18:13:14.080578] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:25.359 [2024-04-15 18:13:14.080584] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x6631a0): datao=0, datal=4096, cccid=4 00:26:25.359 [2024-04-15 18:13:14.080592] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6bc7f0) on tqpair(0x6631a0): expected_datao=0, payload_size=4096 00:26:25.359 [2024-04-15 18:13:14.080600] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.359 [2024-04-15 18:13:14.080610] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:25.359 [2024-04-15 18:13:14.080617] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:25.359 [2024-04-15 18:13:14.080646] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.359 [2024-04-15 18:13:14.080657] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.359 [2024-04-15 18:13:14.080664] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.359 [2024-04-15 18:13:14.080671] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bc7f0) on tqpair=0x6631a0 00:26:25.359 [2024-04-15 18:13:14.080685] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:26:25.359 [2024-04-15 18:13:14.080700] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:26:25.359 [2024-04-15 18:13:14.080716] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:26:25.359 [2024-04-15 18:13:14.080727] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:26:25.359 [2024-04-15 18:13:14.080736] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:26:25.359 [2024-04-15 18:13:14.080746] nvme_ctrlr.c:2990:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:26:25.359 [2024-04-15 18:13:14.080754] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:26:25.359 [2024-04-15 18:13:14.080763] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:26:25.359 [2024-04-15 18:13:14.080781] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.359 [2024-04-15 18:13:14.080795] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x6631a0) 00:26:25.359 [2024-04-15 18:13:14.080806] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.359 [2024-04-15 18:13:14.080817] nvme_tcp.c: 
766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.359 [2024-04-15 18:13:14.080824] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.359 [2024-04-15 18:13:14.080831] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x6631a0) 00:26:25.359 [2024-04-15 18:13:14.080840] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:26:25.359 [2024-04-15 18:13:14.080864] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bc7f0, cid 4, qid 0 00:26:25.359 [2024-04-15 18:13:14.080875] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bc950, cid 5, qid 0 00:26:25.359 [2024-04-15 18:13:14.081034] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.359 [2024-04-15 18:13:14.081072] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.359 [2024-04-15 18:13:14.081081] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.359 [2024-04-15 18:13:14.081088] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bc7f0) on tqpair=0x6631a0 00:26:25.359 [2024-04-15 18:13:14.081098] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.359 [2024-04-15 18:13:14.081108] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.359 [2024-04-15 18:13:14.081115] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.359 [2024-04-15 18:13:14.081122] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bc950) on tqpair=0x6631a0 00:26:25.359 [2024-04-15 18:13:14.081139] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.359 [2024-04-15 18:13:14.081148] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x6631a0) 00:26:25.359 [2024-04-15 18:13:14.081159] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.359 [2024-04-15 18:13:14.081180] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bc950, cid 5, qid 0 00:26:25.359 [2024-04-15 18:13:14.081389] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.359 [2024-04-15 18:13:14.081401] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.359 [2024-04-15 18:13:14.081408] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.359 [2024-04-15 18:13:14.081415] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bc950) on tqpair=0x6631a0 00:26:25.359 [2024-04-15 18:13:14.081431] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.359 [2024-04-15 18:13:14.081440] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x6631a0) 00:26:25.359 [2024-04-15 18:13:14.081450] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.359 [2024-04-15 18:13:14.081470] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bc950, cid 5, qid 0 00:26:25.359 [2024-04-15 18:13:14.081621] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.359 [2024-04-15 18:13:14.081635] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.359 [2024-04-15 18:13:14.081642] 
nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.359 [2024-04-15 18:13:14.081649] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bc950) on tqpair=0x6631a0 00:26:25.359 [2024-04-15 18:13:14.081665] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.359 [2024-04-15 18:13:14.081673] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x6631a0) 00:26:25.359 [2024-04-15 18:13:14.081684] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.359 [2024-04-15 18:13:14.081708] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bc950, cid 5, qid 0 00:26:25.359 [2024-04-15 18:13:14.081821] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.359 [2024-04-15 18:13:14.081836] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.359 [2024-04-15 18:13:14.081843] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.359 [2024-04-15 18:13:14.081849] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bc950) on tqpair=0x6631a0 00:26:25.359 [2024-04-15 18:13:14.081868] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.359 [2024-04-15 18:13:14.081878] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x6631a0) 00:26:25.359 [2024-04-15 18:13:14.081888] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.359 [2024-04-15 18:13:14.081900] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.359 [2024-04-15 18:13:14.081907] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x6631a0) 00:26:25.359 [2024-04-15 18:13:14.081917] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.359 [2024-04-15 18:13:14.081927] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.359 [2024-04-15 18:13:14.081935] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x6631a0) 00:26:25.359 [2024-04-15 18:13:14.081944] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.359 [2024-04-15 18:13:14.081955] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.359 [2024-04-15 18:13:14.081963] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x6631a0) 00:26:25.359 [2024-04-15 18:13:14.081972] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.359 [2024-04-15 18:13:14.081993] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bc950, cid 5, qid 0 00:26:25.359 [2024-04-15 18:13:14.082004] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bc7f0, cid 4, qid 0 00:26:25.359 [2024-04-15 18:13:14.082011] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bcab0, cid 6, qid 0 00:26:25.359 [2024-04-15 18:13:14.082019] nvme_tcp.c: 
923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bcc10, cid 7, qid 0 00:26:25.359 [2024-04-15 18:13:14.086078] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:25.359 [2024-04-15 18:13:14.086094] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:25.359 [2024-04-15 18:13:14.086101] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:25.359 [2024-04-15 18:13:14.086108] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x6631a0): datao=0, datal=8192, cccid=5 00:26:25.359 [2024-04-15 18:13:14.086116] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6bc950) on tqpair(0x6631a0): expected_datao=0, payload_size=8192 00:26:25.359 [2024-04-15 18:13:14.086124] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.359 [2024-04-15 18:13:14.086135] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:25.359 [2024-04-15 18:13:14.086143] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:25.359 [2024-04-15 18:13:14.086152] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:25.359 [2024-04-15 18:13:14.086161] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:25.359 [2024-04-15 18:13:14.086168] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:25.359 [2024-04-15 18:13:14.086175] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x6631a0): datao=0, datal=512, cccid=4 00:26:25.359 [2024-04-15 18:13:14.086183] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6bc7f0) on tqpair(0x6631a0): expected_datao=0, payload_size=512 00:26:25.359 [2024-04-15 18:13:14.086194] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.359 [2024-04-15 18:13:14.086204] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:25.359 [2024-04-15 18:13:14.086212] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:25.359 [2024-04-15 18:13:14.086220] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:25.359 [2024-04-15 18:13:14.086229] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:25.359 [2024-04-15 18:13:14.086236] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:25.359 [2024-04-15 18:13:14.086243] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x6631a0): datao=0, datal=512, cccid=6 00:26:25.359 [2024-04-15 18:13:14.086251] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6bcab0) on tqpair(0x6631a0): expected_datao=0, payload_size=512 00:26:25.359 [2024-04-15 18:13:14.086259] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.359 [2024-04-15 18:13:14.086268] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:25.359 [2024-04-15 18:13:14.086275] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:25.359 [2024-04-15 18:13:14.086284] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:25.359 [2024-04-15 18:13:14.086293] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:25.359 [2024-04-15 18:13:14.086300] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:25.359 [2024-04-15 18:13:14.086307] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x6631a0): datao=0, datal=4096, cccid=7 00:26:25.359 [2024-04-15 18:13:14.086315] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0x6bcc10) on tqpair(0x6631a0): expected_datao=0, payload_size=4096 00:26:25.359 [2024-04-15 18:13:14.086323] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.359 [2024-04-15 18:13:14.086332] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:25.359 [2024-04-15 18:13:14.086354] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:25.359 [2024-04-15 18:13:14.086363] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.359 [2024-04-15 18:13:14.086372] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.359 [2024-04-15 18:13:14.086378] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.359 [2024-04-15 18:13:14.086385] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bc950) on tqpair=0x6631a0 00:26:25.359 [2024-04-15 18:13:14.086404] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.359 [2024-04-15 18:13:14.086416] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.359 [2024-04-15 18:13:14.086422] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.359 [2024-04-15 18:13:14.086429] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bc7f0) on tqpair=0x6631a0 00:26:25.359 [2024-04-15 18:13:14.086443] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.359 [2024-04-15 18:13:14.086453] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.359 [2024-04-15 18:13:14.086459] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.359 [2024-04-15 18:13:14.086466] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bcab0) on tqpair=0x6631a0 00:26:25.359 [2024-04-15 18:13:14.086477] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.359 [2024-04-15 18:13:14.086486] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.359 [2024-04-15 18:13:14.086493] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.359 [2024-04-15 18:13:14.086499] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bcc10) on tqpair=0x6631a0 00:26:25.359 ===================================================== 00:26:25.359 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:25.359 ===================================================== 00:26:25.359 Controller Capabilities/Features 00:26:25.359 ================================ 00:26:25.359 Vendor ID: 8086 00:26:25.359 Subsystem Vendor ID: 8086 00:26:25.359 Serial Number: SPDK00000000000001 00:26:25.359 Model Number: SPDK bdev Controller 00:26:25.359 Firmware Version: 24.05 00:26:25.359 Recommended Arb Burst: 6 00:26:25.359 IEEE OUI Identifier: e4 d2 5c 00:26:25.359 Multi-path I/O 00:26:25.359 May have multiple subsystem ports: Yes 00:26:25.359 May have multiple controllers: Yes 00:26:25.359 Associated with SR-IOV VF: No 00:26:25.359 Max Data Transfer Size: 131072 00:26:25.359 Max Number of Namespaces: 32 00:26:25.359 Max Number of I/O Queues: 127 00:26:25.359 NVMe Specification Version (VS): 1.3 00:26:25.359 NVMe Specification Version (Identify): 1.3 00:26:25.359 Maximum Queue Entries: 128 00:26:25.359 Contiguous Queues Required: Yes 00:26:25.359 Arbitration Mechanisms Supported 00:26:25.359 Weighted Round Robin: Not Supported 00:26:25.359 Vendor Specific: Not Supported 00:26:25.359 Reset Timeout: 15000 ms 00:26:25.359 Doorbell Stride: 4 bytes 00:26:25.359 
NVM Subsystem Reset: Not Supported 00:26:25.359 Command Sets Supported 00:26:25.359 NVM Command Set: Supported 00:26:25.359 Boot Partition: Not Supported 00:26:25.359 Memory Page Size Minimum: 4096 bytes 00:26:25.359 Memory Page Size Maximum: 4096 bytes 00:26:25.359 Persistent Memory Region: Not Supported 00:26:25.359 Optional Asynchronous Events Supported 00:26:25.359 Namespace Attribute Notices: Supported 00:26:25.359 Firmware Activation Notices: Not Supported 00:26:25.359 ANA Change Notices: Not Supported 00:26:25.359 PLE Aggregate Log Change Notices: Not Supported 00:26:25.359 LBA Status Info Alert Notices: Not Supported 00:26:25.359 EGE Aggregate Log Change Notices: Not Supported 00:26:25.359 Normal NVM Subsystem Shutdown event: Not Supported 00:26:25.359 Zone Descriptor Change Notices: Not Supported 00:26:25.359 Discovery Log Change Notices: Not Supported 00:26:25.360 Controller Attributes 00:26:25.360 128-bit Host Identifier: Supported 00:26:25.360 Non-Operational Permissive Mode: Not Supported 00:26:25.360 NVM Sets: Not Supported 00:26:25.360 Read Recovery Levels: Not Supported 00:26:25.360 Endurance Groups: Not Supported 00:26:25.360 Predictable Latency Mode: Not Supported 00:26:25.360 Traffic Based Keep Alive: Not Supported 00:26:25.360 Namespace Granularity: Not Supported 00:26:25.360 SQ Associations: Not Supported 00:26:25.360 UUID List: Not Supported 00:26:25.360 Multi-Domain Subsystem: Not Supported 00:26:25.360 Fixed Capacity Management: Not Supported 00:26:25.360 Variable Capacity Management: Not Supported 00:26:25.360 Delete Endurance Group: Not Supported 00:26:25.360 Delete NVM Set: Not Supported 00:26:25.360 Extended LBA Formats Supported: Not Supported 00:26:25.360 Flexible Data Placement Supported: Not Supported 00:26:25.360 00:26:25.360 Controller Memory Buffer Support 00:26:25.360 ================================ 00:26:25.360 Supported: No 00:26:25.360 00:26:25.360 Persistent Memory Region Support 00:26:25.360 ================================ 00:26:25.360 Supported: No 00:26:25.360 00:26:25.360 Admin Command Set Attributes 00:26:25.360 ============================ 00:26:25.360 Security Send/Receive: Not Supported 00:26:25.360 Format NVM: Not Supported 00:26:25.360 Firmware Activate/Download: Not Supported 00:26:25.360 Namespace Management: Not Supported 00:26:25.360 Device Self-Test: Not Supported 00:26:25.360 Directives: Not Supported 00:26:25.360 NVMe-MI: Not Supported 00:26:25.360 Virtualization Management: Not Supported 00:26:25.360 Doorbell Buffer Config: Not Supported 00:26:25.360 Get LBA Status Capability: Not Supported 00:26:25.360 Command & Feature Lockdown Capability: Not Supported 00:26:25.360 Abort Command Limit: 4 00:26:25.360 Async Event Request Limit: 4 00:26:25.360 Number of Firmware Slots: N/A 00:26:25.360 Firmware Slot 1 Read-Only: N/A 00:26:25.360 Firmware Activation Without Reset: N/A 00:26:25.360 Multiple Update Detection Support: N/A 00:26:25.360 Firmware Update Granularity: No Information Provided 00:26:25.360 Per-Namespace SMART Log: No 00:26:25.360 Asymmetric Namespace Access Log Page: Not Supported 00:26:25.360 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:26:25.360 Command Effects Log Page: Supported 00:26:25.360 Get Log Page Extended Data: Supported 00:26:25.360 Telemetry Log Pages: Not Supported 00:26:25.360 Persistent Event Log Pages: Not Supported 00:26:25.360 Supported Log Pages Log Page: May Support 00:26:25.360 Commands Supported & Effects Log Page: Not Supported 00:26:25.360 Feature Identifiers & Effects Log Page: May Support 
00:26:25.360 NVMe-MI Commands & Effects Log Page: May Support 00:26:25.360 Data Area 4 for Telemetry Log: Not Supported 00:26:25.360 Error Log Page Entries Supported: 128 00:26:25.360 Keep Alive: Supported 00:26:25.360 Keep Alive Granularity: 10000 ms 00:26:25.360 00:26:25.360 NVM Command Set Attributes 00:26:25.360 ========================== 00:26:25.360 Submission Queue Entry Size 00:26:25.360 Max: 64 00:26:25.360 Min: 64 00:26:25.360 Completion Queue Entry Size 00:26:25.360 Max: 16 00:26:25.360 Min: 16 00:26:25.360 Number of Namespaces: 32 00:26:25.360 Compare Command: Supported 00:26:25.360 Write Uncorrectable Command: Not Supported 00:26:25.360 Dataset Management Command: Supported 00:26:25.360 Write Zeroes Command: Supported 00:26:25.360 Set Features Save Field: Not Supported 00:26:25.360 Reservations: Supported 00:26:25.360 Timestamp: Not Supported 00:26:25.360 Copy: Supported 00:26:25.360 Volatile Write Cache: Present 00:26:25.360 Atomic Write Unit (Normal): 1 00:26:25.360 Atomic Write Unit (PFail): 1 00:26:25.360 Atomic Compare & Write Unit: 1 00:26:25.360 Fused Compare & Write: Supported 00:26:25.360 Scatter-Gather List 00:26:25.360 SGL Command Set: Supported 00:26:25.360 SGL Keyed: Supported 00:26:25.360 SGL Bit Bucket Descriptor: Not Supported 00:26:25.360 SGL Metadata Pointer: Not Supported 00:26:25.360 Oversized SGL: Not Supported 00:26:25.360 SGL Metadata Address: Not Supported 00:26:25.360 SGL Offset: Supported 00:26:25.360 Transport SGL Data Block: Not Supported 00:26:25.360 Replay Protected Memory Block: Not Supported 00:26:25.360 00:26:25.360 Firmware Slot Information 00:26:25.360 ========================= 00:26:25.360 Active slot: 1 00:26:25.360 Slot 1 Firmware Revision: 24.05 00:26:25.360 00:26:25.360 00:26:25.360 Commands Supported and Effects 00:26:25.360 ============================== 00:26:25.360 Admin Commands 00:26:25.360 -------------- 00:26:25.360 Get Log Page (02h): Supported 00:26:25.360 Identify (06h): Supported 00:26:25.360 Abort (08h): Supported 00:26:25.360 Set Features (09h): Supported 00:26:25.360 Get Features (0Ah): Supported 00:26:25.360 Asynchronous Event Request (0Ch): Supported 00:26:25.360 Keep Alive (18h): Supported 00:26:25.360 I/O Commands 00:26:25.360 ------------ 00:26:25.360 Flush (00h): Supported LBA-Change 00:26:25.360 Write (01h): Supported LBA-Change 00:26:25.360 Read (02h): Supported 00:26:25.360 Compare (05h): Supported 00:26:25.360 Write Zeroes (08h): Supported LBA-Change 00:26:25.360 Dataset Management (09h): Supported LBA-Change 00:26:25.360 Copy (19h): Supported LBA-Change 00:26:25.360 Unknown (79h): Supported LBA-Change 00:26:25.360 Unknown (7Ah): Supported 00:26:25.360 00:26:25.360 Error Log 00:26:25.360 ========= 00:26:25.360 00:26:25.360 Arbitration 00:26:25.360 =========== 00:26:25.360 Arbitration Burst: 1 00:26:25.360 00:26:25.360 Power Management 00:26:25.360 ================ 00:26:25.360 Number of Power States: 1 00:26:25.360 Current Power State: Power State #0 00:26:25.360 Power State #0: 00:26:25.360 Max Power: 0.00 W 00:26:25.360 Non-Operational State: Operational 00:26:25.360 Entry Latency: Not Reported 00:26:25.360 Exit Latency: Not Reported 00:26:25.360 Relative Read Throughput: 0 00:26:25.360 Relative Read Latency: 0 00:26:25.360 Relative Write Throughput: 0 00:26:25.360 Relative Write Latency: 0 00:26:25.360 Idle Power: Not Reported 00:26:25.360 Active Power: Not Reported 00:26:25.360 Non-Operational Permissive Mode: Not Supported 00:26:25.360 00:26:25.360 Health Information 00:26:25.360 ================== 
00:26:25.360 Critical Warnings: 00:26:25.360 Available Spare Space: OK 00:26:25.360 Temperature: OK 00:26:25.360 Device Reliability: OK 00:26:25.360 Read Only: No 00:26:25.360 Volatile Memory Backup: OK 00:26:25.360 Current Temperature: 0 Kelvin (-273 Celsius) 00:26:25.360 Temperature Threshold: [2024-04-15 18:13:14.086618] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.360 [2024-04-15 18:13:14.086629] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x6631a0) 00:26:25.360 [2024-04-15 18:13:14.086640] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.360 [2024-04-15 18:13:14.086666] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bcc10, cid 7, qid 0 00:26:25.360 [2024-04-15 18:13:14.086884] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.360 [2024-04-15 18:13:14.086896] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.360 [2024-04-15 18:13:14.086903] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.360 [2024-04-15 18:13:14.086910] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bcc10) on tqpair=0x6631a0 00:26:25.360 [2024-04-15 18:13:14.086949] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:26:25.360 [2024-04-15 18:13:14.086970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.360 [2024-04-15 18:13:14.086981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.360 [2024-04-15 18:13:14.086991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.360 [2024-04-15 18:13:14.087000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.360 [2024-04-15 18:13:14.087012] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.360 [2024-04-15 18:13:14.087020] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.360 [2024-04-15 18:13:14.087027] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6631a0) 00:26:25.360 [2024-04-15 18:13:14.087052] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.360 [2024-04-15 18:13:14.087091] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bc690, cid 3, qid 0 00:26:25.360 [2024-04-15 18:13:14.087292] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.360 [2024-04-15 18:13:14.087304] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.360 [2024-04-15 18:13:14.087312] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.360 [2024-04-15 18:13:14.087319] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bc690) on tqpair=0x6631a0 00:26:25.360 [2024-04-15 18:13:14.087330] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.360 [2024-04-15 18:13:14.087338] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.360 [2024-04-15 18:13:14.087345] nvme_tcp.c: 
958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6631a0) 00:26:25.360 [2024-04-15 18:13:14.087355] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.360 [2024-04-15 18:13:14.087407] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bc690, cid 3, qid 0 00:26:25.360 [2024-04-15 18:13:14.087562] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.360 [2024-04-15 18:13:14.087576] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.360 [2024-04-15 18:13:14.087583] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.360 [2024-04-15 18:13:14.087590] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bc690) on tqpair=0x6631a0 00:26:25.360 [2024-04-15 18:13:14.087598] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:26:25.360 [2024-04-15 18:13:14.087606] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:26:25.360 [2024-04-15 18:13:14.087623] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.360 [2024-04-15 18:13:14.087632] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.360 [2024-04-15 18:13:14.087638] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6631a0) 00:26:25.360 [2024-04-15 18:13:14.087649] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.360 [2024-04-15 18:13:14.087669] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bc690, cid 3, qid 0 00:26:25.360 [2024-04-15 18:13:14.087839] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.360 [2024-04-15 18:13:14.087852] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.360 [2024-04-15 18:13:14.087859] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.360 [2024-04-15 18:13:14.087866] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bc690) on tqpair=0x6631a0 00:26:25.360 [2024-04-15 18:13:14.087882] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.360 [2024-04-15 18:13:14.087891] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.360 [2024-04-15 18:13:14.087898] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6631a0) 00:26:25.360 [2024-04-15 18:13:14.087908] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.360 [2024-04-15 18:13:14.087928] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bc690, cid 3, qid 0 00:26:25.360 [2024-04-15 18:13:14.088085] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.360 [2024-04-15 18:13:14.088101] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.360 [2024-04-15 18:13:14.088108] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.360 [2024-04-15 18:13:14.088116] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bc690) on tqpair=0x6631a0 00:26:25.360 [2024-04-15 18:13:14.088133] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.360 [2024-04-15 18:13:14.088143] nvme_tcp.c: 
949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.360 [2024-04-15 18:13:14.088150] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6631a0) 00:26:25.360 [2024-04-15 18:13:14.088160] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.360 [2024-04-15 18:13:14.088182] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bc690, cid 3, qid 0 00:26:25.360 [2024-04-15 18:13:14.088334] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.360 [2024-04-15 18:13:14.088364] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.360 [2024-04-15 18:13:14.088371] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.360 [2024-04-15 18:13:14.088378] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bc690) on tqpair=0x6631a0 00:26:25.360 [2024-04-15 18:13:14.088395] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.360 [2024-04-15 18:13:14.088404] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.360 [2024-04-15 18:13:14.088411] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6631a0) 00:26:25.360 [2024-04-15 18:13:14.088422] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.360 [2024-04-15 18:13:14.088443] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bc690, cid 3, qid 0 00:26:25.360 [2024-04-15 18:13:14.088573] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.360 [2024-04-15 18:13:14.088588] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.360 [2024-04-15 18:13:14.088594] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.360 [2024-04-15 18:13:14.088601] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bc690) on tqpair=0x6631a0 00:26:25.360 [2024-04-15 18:13:14.088617] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.360 [2024-04-15 18:13:14.088626] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.360 [2024-04-15 18:13:14.088633] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6631a0) 00:26:25.360 [2024-04-15 18:13:14.088644] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.360 [2024-04-15 18:13:14.088663] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bc690, cid 3, qid 0 00:26:25.360 [2024-04-15 18:13:14.088785] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.360 [2024-04-15 18:13:14.088798] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.360 [2024-04-15 18:13:14.088805] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.360 [2024-04-15 18:13:14.088812] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bc690) on tqpair=0x6631a0 00:26:25.360 [2024-04-15 18:13:14.088827] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.360 [2024-04-15 18:13:14.088836] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.360 [2024-04-15 18:13:14.088843] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6631a0) 00:26:25.360 
[2024-04-15 18:13:14.088853] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.360 [2024-04-15 18:13:14.088873] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bc690, cid 3, qid 0 00:26:25.360 [2024-04-15 18:13:14.089054] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.360 [2024-04-15 18:13:14.089076] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.360 [2024-04-15 18:13:14.089083] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.360 [2024-04-15 18:13:14.089090] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bc690) on tqpair=0x6631a0 00:26:25.360 [2024-04-15 18:13:14.089108] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.360 [2024-04-15 18:13:14.089117] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.360 [2024-04-15 18:13:14.089124] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6631a0) 00:26:25.360 [2024-04-15 18:13:14.089135] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.360 [2024-04-15 18:13:14.089156] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bc690, cid 3, qid 0 00:26:25.360 [2024-04-15 18:13:14.089326] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.360 [2024-04-15 18:13:14.089338] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.360 [2024-04-15 18:13:14.089345] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.360 [2024-04-15 18:13:14.089367] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bc690) on tqpair=0x6631a0 00:26:25.360 [2024-04-15 18:13:14.089383] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.360 [2024-04-15 18:13:14.089392] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.360 [2024-04-15 18:13:14.089399] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6631a0) 00:26:25.360 [2024-04-15 18:13:14.089409] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.360 [2024-04-15 18:13:14.089429] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bc690, cid 3, qid 0 00:26:25.360 [2024-04-15 18:13:14.089557] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.360 [2024-04-15 18:13:14.089571] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.360 [2024-04-15 18:13:14.089578] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.360 [2024-04-15 18:13:14.089585] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bc690) on tqpair=0x6631a0 00:26:25.360 [2024-04-15 18:13:14.089601] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.360 [2024-04-15 18:13:14.089610] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.360 [2024-04-15 18:13:14.089617] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6631a0) 00:26:25.360 [2024-04-15 18:13:14.089627] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.360 [2024-04-15 18:13:14.089648] 
nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bc690, cid 3, qid 0 00:26:25.360 [2024-04-15 18:13:14.089775] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.360 [2024-04-15 18:13:14.089787] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.360 [2024-04-15 18:13:14.089797] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.360 [2024-04-15 18:13:14.089804] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bc690) on tqpair=0x6631a0 00:26:25.360 [2024-04-15 18:13:14.089820] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.360 [2024-04-15 18:13:14.089829] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.360 [2024-04-15 18:13:14.089836] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6631a0) 00:26:25.360 [2024-04-15 18:13:14.089846] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.360 [2024-04-15 18:13:14.089866] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bc690, cid 3, qid 0 00:26:25.360 [2024-04-15 18:13:14.089981] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.360 [2024-04-15 18:13:14.089996] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.361 [2024-04-15 18:13:14.090003] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.361 [2024-04-15 18:13:14.090009] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bc690) on tqpair=0x6631a0 00:26:25.361 [2024-04-15 18:13:14.090026] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.361 [2024-04-15 18:13:14.090035] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.361 [2024-04-15 18:13:14.090056] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6631a0) 00:26:25.361 [2024-04-15 18:13:14.094099] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.361 [2024-04-15 18:13:14.094123] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bc690, cid 3, qid 0 00:26:25.361 [2024-04-15 18:13:14.094274] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.361 [2024-04-15 18:13:14.094287] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.361 [2024-04-15 18:13:14.094294] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.361 [2024-04-15 18:13:14.094301] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bc690) on tqpair=0x6631a0 00:26:25.361 [2024-04-15 18:13:14.094314] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds 00:26:25.361 0 Kelvin (-273 Celsius) 00:26:25.361 Available Spare: 0% 00:26:25.361 Available Spare Threshold: 0% 00:26:25.361 Life Percentage Used: 0% 00:26:25.361 Data Units Read: 0 00:26:25.361 Data Units Written: 0 00:26:25.361 Host Read Commands: 0 00:26:25.361 Host Write Commands: 0 00:26:25.361 Controller Busy Time: 0 minutes 00:26:25.361 Power Cycles: 0 00:26:25.361 Power On Hours: 0 hours 00:26:25.361 Unsafe Shutdowns: 0 00:26:25.361 Unrecoverable Media Errors: 0 00:26:25.361 Lifetime Error Log Entries: 0 00:26:25.361 Warning Temperature Time: 0 minutes 00:26:25.361 Critical Temperature 
Time: 0 minutes 00:26:25.361 00:26:25.361 Number of Queues 00:26:25.361 ================ 00:26:25.361 Number of I/O Submission Queues: 127 00:26:25.361 Number of I/O Completion Queues: 127 00:26:25.361 00:26:25.361 Active Namespaces 00:26:25.361 ================= 00:26:25.361 Namespace ID:1 00:26:25.361 Error Recovery Timeout: Unlimited 00:26:25.361 Command Set Identifier: NVM (00h) 00:26:25.361 Deallocate: Supported 00:26:25.361 Deallocated/Unwritten Error: Not Supported 00:26:25.361 Deallocated Read Value: Unknown 00:26:25.361 Deallocate in Write Zeroes: Not Supported 00:26:25.361 Deallocated Guard Field: 0xFFFF 00:26:25.361 Flush: Supported 00:26:25.361 Reservation: Supported 00:26:25.361 Namespace Sharing Capabilities: Multiple Controllers 00:26:25.361 Size (in LBAs): 131072 (0GiB) 00:26:25.361 Capacity (in LBAs): 131072 (0GiB) 00:26:25.361 Utilization (in LBAs): 131072 (0GiB) 00:26:25.361 NGUID: ABCDEF0123456789ABCDEF0123456789 00:26:25.361 EUI64: ABCDEF0123456789 00:26:25.361 UUID: 4d2eed9e-9523-4c1c-a653-26dfeb4faf4c 00:26:25.361 Thin Provisioning: Not Supported 00:26:25.361 Per-NS Atomic Units: Yes 00:26:25.361 Atomic Boundary Size (Normal): 0 00:26:25.361 Atomic Boundary Size (PFail): 0 00:26:25.361 Atomic Boundary Offset: 0 00:26:25.361 Maximum Single Source Range Length: 65535 00:26:25.361 Maximum Copy Length: 65535 00:26:25.361 Maximum Source Range Count: 1 00:26:25.361 NGUID/EUI64 Never Reused: No 00:26:25.361 Namespace Write Protected: No 00:26:25.361 Number of LBA Formats: 1 00:26:25.361 Current LBA Format: LBA Format #00 00:26:25.361 LBA Format #00: Data Size: 512 Metadata Size: 0 00:26:25.361 00:26:25.361 18:13:14 -- host/identify.sh@51 -- # sync 00:26:25.361 18:13:14 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:25.361 18:13:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:25.361 18:13:14 -- common/autotest_common.sh@10 -- # set +x 00:26:25.361 18:13:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:25.361 18:13:14 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:26:25.361 18:13:14 -- host/identify.sh@56 -- # nvmftestfini 00:26:25.361 18:13:14 -- nvmf/common.sh@477 -- # nvmfcleanup 00:26:25.361 18:13:14 -- nvmf/common.sh@117 -- # sync 00:26:25.361 18:13:14 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:25.361 18:13:14 -- nvmf/common.sh@120 -- # set +e 00:26:25.361 18:13:14 -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:25.361 18:13:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:25.361 rmmod nvme_tcp 00:26:25.361 rmmod nvme_fabrics 00:26:25.361 rmmod nvme_keyring 00:26:25.361 18:13:14 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:25.361 18:13:14 -- nvmf/common.sh@124 -- # set -e 00:26:25.361 18:13:14 -- nvmf/common.sh@125 -- # return 0 00:26:25.361 18:13:14 -- nvmf/common.sh@478 -- # '[' -n 3404990 ']' 00:26:25.361 18:13:14 -- nvmf/common.sh@479 -- # killprocess 3404990 00:26:25.361 18:13:14 -- common/autotest_common.sh@936 -- # '[' -z 3404990 ']' 00:26:25.361 18:13:14 -- common/autotest_common.sh@940 -- # kill -0 3404990 00:26:25.361 18:13:14 -- common/autotest_common.sh@941 -- # uname 00:26:25.361 18:13:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:25.361 18:13:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3404990 00:26:25.361 18:13:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:25.361 18:13:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:25.361 18:13:14 -- 
common/autotest_common.sh@954 -- # echo 'killing process with pid 3404990' 00:26:25.361 killing process with pid 3404990 00:26:25.361 18:13:14 -- common/autotest_common.sh@955 -- # kill 3404990 00:26:25.361 [2024-04-15 18:13:14.210314] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:26:25.361 18:13:14 -- common/autotest_common.sh@960 -- # wait 3404990 00:26:25.619 18:13:14 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:26:25.619 18:13:14 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:26:25.619 18:13:14 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:26:25.619 18:13:14 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:25.619 18:13:14 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:25.619 18:13:14 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:25.619 18:13:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:25.619 18:13:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:28.152 18:13:16 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:28.152 00:26:28.152 real 0m6.036s 00:26:28.152 user 0m4.770s 00:26:28.152 sys 0m2.408s 00:26:28.152 18:13:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:28.152 18:13:16 -- common/autotest_common.sh@10 -- # set +x 00:26:28.152 ************************************ 00:26:28.152 END TEST nvmf_identify 00:26:28.152 ************************************ 00:26:28.152 18:13:16 -- nvmf/nvmf.sh@96 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:26:28.152 18:13:16 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:26:28.152 18:13:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:28.152 18:13:16 -- common/autotest_common.sh@10 -- # set +x 00:26:28.152 ************************************ 00:26:28.152 START TEST nvmf_perf 00:26:28.152 ************************************ 00:26:28.152 18:13:16 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:26:28.152 * Looking for test storage... 
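The nvmftestfini teardown traced above condenses to a few steps; a minimal sketch, assuming $nvmfpid holds the target pid recorded at startup (3404990 in this run):

sync                                 # flush outstanding I/O first
for i in {1..20}; do                 # module removal can need retries while
    modprobe -v -r nvme-tcp && break # connections are still winding down
    sleep 1
done
modprobe -v -r nvme-fabrics
kill "$nvmfpid"                      # stop the SPDK target (reactor_0)
wait "$nvmfpid" 2>/dev/null || true  # reap it so the next test starts clean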
00:26:28.152 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:28.152 18:13:16 -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:28.152 18:13:16 -- nvmf/common.sh@7 -- # uname -s 00:26:28.152 18:13:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:28.152 18:13:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:28.152 18:13:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:28.152 18:13:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:28.152 18:13:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:28.152 18:13:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:28.152 18:13:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:28.152 18:13:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:28.152 18:13:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:28.152 18:13:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:28.152 18:13:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:26:28.152 18:13:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:26:28.152 18:13:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:28.152 18:13:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:28.152 18:13:16 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:28.152 18:13:16 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:28.152 18:13:16 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:28.152 18:13:16 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:28.152 18:13:16 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:28.152 18:13:16 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:28.152 18:13:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:28.152 18:13:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:28.152 18:13:16 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:28.152 18:13:16 -- paths/export.sh@5 -- # export PATH 00:26:28.152 18:13:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:28.152 18:13:16 -- nvmf/common.sh@47 -- # : 0 00:26:28.152 18:13:16 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:28.152 18:13:16 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:28.152 18:13:16 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:28.152 18:13:16 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:28.152 18:13:16 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:28.152 18:13:16 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:28.152 18:13:16 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:28.152 18:13:16 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:28.152 18:13:16 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:26:28.152 18:13:16 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:26:28.153 18:13:16 -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:28.153 18:13:16 -- host/perf.sh@17 -- # nvmftestinit 00:26:28.153 18:13:16 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:26:28.153 18:13:16 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:28.153 18:13:16 -- nvmf/common.sh@437 -- # prepare_net_devs 00:26:28.153 18:13:16 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:26:28.153 18:13:16 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:26:28.153 18:13:16 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:28.153 18:13:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:28.153 18:13:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:28.153 18:13:16 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:26:28.153 18:13:16 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:26:28.153 18:13:16 -- nvmf/common.sh@285 -- # xtrace_disable 00:26:28.153 18:13:16 -- common/autotest_common.sh@10 -- # set +x 00:26:30.055 18:13:18 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:30.055 18:13:18 -- nvmf/common.sh@291 -- # pci_devs=() 00:26:30.055 18:13:18 -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:30.055 18:13:18 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:30.055 18:13:18 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:30.055 18:13:18 -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:30.055 18:13:18 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:30.055 18:13:18 -- nvmf/common.sh@295 -- # net_devs=() 
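The nvmf/common.sh setup traced above builds the host identity used for later connects; a short sketch of that pattern (the uuidgen fallback is an assumption for machines without nvme-cli, not part of the script):

# Derive hostnqn/hostid as in the trace above; the fallback path is hypothetical.
NVME_HOSTNQN=$(nvme gen-hostnqn 2>/dev/null \
    || echo "nqn.2014-08.org.nvmexpress:uuid:$(uuidgen)")
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}   # bare UUID, e.g. cd6acfbe-4794-e311-a299-001e67a97b02
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")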
00:26:30.055 18:13:18 -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:30.055 18:13:18 -- nvmf/common.sh@296 -- # e810=() 00:26:30.055 18:13:18 -- nvmf/common.sh@296 -- # local -ga e810 00:26:30.055 18:13:18 -- nvmf/common.sh@297 -- # x722=() 00:26:30.055 18:13:18 -- nvmf/common.sh@297 -- # local -ga x722 00:26:30.055 18:13:18 -- nvmf/common.sh@298 -- # mlx=() 00:26:30.055 18:13:18 -- nvmf/common.sh@298 -- # local -ga mlx 00:26:30.055 18:13:18 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:30.055 18:13:18 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:30.055 18:13:18 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:30.055 18:13:18 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:30.055 18:13:18 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:30.055 18:13:18 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:30.055 18:13:18 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:30.055 18:13:18 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:30.055 18:13:18 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:30.055 18:13:18 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:30.055 18:13:18 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:30.055 18:13:18 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:30.056 18:13:18 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:30.056 18:13:18 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:30.056 18:13:18 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:30.056 18:13:18 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:30.056 18:13:18 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:30.056 18:13:18 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:30.056 18:13:18 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:26:30.056 Found 0000:84:00.0 (0x8086 - 0x159b) 00:26:30.056 18:13:18 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:30.056 18:13:18 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:30.056 18:13:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:30.056 18:13:18 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:30.056 18:13:18 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:30.056 18:13:18 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:30.056 18:13:18 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:26:30.056 Found 0000:84:00.1 (0x8086 - 0x159b) 00:26:30.056 18:13:18 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:30.056 18:13:18 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:30.056 18:13:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:30.056 18:13:18 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:30.056 18:13:18 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:30.056 18:13:18 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:30.056 18:13:18 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:30.056 18:13:18 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:30.056 18:13:18 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:30.056 18:13:18 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:30.056 18:13:18 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:26:30.056 18:13:18 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
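The gather_supported_nvmf_pci_devs walk above buckets NICs by PCI vendor:device pairs; a simplified sketch that reads sysfs directly instead of the script's cached pci_bus_cache (the IDs are the ones visible in the log, where 0x8086:0x159b marks an Intel E810 port):

# Bucket NICs by vendor:device, as the trace above does.
declare -a e810 x722 mlx
for dev in /sys/bus/pci/devices/*; do
    vendor=$(cat "$dev/vendor") device=$(cat "$dev/device")
    case "$vendor:$device" in
        0x8086:0x1592|0x8086:0x159b) e810+=("${dev##*/}") ;;  # Intel E810 (ice)
        0x8086:0x37d2)               x722+=("${dev##*/}") ;;  # Intel X722
        0x15b3:*)                    mlx+=("${dev##*/}")  ;;  # Mellanox ConnectX
    esac
done
echo "e810: ${e810[*]:-none}"   # this run finds 0000:84:00.0 and 0000:84:00.1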
00:26:30.056 18:13:19 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:26:30.056 Found net devices under 0000:84:00.0: cvl_0_0 00:26:30.056 18:13:19 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:26:30.056 18:13:19 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:30.056 18:13:19 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:30.056 18:13:19 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:26:30.056 18:13:19 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:30.056 18:13:19 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:26:30.056 Found net devices under 0000:84:00.1: cvl_0_1 00:26:30.056 18:13:19 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:26:30.056 18:13:19 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:26:30.056 18:13:19 -- nvmf/common.sh@403 -- # is_hw=yes 00:26:30.056 18:13:19 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:26:30.056 18:13:19 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:26:30.056 18:13:19 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:26:30.056 18:13:19 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:30.056 18:13:19 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:30.056 18:13:19 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:30.056 18:13:19 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:30.056 18:13:19 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:30.056 18:13:19 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:30.056 18:13:19 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:30.056 18:13:19 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:30.056 18:13:19 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:30.056 18:13:19 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:30.056 18:13:19 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:30.314 18:13:19 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:30.314 18:13:19 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:30.314 18:13:19 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:30.314 18:13:19 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:30.314 18:13:19 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:30.314 18:13:19 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:30.314 18:13:19 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:30.314 18:13:19 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:30.314 18:13:19 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:30.314 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:30.314 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.247 ms 00:26:30.314 00:26:30.314 --- 10.0.0.2 ping statistics --- 00:26:30.314 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:30.314 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:26:30.314 18:13:19 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:30.314 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:30.314 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:26:30.314 00:26:30.314 --- 10.0.0.1 ping statistics --- 00:26:30.314 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:30.314 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:26:30.314 18:13:19 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:30.314 18:13:19 -- nvmf/common.sh@411 -- # return 0 00:26:30.314 18:13:19 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:26:30.314 18:13:19 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:30.314 18:13:19 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:26:30.314 18:13:19 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:26:30.314 18:13:19 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:30.314 18:13:19 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:26:30.314 18:13:19 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:26:30.314 18:13:19 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:26:30.314 18:13:19 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:26:30.314 18:13:19 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:30.314 18:13:19 -- common/autotest_common.sh@10 -- # set +x 00:26:30.314 18:13:19 -- nvmf/common.sh@470 -- # nvmfpid=3407111 00:26:30.314 18:13:19 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:30.314 18:13:19 -- nvmf/common.sh@471 -- # waitforlisten 3407111 00:26:30.314 18:13:19 -- common/autotest_common.sh@817 -- # '[' -z 3407111 ']' 00:26:30.314 18:13:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:30.314 18:13:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:30.314 18:13:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:30.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:30.314 18:13:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:30.314 18:13:19 -- common/autotest_common.sh@10 -- # set +x 00:26:30.314 [2024-04-15 18:13:19.235589] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:26:30.314 [2024-04-15 18:13:19.235678] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:30.572 EAL: No free 2048 kB hugepages reported on node 1 00:26:30.572 [2024-04-15 18:13:19.310716] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:30.572 [2024-04-15 18:13:19.395806] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:30.572 [2024-04-15 18:13:19.395875] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:30.572 [2024-04-15 18:13:19.395889] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:30.572 [2024-04-15 18:13:19.395901] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:30.572 [2024-04-15 18:13:19.395911] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
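nvmfappstart, whose trace continues above, amounts to launching nvmf_tgt inside the namespace and waiting on its RPC socket; a condensed sketch (using rpc_get_methods as the readiness probe, which is how waitforlisten behaves; paths shortened):

# Start the target in the netns with the same shm id, trace and core masks.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# waitforlisten equivalent: poll the RPC socket until the app answers.
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" || exit 1   # bail out if the target died during init
    sleep 0.5
done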
00:26:30.572 [2024-04-15 18:13:19.395993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:30.572 [2024-04-15 18:13:19.396017] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:30.572 [2024-04-15 18:13:19.396080] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:30.572 [2024-04-15 18:13:19.396083] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:30.572 18:13:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:30.572 18:13:19 -- common/autotest_common.sh@850 -- # return 0 00:26:30.572 18:13:19 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:26:30.572 18:13:19 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:30.572 18:13:19 -- common/autotest_common.sh@10 -- # set +x 00:26:30.829 18:13:19 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:30.829 18:13:19 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:26:30.829 18:13:19 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:26:34.113 18:13:22 -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:26:34.113 18:13:22 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:26:34.371 18:13:23 -- host/perf.sh@30 -- # local_nvme_trid=0000:82:00.0 00:26:34.371 18:13:23 -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:26:34.938 18:13:23 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:26:34.938 18:13:23 -- host/perf.sh@33 -- # '[' -n 0000:82:00.0 ']' 00:26:34.938 18:13:23 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:26:34.938 18:13:23 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:26:34.938 18:13:23 -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:26:35.195 [2024-04-15 18:13:23.908100] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:35.195 18:13:23 -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:35.469 18:13:24 -- host/perf.sh@45 -- # for bdev in $bdevs 00:26:35.469 18:13:24 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:35.746 18:13:24 -- host/perf.sh@45 -- # for bdev in $bdevs 00:26:35.746 18:13:24 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:26:36.312 18:13:25 -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:36.878 [2024-04-15 18:13:25.582188] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:36.878 18:13:25 -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:37.136 18:13:25 -- host/perf.sh@52 -- # '[' -n 0000:82:00.0 ']' 00:26:37.136 18:13:25 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:82:00.0' 00:26:37.136 18:13:25 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 
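Everything the rpc.py calls above configure on the target side reduces to one bdev, one transport, one subsystem, two namespaces, and two listeners; a sketch with the rpc.py path shortened (bdev names, NQN and the 10.0.0.2:4420 listener are taken from the log):

rpc=scripts/rpc.py
$rpc bdev_malloc_create 64 512                       # 64 MiB ramdisk, 512 B blocks -> Malloc0
$rpc nvmf_create_transport -t tcp -o                 # TCP transport, as traced above
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # becomes NSID 1
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1   # local NVMe, NSID 2
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420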
00:26:37.136 18:13:25 -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:82:00.0' 00:26:38.510 Initializing NVMe Controllers 00:26:38.510 Attached to NVMe Controller at 0000:82:00.0 [8086:0a54] 00:26:38.510 Associating PCIE (0000:82:00.0) NSID 1 with lcore 0 00:26:38.510 Initialization complete. Launching workers. 00:26:38.510 ======================================================== 00:26:38.510 Latency(us) 00:26:38.510 Device Information : IOPS MiB/s Average min max 00:26:38.510 PCIE (0000:82:00.0) NSID 1 from core 0: 84875.71 331.55 376.43 11.72 5260.86 00:26:38.510 ======================================================== 00:26:38.510 Total : 84875.71 331.55 376.43 11.72 5260.86 00:26:38.510 00:26:38.510 18:13:27 -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:38.510 EAL: No free 2048 kB hugepages reported on node 1 00:26:39.887 Initializing NVMe Controllers 00:26:39.887 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:39.887 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:39.887 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:39.887 Initialization complete. Launching workers. 00:26:39.887 ======================================================== 00:26:39.887 Latency(us) 00:26:39.887 Device Information : IOPS MiB/s Average min max 00:26:39.887 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 91.77 0.36 10983.65 194.79 45896.17 00:26:39.887 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 53.86 0.21 19008.09 6793.72 48719.40 00:26:39.887 ======================================================== 00:26:39.887 Total : 145.63 0.57 13951.60 194.79 48719.40 00:26:39.887 00:26:39.887 18:13:28 -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:39.887 EAL: No free 2048 kB hugepages reported on node 1 00:26:41.261 Initializing NVMe Controllers 00:26:41.261 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:41.261 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:41.261 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:41.261 Initialization complete. Launching workers. 
00:26:41.261 ======================================================== 00:26:41.261 Latency(us) 00:26:41.261 Device Information : IOPS MiB/s Average min max 00:26:41.261 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8330.34 32.54 3844.12 540.72 7717.10 00:26:41.261 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3843.16 15.01 8369.45 6211.71 16239.41 00:26:41.261 ======================================================== 00:26:41.261 Total : 12173.51 47.55 5272.76 540.72 16239.41 00:26:41.261 00:26:41.261 18:13:29 -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:26:41.261 18:13:29 -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:26:41.261 18:13:29 -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:41.261 EAL: No free 2048 kB hugepages reported on node 1 00:26:43.792 Initializing NVMe Controllers 00:26:43.792 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:43.792 Controller IO queue size 128, less than required. 00:26:43.792 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:43.792 Controller IO queue size 128, less than required. 00:26:43.792 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:43.792 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:43.792 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:43.792 Initialization complete. Launching workers. 00:26:43.792 ======================================================== 00:26:43.792 Latency(us) 00:26:43.792 Device Information : IOPS MiB/s Average min max 00:26:43.792 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1088.70 272.18 120215.92 79773.51 171943.66 00:26:43.792 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 600.01 150.00 224189.17 143361.80 321562.86 00:26:43.792 ======================================================== 00:26:43.792 Total : 1688.71 422.18 157158.23 79773.51 321562.86 00:26:43.792 00:26:43.792 18:13:32 -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:26:43.792 EAL: No free 2048 kB hugepages reported on node 1 00:26:43.792 No valid NVMe controllers or AIO or URING devices found 00:26:43.792 Initializing NVMe Controllers 00:26:43.792 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:43.792 Controller IO queue size 128, less than required. 00:26:43.792 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:43.792 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:26:43.792 Controller IO queue size 128, less than required. 00:26:43.792 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:43.792 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:26:43.792 WARNING: Some requested NVMe devices were skipped 00:26:43.792 18:13:32 -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:26:44.050 EAL: No free 2048 kB hugepages reported on node 1 00:26:46.580 Initializing NVMe Controllers 00:26:46.580 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:46.580 Controller IO queue size 128, less than required. 00:26:46.580 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:46.580 Controller IO queue size 128, less than required. 00:26:46.580 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:46.580 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:46.580 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:46.580 Initialization complete. Launching workers. 00:26:46.580 00:26:46.580 ==================== 00:26:46.580 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:26:46.580 TCP transport: 00:26:46.580 polls: 18249 00:26:46.580 idle_polls: 8637 00:26:46.580 sock_completions: 9612 00:26:46.580 nvme_completions: 4631 00:26:46.580 submitted_requests: 6922 00:26:46.580 queued_requests: 1 00:26:46.580 00:26:46.580 ==================== 00:26:46.580 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:26:46.580 TCP transport: 00:26:46.580 polls: 18839 00:26:46.580 idle_polls: 8961 00:26:46.580 sock_completions: 9878 00:26:46.580 nvme_completions: 4747 00:26:46.580 submitted_requests: 7144 00:26:46.580 queued_requests: 1 00:26:46.580 ======================================================== 00:26:46.580 Latency(us) 00:26:46.580 Device Information : IOPS MiB/s Average min max 00:26:46.580 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1157.34 289.34 113303.62 69906.10 158473.69 00:26:46.580 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1186.34 296.59 109845.49 49295.16 142628.37 00:26:46.580 ======================================================== 00:26:46.580 Total : 2343.69 585.92 111553.16 49295.16 158473.69 00:26:46.580 00:26:46.580 18:13:35 -- host/perf.sh@66 -- # sync 00:26:46.580 18:13:35 -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:46.839 18:13:35 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:26:46.839 18:13:35 -- host/perf.sh@71 -- # '[' -n 0000:82:00.0 ']' 00:26:46.839 18:13:35 -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:26:50.131 18:13:38 -- host/perf.sh@72 -- # ls_guid=433a8ebc-c11d-4a7c-b3c7-ba4405e0b731 00:26:50.131 18:13:38 -- host/perf.sh@73 -- # get_lvs_free_mb 433a8ebc-c11d-4a7c-b3c7-ba4405e0b731 00:26:50.131 18:13:38 -- common/autotest_common.sh@1350 -- # local lvs_uuid=433a8ebc-c11d-4a7c-b3c7-ba4405e0b731 00:26:50.131 18:13:38 -- common/autotest_common.sh@1351 -- # local lvs_info 00:26:50.131 18:13:38 -- common/autotest_common.sh@1352 -- # local fc 00:26:50.131 18:13:38 -- common/autotest_common.sh@1353 -- # local cs 00:26:50.131 18:13:38 -- common/autotest_common.sh@1354 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:50.390 18:13:39 -- common/autotest_common.sh@1354 -- # lvs_info='[ 00:26:50.390 { 00:26:50.390 "uuid": "433a8ebc-c11d-4a7c-b3c7-ba4405e0b731", 00:26:50.390 "name": "lvs_0", 00:26:50.390 "base_bdev": "Nvme0n1", 00:26:50.390 "total_data_clusters": 238234, 00:26:50.390 "free_clusters": 238234, 00:26:50.390 "block_size": 512, 00:26:50.390 "cluster_size": 4194304 00:26:50.390 } 00:26:50.390 ]' 00:26:50.390 18:13:39 -- common/autotest_common.sh@1355 -- # jq '.[] | select(.uuid=="433a8ebc-c11d-4a7c-b3c7-ba4405e0b731") .free_clusters' 00:26:50.390 18:13:39 -- common/autotest_common.sh@1355 -- # fc=238234 00:26:50.390 18:13:39 -- common/autotest_common.sh@1356 -- # jq '.[] | select(.uuid=="433a8ebc-c11d-4a7c-b3c7-ba4405e0b731") .cluster_size' 00:26:50.390 18:13:39 -- common/autotest_common.sh@1356 -- # cs=4194304 00:26:50.390 18:13:39 -- common/autotest_common.sh@1359 -- # free_mb=952936 00:26:50.390 18:13:39 -- common/autotest_common.sh@1360 -- # echo 952936 00:26:50.390 952936 00:26:50.390 18:13:39 -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:26:50.390 18:13:39 -- host/perf.sh@78 -- # free_mb=20480 00:26:50.391 18:13:39 -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 433a8ebc-c11d-4a7c-b3c7-ba4405e0b731 lbd_0 20480 00:26:51.328 18:13:40 -- host/perf.sh@80 -- # lb_guid=ce2c4e8e-b249-40d4-9d56-a2d50cb695a7 00:26:51.328 18:13:40 -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore ce2c4e8e-b249-40d4-9d56-a2d50cb695a7 lvs_n_0 00:26:52.267 18:13:41 -- host/perf.sh@83 -- # ls_nested_guid=ec05b852-2255-4252-838b-acc734b25c33 00:26:52.267 18:13:41 -- host/perf.sh@84 -- # get_lvs_free_mb ec05b852-2255-4252-838b-acc734b25c33 00:26:52.267 18:13:41 -- common/autotest_common.sh@1350 -- # local lvs_uuid=ec05b852-2255-4252-838b-acc734b25c33 00:26:52.267 18:13:41 -- common/autotest_common.sh@1351 -- # local lvs_info 00:26:52.267 18:13:41 -- common/autotest_common.sh@1352 -- # local fc 00:26:52.267 18:13:41 -- common/autotest_common.sh@1353 -- # local cs 00:26:52.267 18:13:41 -- common/autotest_common.sh@1354 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:52.835 18:13:41 -- common/autotest_common.sh@1354 -- # lvs_info='[ 00:26:52.835 { 00:26:52.835 "uuid": "433a8ebc-c11d-4a7c-b3c7-ba4405e0b731", 00:26:52.835 "name": "lvs_0", 00:26:52.835 "base_bdev": "Nvme0n1", 00:26:52.835 "total_data_clusters": 238234, 00:26:52.835 "free_clusters": 233114, 00:26:52.835 "block_size": 512, 00:26:52.835 "cluster_size": 4194304 00:26:52.835 }, 00:26:52.835 { 00:26:52.835 "uuid": "ec05b852-2255-4252-838b-acc734b25c33", 00:26:52.835 "name": "lvs_n_0", 00:26:52.835 "base_bdev": "ce2c4e8e-b249-40d4-9d56-a2d50cb695a7", 00:26:52.835 "total_data_clusters": 5114, 00:26:52.835 "free_clusters": 5114, 00:26:52.835 "block_size": 512, 00:26:52.835 "cluster_size": 4194304 00:26:52.835 } 00:26:52.835 ]' 00:26:52.836 18:13:41 -- common/autotest_common.sh@1355 -- # jq '.[] | select(.uuid=="ec05b852-2255-4252-838b-acc734b25c33") .free_clusters' 00:26:52.836 18:13:41 -- common/autotest_common.sh@1355 -- # fc=5114 00:26:52.836 18:13:41 -- common/autotest_common.sh@1356 -- # jq '.[] | select(.uuid=="ec05b852-2255-4252-838b-acc734b25c33") .cluster_size' 00:26:52.836 18:13:41 -- common/autotest_common.sh@1356 -- # cs=4194304 00:26:52.836 18:13:41 -- common/autotest_common.sh@1359 -- # 
free_mb=20456 00:26:52.836 18:13:41 -- common/autotest_common.sh@1360 -- # echo 20456 00:26:52.836 20456 00:26:52.836 18:13:41 -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:26:52.836 18:13:41 -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u ec05b852-2255-4252-838b-acc734b25c33 lbd_nest_0 20456 00:26:53.403 18:13:42 -- host/perf.sh@88 -- # lb_nested_guid=e0dc17e3-e31d-4a9f-9751-1c238fbc5510 00:26:53.403 18:13:42 -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:53.662 18:13:42 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:26:53.662 18:13:42 -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 e0dc17e3-e31d-4a9f-9751-1c238fbc5510 00:26:53.920 18:13:42 -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:54.527 18:13:43 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:26:54.527 18:13:43 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:26:54.527 18:13:43 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:26:54.527 18:13:43 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:54.527 18:13:43 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:54.527 EAL: No free 2048 kB hugepages reported on node 1 00:27:06.748 Initializing NVMe Controllers 00:27:06.748 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:06.748 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:06.748 Initialization complete. Launching workers. 00:27:06.748 ======================================================== 00:27:06.748 Latency(us) 00:27:06.748 Device Information : IOPS MiB/s Average min max 00:27:06.748 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 47.20 0.02 21253.14 211.55 46002.96 00:27:06.748 ======================================================== 00:27:06.749 Total : 47.20 0.02 21253.14 211.55 46002.96 00:27:06.749 00:27:06.749 18:13:53 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:27:06.749 18:13:53 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:06.749 EAL: No free 2048 kB hugepages reported on node 1 00:27:16.738 Initializing NVMe Controllers 00:27:16.738 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:16.738 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:16.738 Initialization complete. Launching workers. 
00:27:16.738 ======================================================== 00:27:16.738 Latency(us) 00:27:16.738 Device Information : IOPS MiB/s Average min max 00:27:16.738 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 82.30 10.29 12167.72 6988.30 47884.77 00:27:16.738 ======================================================== 00:27:16.738 Total : 82.30 10.29 12167.72 6988.30 47884.77 00:27:16.738 00:27:16.738 18:14:03 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:27:16.738 18:14:03 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:27:16.738 18:14:03 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:16.738 EAL: No free 2048 kB hugepages reported on node 1 00:27:26.723 Initializing NVMe Controllers 00:27:26.723 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:26.723 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:26.723 Initialization complete. Launching workers. 00:27:26.723 ======================================================== 00:27:26.723 Latency(us) 00:27:26.723 Device Information : IOPS MiB/s Average min max 00:27:26.723 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6985.82 3.41 4581.27 244.48 12337.09 00:27:26.723 ======================================================== 00:27:26.723 Total : 6985.82 3.41 4581.27 244.48 12337.09 00:27:26.723 00:27:26.723 18:14:14 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:27:26.723 18:14:14 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:26.723 EAL: No free 2048 kB hugepages reported on node 1 00:27:36.708 Initializing NVMe Controllers 00:27:36.708 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:36.708 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:36.708 Initialization complete. Launching workers. 00:27:36.708 ======================================================== 00:27:36.709 Latency(us) 00:27:36.709 Device Information : IOPS MiB/s Average min max 00:27:36.709 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1581.51 197.69 20255.78 1682.56 46332.92 00:27:36.709 ======================================================== 00:27:36.709 Total : 1581.51 197.69 20255.78 1682.56 46332.92 00:27:36.709 00:27:36.709 18:14:24 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:27:36.709 18:14:24 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:27:36.709 18:14:24 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:36.709 EAL: No free 2048 kB hugepages reported on node 1 00:27:46.704 Initializing NVMe Controllers 00:27:46.704 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:46.704 Controller IO queue size 128, less than required. 00:27:46.705 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:46.705 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:46.705 Initialization complete. Launching workers. 
00:27:46.705 ======================================================== 00:27:46.705 Latency(us) 00:27:46.705 Device Information : IOPS MiB/s Average min max 00:27:46.705 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11935.33 5.83 10727.89 1650.51 23698.85 00:27:46.705 ======================================================== 00:27:46.705 Total : 11935.33 5.83 10727.89 1650.51 23698.85 00:27:46.705 00:27:46.705 18:14:34 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:27:46.705 18:14:34 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:46.705 EAL: No free 2048 kB hugepages reported on node 1 00:27:56.681 Initializing NVMe Controllers 00:27:56.681 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:56.681 Controller IO queue size 128, less than required. 00:27:56.681 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:56.681 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:56.681 Initialization complete. Launching workers. 00:27:56.681 ======================================================== 00:27:56.681 Latency(us) 00:27:56.681 Device Information : IOPS MiB/s Average min max 00:27:56.681 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1185.21 148.15 108692.24 22841.53 237659.95 00:27:56.681 ======================================================== 00:27:56.681 Total : 1185.21 148.15 108692.24 22841.53 237659.95 00:27:56.681 00:27:56.681 18:14:45 -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:56.681 18:14:45 -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e0dc17e3-e31d-4a9f-9751-1c238fbc5510 00:27:57.617 18:14:46 -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:27:58.186 18:14:47 -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete ce2c4e8e-b249-40d4-9d56-a2d50cb695a7 00:27:58.755 18:14:47 -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:27:59.325 18:14:48 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:27:59.325 18:14:48 -- host/perf.sh@114 -- # nvmftestfini 00:27:59.325 18:14:48 -- nvmf/common.sh@477 -- # nvmfcleanup 00:27:59.325 18:14:48 -- nvmf/common.sh@117 -- # sync 00:27:59.325 18:14:48 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:59.325 18:14:48 -- nvmf/common.sh@120 -- # set +e 00:27:59.325 18:14:48 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:59.325 18:14:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:59.325 rmmod nvme_tcp 00:27:59.325 rmmod nvme_fabrics 00:27:59.325 rmmod nvme_keyring 00:27:59.325 18:14:48 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:59.325 18:14:48 -- nvmf/common.sh@124 -- # set -e 00:27:59.325 18:14:48 -- nvmf/common.sh@125 -- # return 0 00:27:59.325 18:14:48 -- nvmf/common.sh@478 -- # '[' -n 3407111 ']' 00:27:59.325 18:14:48 -- nvmf/common.sh@479 -- # killprocess 3407111 00:27:59.325 18:14:48 -- common/autotest_common.sh@936 -- # '[' -z 3407111 ']' 00:27:59.325 18:14:48 -- common/autotest_common.sh@940 -- # 
kill -0 3407111 00:27:59.325 18:14:48 -- common/autotest_common.sh@941 -- # uname 00:27:59.325 18:14:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:59.325 18:14:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3407111 00:27:59.325 18:14:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:59.325 18:14:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:59.325 18:14:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3407111' 00:27:59.325 killing process with pid 3407111 00:27:59.325 18:14:48 -- common/autotest_common.sh@955 -- # kill 3407111 00:27:59.325 18:14:48 -- common/autotest_common.sh@960 -- # wait 3407111 00:28:01.233 18:14:49 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:28:01.233 18:14:49 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:28:01.233 18:14:49 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:28:01.233 18:14:49 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:01.233 18:14:49 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:01.233 18:14:49 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:01.233 18:14:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:01.233 18:14:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:03.141 18:14:51 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:03.141 00:28:03.141 real 1m35.167s 00:28:03.141 user 5m54.637s 00:28:03.141 sys 0m17.537s 00:28:03.141 18:14:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:03.141 18:14:51 -- common/autotest_common.sh@10 -- # set +x 00:28:03.141 ************************************ 00:28:03.141 END TEST nvmf_perf 00:28:03.141 ************************************ 00:28:03.141 18:14:51 -- nvmf/nvmf.sh@97 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:28:03.141 18:14:51 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:28:03.141 18:14:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:03.141 18:14:51 -- common/autotest_common.sh@10 -- # set +x 00:28:03.141 ************************************ 00:28:03.141 START TEST nvmf_fio_host 00:28:03.141 ************************************ 00:28:03.141 18:14:51 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:28:03.141 * Looking for test storage... 
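One detail worth noting in the perf teardown just above: deletions run strictly inside-out, each lvol before its store and the nested store (lvs_n_0, built on lbd_0) before the base store. The earlier get_lvs_free_mb sizing is plain cluster arithmetic: 238234 free clusters x 4 MiB = 952936 MB for lvs_0, and 5114 x 4 MiB = 20456 MB for lvs_n_0. The order, sketched with this run's UUIDs:

rpc=scripts/rpc.py
$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1        # stop I/O first
$rpc bdev_lvol_delete e0dc17e3-e31d-4a9f-9751-1c238fbc5510   # lbd_nest_0
$rpc bdev_lvol_delete_lvstore -l lvs_n_0                     # nested store on lbd_0
$rpc bdev_lvol_delete ce2c4e8e-b249-40d4-9d56-a2d50cb695a7   # lbd_0
$rpc bdev_lvol_delete_lvstore -l lvs_0                       # base store on Nvme0n1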
00:28:03.141 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:03.141 18:14:52 -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:03.141 18:14:52 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:03.141 18:14:52 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:03.141 18:14:52 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:03.141 18:14:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.141 18:14:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.141 18:14:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.141 18:14:52 -- paths/export.sh@5 -- # export PATH 00:28:03.141 18:14:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.141 18:14:52 -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:03.141 18:14:52 -- nvmf/common.sh@7 -- # uname -s 00:28:03.141 18:14:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:03.141 18:14:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:03.141 18:14:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:03.141 18:14:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:03.141 18:14:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:03.141 18:14:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:03.141 18:14:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:03.141 18:14:52 -- 
nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:03.141 18:14:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:03.141 18:14:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:03.141 18:14:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:28:03.141 18:14:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:28:03.141 18:14:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:03.141 18:14:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:03.141 18:14:52 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:03.141 18:14:52 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:03.141 18:14:52 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:03.141 18:14:52 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:03.141 18:14:52 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:03.141 18:14:52 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:03.141 18:14:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.141 18:14:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.141 18:14:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.141 18:14:52 -- paths/export.sh@5 -- # export PATH 00:28:03.141 18:14:52 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.141 18:14:52 -- nvmf/common.sh@47 -- # : 0 00:28:03.141 18:14:52 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:03.141 18:14:52 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:03.141 18:14:52 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:03.141 18:14:52 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:03.141 18:14:52 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:03.141 18:14:52 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:03.141 18:14:52 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:03.141 18:14:52 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:03.141 18:14:52 -- host/fio.sh@12 -- # nvmftestinit 00:28:03.141 18:14:52 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:28:03.141 18:14:52 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:03.141 18:14:52 -- nvmf/common.sh@437 -- # prepare_net_devs 00:28:03.141 18:14:52 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:28:03.141 18:14:52 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:28:03.141 18:14:52 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:03.141 18:14:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:03.141 18:14:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:03.141 18:14:52 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:28:03.141 18:14:52 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:28:03.141 18:14:52 -- nvmf/common.sh@285 -- # xtrace_disable 00:28:03.141 18:14:52 -- common/autotest_common.sh@10 -- # set +x 00:28:05.684 18:14:54 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:05.684 18:14:54 -- nvmf/common.sh@291 -- # pci_devs=() 00:28:05.684 18:14:54 -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:05.684 18:14:54 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:05.684 18:14:54 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:05.684 18:14:54 -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:05.684 18:14:54 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:05.684 18:14:54 -- nvmf/common.sh@295 -- # net_devs=() 00:28:05.684 18:14:54 -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:05.684 18:14:54 -- nvmf/common.sh@296 -- # e810=() 00:28:05.684 18:14:54 -- nvmf/common.sh@296 -- # local -ga e810 00:28:05.685 18:14:54 -- nvmf/common.sh@297 -- # x722=() 00:28:05.685 18:14:54 -- nvmf/common.sh@297 -- # local -ga x722 00:28:05.685 18:14:54 -- nvmf/common.sh@298 -- # mlx=() 00:28:05.685 18:14:54 -- nvmf/common.sh@298 -- # local -ga mlx 00:28:05.685 18:14:54 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:05.685 18:14:54 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:05.685 18:14:54 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:05.685 18:14:54 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:05.685 18:14:54 -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:05.685 18:14:54 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:05.685 18:14:54 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:05.685 18:14:54 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:05.685 18:14:54 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:05.685 18:14:54 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:05.685 18:14:54 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:05.685 18:14:54 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:05.685 18:14:54 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:05.685 18:14:54 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:05.685 18:14:54 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:05.685 18:14:54 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:05.685 18:14:54 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:05.685 18:14:54 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:05.685 18:14:54 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:28:05.685 Found 0000:84:00.0 (0x8086 - 0x159b) 00:28:05.685 18:14:54 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:05.685 18:14:54 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:05.685 18:14:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:05.685 18:14:54 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:05.685 18:14:54 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:05.685 18:14:54 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:05.685 18:14:54 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:28:05.685 Found 0000:84:00.1 (0x8086 - 0x159b) 00:28:05.685 18:14:54 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:05.685 18:14:54 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:05.685 18:14:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:05.685 18:14:54 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:05.685 18:14:54 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:05.685 18:14:54 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:05.685 18:14:54 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:05.685 18:14:54 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:05.685 18:14:54 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:05.685 18:14:54 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:05.685 18:14:54 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:28:05.685 18:14:54 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:05.685 18:14:54 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:28:05.685 Found net devices under 0000:84:00.0: cvl_0_0 00:28:05.685 18:14:54 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:28:05.685 18:14:54 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:05.685 18:14:54 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:05.685 18:14:54 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:28:05.685 18:14:54 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:05.685 18:14:54 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:28:05.685 Found net devices under 0000:84:00.1: cvl_0_1 00:28:05.685 18:14:54 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:28:05.685 18:14:54 -- 
nvmf/common.sh@393 -- # (( 2 == 0 )) 00:28:05.685 18:14:54 -- nvmf/common.sh@403 -- # is_hw=yes 00:28:05.685 18:14:54 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:28:05.685 18:14:54 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:28:05.685 18:14:54 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:28:05.685 18:14:54 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:05.685 18:14:54 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:05.685 18:14:54 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:05.685 18:14:54 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:05.685 18:14:54 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:05.685 18:14:54 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:05.685 18:14:54 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:05.685 18:14:54 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:05.685 18:14:54 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:05.685 18:14:54 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:05.685 18:14:54 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:05.685 18:14:54 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:05.685 18:14:54 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:05.685 18:14:54 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:05.685 18:14:54 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:05.685 18:14:54 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:05.685 18:14:54 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:05.685 18:14:54 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:05.685 18:14:54 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:05.685 18:14:54 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:05.685 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:05.685 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.159 ms 00:28:05.685 00:28:05.685 --- 10.0.0.2 ping statistics --- 00:28:05.685 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:05.685 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:28:05.685 18:14:54 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:05.685 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:05.685 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.067 ms 00:28:05.685 00:28:05.685 --- 10.0.0.1 ping statistics --- 00:28:05.685 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:05.685 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:28:05.685 18:14:54 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:05.685 18:14:54 -- nvmf/common.sh@411 -- # return 0 00:28:05.685 18:14:54 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:28:05.685 18:14:54 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:05.685 18:14:54 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:28:05.685 18:14:54 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:28:05.685 18:14:54 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:05.685 18:14:54 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:28:05.685 18:14:54 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:28:05.685 18:14:54 -- host/fio.sh@14 -- # [[ y != y ]] 00:28:05.685 18:14:54 -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 00:28:05.685 18:14:54 -- common/autotest_common.sh@710 -- # xtrace_disable 00:28:05.685 18:14:54 -- common/autotest_common.sh@10 -- # set +x 00:28:05.685 18:14:54 -- host/fio.sh@22 -- # nvmfpid=3419434 00:28:05.685 18:14:54 -- host/fio.sh@21 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:05.685 18:14:54 -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:05.685 18:14:54 -- host/fio.sh@26 -- # waitforlisten 3419434 00:28:05.685 18:14:54 -- common/autotest_common.sh@817 -- # '[' -z 3419434 ']' 00:28:05.685 18:14:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:05.685 18:14:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:05.685 18:14:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:05.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:05.685 18:14:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:05.685 18:14:54 -- common/autotest_common.sh@10 -- # set +x 00:28:05.685 [2024-04-15 18:14:54.430983] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:28:05.685 [2024-04-15 18:14:54.431078] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:05.685 EAL: No free 2048 kB hugepages reported on node 1 00:28:05.685 [2024-04-15 18:14:54.506629] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:05.685 [2024-04-15 18:14:54.600469] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:05.685 [2024-04-15 18:14:54.600531] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:05.685 [2024-04-15 18:14:54.600548] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:05.685 [2024-04-15 18:14:54.600562] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:05.685 [2024-04-15 18:14:54.600575] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
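What the nvmf_tcp_init trace above amounts to, condensed into a standalone sketch (interface names, addresses, and binary paths exactly as they appear in this log; assumes root and the same dual-port E810 box): one port is moved into a private network namespace to play the target side, its peer stays in the root namespace as the initiator, both get 10.0.0.0/24 addresses, the NVMe/TCP port is opened, reachability is verified in both directions, and nvmf_tgt is started inside the namespace.

#!/usr/bin/env bash
# Hedged sketch of the netns bring-up logged above; names taken from this log.
set -e
NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                           # target-side port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
ping -c 1 10.0.0.2                                        # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1                    # target -> initiator
modprobe nvme-tcp                                         # host-side NVMe/TCP driver
ip netns exec "$NS" \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -m 0xF &                               # SPDK target, cores 0-3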
00:28:05.685 [2024-04-15 18:14:54.600649] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:05.685 [2024-04-15 18:14:54.600730] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:05.685 [2024-04-15 18:14:54.600733] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:05.685 [2024-04-15 18:14:54.600681] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:05.943 18:14:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:05.943 18:14:54 -- common/autotest_common.sh@850 -- # return 0 00:28:05.943 18:14:54 -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:05.943 18:14:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:05.943 18:14:54 -- common/autotest_common.sh@10 -- # set +x 00:28:05.943 [2024-04-15 18:14:54.731735] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:05.943 18:14:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:05.943 18:14:54 -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:28:05.943 18:14:54 -- common/autotest_common.sh@716 -- # xtrace_disable 00:28:05.943 18:14:54 -- common/autotest_common.sh@10 -- # set +x 00:28:05.943 18:14:54 -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:05.943 18:14:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:05.943 18:14:54 -- common/autotest_common.sh@10 -- # set +x 00:28:05.943 Malloc1 00:28:05.943 18:14:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:05.943 18:14:54 -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:05.943 18:14:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:05.943 18:14:54 -- common/autotest_common.sh@10 -- # set +x 00:28:05.943 18:14:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:05.943 18:14:54 -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:05.943 18:14:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:05.943 18:14:54 -- common/autotest_common.sh@10 -- # set +x 00:28:05.943 18:14:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:05.944 18:14:54 -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:05.944 18:14:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:05.944 18:14:54 -- common/autotest_common.sh@10 -- # set +x 00:28:05.944 [2024-04-15 18:14:54.803818] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:05.944 18:14:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:05.944 18:14:54 -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:05.944 18:14:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:05.944 18:14:54 -- common/autotest_common.sh@10 -- # set +x 00:28:05.944 18:14:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:05.944 18:14:54 -- host/fio.sh@36 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:28:05.944 18:14:54 -- host/fio.sh@39 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:05.944 18:14:54 -- common/autotest_common.sh@1346 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:05.944 18:14:54 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:28:05.944 18:14:54 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:05.944 18:14:54 -- common/autotest_common.sh@1325 -- # local sanitizers 00:28:05.944 18:14:54 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:05.944 18:14:54 -- common/autotest_common.sh@1327 -- # shift 00:28:05.944 18:14:54 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:28:05.944 18:14:54 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:28:05.944 18:14:54 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:05.944 18:14:54 -- common/autotest_common.sh@1331 -- # grep libasan 00:28:05.944 18:14:54 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:28:05.944 18:14:54 -- common/autotest_common.sh@1331 -- # asan_lib= 00:28:05.944 18:14:54 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:28:05.944 18:14:54 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:28:05.944 18:14:54 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:05.944 18:14:54 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:28:05.944 18:14:54 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:28:05.944 18:14:54 -- common/autotest_common.sh@1331 -- # asan_lib= 00:28:05.944 18:14:54 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:28:05.944 18:14:54 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:28:05.944 18:14:54 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:06.203 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:28:06.203 fio-3.35 00:28:06.203 Starting 1 thread 00:28:06.203 EAL: No free 2048 kB hugepages reported on node 1 00:28:08.738 00:28:08.738 test: (groupid=0, jobs=1): err= 0: pid=3419560: Mon Apr 15 18:14:57 2024 00:28:08.738 read: IOPS=8935, BW=34.9MiB/s (36.6MB/s)(70.0MiB/2006msec) 00:28:08.738 slat (usec): min=2, max=121, avg= 2.92, stdev= 1.37 00:28:08.738 clat (usec): min=2100, max=13194, avg=7913.54, stdev=578.27 00:28:08.738 lat (usec): min=2122, max=13196, avg=7916.46, stdev=578.17 00:28:08.738 clat percentiles (usec): 00:28:08.738 | 1.00th=[ 6521], 5.00th=[ 7046], 10.00th=[ 7242], 20.00th=[ 7504], 00:28:08.738 | 30.00th=[ 7635], 40.00th=[ 7767], 50.00th=[ 7898], 60.00th=[ 8029], 00:28:08.738 | 70.00th=[ 8225], 80.00th=[ 8356], 90.00th=[ 8586], 95.00th=[ 8717], 00:28:08.738 | 99.00th=[ 9110], 99.50th=[ 9241], 99.90th=[11207], 99.95th=[12125], 00:28:08.738 | 99.99th=[13173] 00:28:08.738 bw ( KiB/s): min=34864, max=36344, per=99.90%, avg=35704.00, stdev=615.22, samples=4 00:28:08.738 iops : min= 8716, max= 9086, avg=8926.00, stdev=153.81, samples=4 00:28:08.738 write: IOPS=8948, BW=35.0MiB/s (36.7MB/s)(70.1MiB/2006msec); 0 zone resets 00:28:08.738 slat (usec): min=2, max=112, avg= 3.09, stdev= 1.06 00:28:08.738 clat (usec): min=1489, 
max=11853, avg=6367.10, stdev=503.60 00:28:08.738 lat (usec): min=1497, max=11856, avg=6370.18, stdev=503.56 00:28:08.738 clat percentiles (usec): 00:28:08.738 | 1.00th=[ 5211], 5.00th=[ 5604], 10.00th=[ 5800], 20.00th=[ 5997], 00:28:08.738 | 30.00th=[ 6128], 40.00th=[ 6259], 50.00th=[ 6390], 60.00th=[ 6521], 00:28:08.738 | 70.00th=[ 6587], 80.00th=[ 6718], 90.00th=[ 6915], 95.00th=[ 7111], 00:28:08.738 | 99.00th=[ 7439], 99.50th=[ 7570], 99.90th=[ 9241], 99.95th=[10814], 00:28:08.738 | 99.99th=[11863] 00:28:08.738 bw ( KiB/s): min=35584, max=35904, per=99.99%, avg=35792.00, stdev=151.23, samples=4 00:28:08.738 iops : min= 8896, max= 8976, avg=8948.00, stdev=37.81, samples=4 00:28:08.738 lat (msec) : 2=0.02%, 4=0.11%, 10=99.78%, 20=0.09% 00:28:08.738 cpu : usr=65.69%, sys=30.17%, ctx=43, majf=0, minf=5 00:28:08.738 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:28:08.738 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:08.738 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:08.738 issued rwts: total=17924,17951,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:08.738 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:08.738 00:28:08.738 Run status group 0 (all jobs): 00:28:08.738 READ: bw=34.9MiB/s (36.6MB/s), 34.9MiB/s-34.9MiB/s (36.6MB/s-36.6MB/s), io=70.0MiB (73.4MB), run=2006-2006msec 00:28:08.738 WRITE: bw=35.0MiB/s (36.7MB/s), 35.0MiB/s-35.0MiB/s (36.7MB/s-36.7MB/s), io=70.1MiB (73.5MB), run=2006-2006msec 00:28:08.738 18:14:57 -- host/fio.sh@43 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:28:08.738 18:14:57 -- common/autotest_common.sh@1346 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:28:08.738 18:14:57 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:28:08.738 18:14:57 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:08.738 18:14:57 -- common/autotest_common.sh@1325 -- # local sanitizers 00:28:08.738 18:14:57 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:08.738 18:14:57 -- common/autotest_common.sh@1327 -- # shift 00:28:08.738 18:14:57 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:28:08.738 18:14:57 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:28:08.738 18:14:57 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:08.738 18:14:57 -- common/autotest_common.sh@1331 -- # grep libasan 00:28:08.738 18:14:57 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:28:08.738 18:14:57 -- common/autotest_common.sh@1331 -- # asan_lib= 00:28:08.738 18:14:57 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:28:08.738 18:14:57 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:28:08.738 18:14:57 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:08.738 18:14:57 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:28:08.738 18:14:57 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:28:08.739 18:14:57 -- 
common/autotest_common.sh@1331 -- # asan_lib= 00:28:08.739 18:14:57 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:28:08.739 18:14:57 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:28:08.739 18:14:57 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:28:08.739 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:28:08.739 fio-3.35 00:28:08.739 Starting 1 thread 00:28:09.040 EAL: No free 2048 kB hugepages reported on node 1 00:28:11.580 00:28:11.580 test: (groupid=0, jobs=1): err= 0: pid=3419979: Mon Apr 15 18:14:59 2024 00:28:11.580 read: IOPS=7183, BW=112MiB/s (118MB/s)(225MiB/2007msec) 00:28:11.580 slat (usec): min=3, max=135, avg= 5.37, stdev= 2.48 00:28:11.580 clat (usec): min=2681, max=19220, avg=10739.51, stdev=2420.55 00:28:11.580 lat (usec): min=2687, max=19226, avg=10744.88, stdev=2420.67 00:28:11.580 clat percentiles (usec): 00:28:11.580 | 1.00th=[ 5669], 5.00th=[ 6980], 10.00th=[ 7570], 20.00th=[ 8455], 00:28:11.580 | 30.00th=[ 9241], 40.00th=[10028], 50.00th=[10814], 60.00th=[11469], 00:28:11.580 | 70.00th=[12256], 80.00th=[13042], 90.00th=[13566], 95.00th=[14353], 00:28:11.580 | 99.00th=[16581], 99.50th=[17433], 99.90th=[18744], 99.95th=[18744], 00:28:11.580 | 99.99th=[19268] 00:28:11.580 bw ( KiB/s): min=50240, max=67800, per=50.19%, avg=57694.00, stdev=7453.00, samples=4 00:28:11.580 iops : min= 3140, max= 4237, avg=3605.75, stdev=465.59, samples=4 00:28:11.580 write: IOPS=4252, BW=66.4MiB/s (69.7MB/s)(118MiB/1777msec); 0 zone resets 00:28:11.580 slat (usec): min=39, max=193, avg=47.24, stdev= 7.04 00:28:11.580 clat (usec): min=4767, max=20859, avg=12549.62, stdev=2020.58 00:28:11.580 lat (usec): min=4807, max=20909, avg=12596.86, stdev=2020.37 00:28:11.580 clat percentiles (usec): 00:28:11.580 | 1.00th=[ 8291], 5.00th=[ 9503], 10.00th=[10159], 20.00th=[10814], 00:28:11.580 | 30.00th=[11338], 40.00th=[11863], 50.00th=[12387], 60.00th=[13042], 00:28:11.580 | 70.00th=[13566], 80.00th=[14353], 90.00th=[15139], 95.00th=[15926], 00:28:11.580 | 99.00th=[17433], 99.50th=[18482], 99.90th=[19268], 99.95th=[19530], 00:28:11.580 | 99.99th=[20841] 00:28:11.580 bw ( KiB/s): min=52352, max=71249, per=88.17%, avg=59996.25, stdev=8250.05, samples=4 00:28:11.580 iops : min= 3272, max= 4453, avg=3749.75, stdev=515.60, samples=4 00:28:11.580 lat (msec) : 4=0.06%, 10=29.34%, 20=70.60%, 50=0.01% 00:28:11.580 cpu : usr=82.85%, sys=15.95%, ctx=14, majf=0, minf=23 00:28:11.580 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:28:11.580 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:11.580 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:11.580 issued rwts: total=14418,7557,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:11.580 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:11.580 00:28:11.580 Run status group 0 (all jobs): 00:28:11.580 READ: bw=112MiB/s (118MB/s), 112MiB/s-112MiB/s (118MB/s-118MB/s), io=225MiB (236MB), run=2007-2007msec 00:28:11.580 WRITE: bw=66.4MiB/s (69.7MB/s), 66.4MiB/s-66.4MiB/s (69.7MB/s-69.7MB/s), io=118MiB (124MB), run=1777-1777msec 00:28:11.580 18:14:59 -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:11.580 18:14:59 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:28:11.580 18:14:59 -- common/autotest_common.sh@10 -- # set +x 00:28:11.580 18:14:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:11.580 18:14:59 -- host/fio.sh@47 -- # '[' 1 -eq 1 ']' 00:28:11.580 18:14:59 -- host/fio.sh@49 -- # bdfs=($(get_nvme_bdfs)) 00:28:11.580 18:14:59 -- host/fio.sh@49 -- # get_nvme_bdfs 00:28:11.580 18:14:59 -- common/autotest_common.sh@1499 -- # bdfs=() 00:28:11.580 18:14:59 -- common/autotest_common.sh@1499 -- # local bdfs 00:28:11.580 18:14:59 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:28:11.580 18:14:59 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:28:11.580 18:14:59 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:28:11.580 18:15:00 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:28:11.580 18:15:00 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:82:00.0 00:28:11.580 18:15:00 -- host/fio.sh@50 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:82:00.0 -i 10.0.0.2 00:28:11.580 18:15:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:11.580 18:15:00 -- common/autotest_common.sh@10 -- # set +x 00:28:14.113 Nvme0n1 00:28:14.113 18:15:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:14.113 18:15:02 -- host/fio.sh@51 -- # rpc_cmd bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:28:14.113 18:15:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:14.113 18:15:02 -- common/autotest_common.sh@10 -- # set +x 00:28:16.646 18:15:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:16.646 18:15:05 -- host/fio.sh@51 -- # ls_guid=6658b04d-2dd3-4012-bc8c-50f338ecaf62 00:28:16.646 18:15:05 -- host/fio.sh@52 -- # get_lvs_free_mb 6658b04d-2dd3-4012-bc8c-50f338ecaf62 00:28:16.646 18:15:05 -- common/autotest_common.sh@1350 -- # local lvs_uuid=6658b04d-2dd3-4012-bc8c-50f338ecaf62 00:28:16.646 18:15:05 -- common/autotest_common.sh@1351 -- # local lvs_info 00:28:16.646 18:15:05 -- common/autotest_common.sh@1352 -- # local fc 00:28:16.646 18:15:05 -- common/autotest_common.sh@1353 -- # local cs 00:28:16.646 18:15:05 -- common/autotest_common.sh@1354 -- # rpc_cmd bdev_lvol_get_lvstores 00:28:16.646 18:15:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:16.646 18:15:05 -- common/autotest_common.sh@10 -- # set +x 00:28:16.646 18:15:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:16.646 18:15:05 -- common/autotest_common.sh@1354 -- # lvs_info='[ 00:28:16.646 { 00:28:16.647 "uuid": "6658b04d-2dd3-4012-bc8c-50f338ecaf62", 00:28:16.647 "name": "lvs_0", 00:28:16.647 "base_bdev": "Nvme0n1", 00:28:16.647 "total_data_clusters": 930, 00:28:16.647 "free_clusters": 930, 00:28:16.647 "block_size": 512, 00:28:16.647 "cluster_size": 1073741824 00:28:16.647 } 00:28:16.647 ]' 00:28:16.647 18:15:05 -- common/autotest_common.sh@1355 -- # jq '.[] | select(.uuid=="6658b04d-2dd3-4012-bc8c-50f338ecaf62") .free_clusters' 00:28:16.906 18:15:05 -- common/autotest_common.sh@1355 -- # fc=930 00:28:16.906 18:15:05 -- common/autotest_common.sh@1356 -- # jq '.[] | select(.uuid=="6658b04d-2dd3-4012-bc8c-50f338ecaf62") .cluster_size' 00:28:16.906 18:15:05 -- common/autotest_common.sh@1356 -- # cs=1073741824 00:28:16.906 18:15:05 -- common/autotest_common.sh@1359 -- # free_mb=952320 00:28:16.906 18:15:05 -- common/autotest_common.sh@1360 -- # echo 952320 00:28:16.906 952320 00:28:16.906 18:15:05 
-- host/fio.sh@53 -- # rpc_cmd bdev_lvol_create -l lvs_0 lbd_0 952320 00:28:16.906 18:15:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:16.906 18:15:05 -- common/autotest_common.sh@10 -- # set +x 00:28:16.906 eb081abc-089c-49da-a326-7ab740d2f3c4 00:28:16.906 18:15:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:16.906 18:15:05 -- host/fio.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:28:16.906 18:15:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:16.906 18:15:05 -- common/autotest_common.sh@10 -- # set +x 00:28:16.906 18:15:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:16.906 18:15:05 -- host/fio.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:28:16.906 18:15:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:16.906 18:15:05 -- common/autotest_common.sh@10 -- # set +x 00:28:16.906 18:15:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:16.906 18:15:05 -- host/fio.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:28:16.906 18:15:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:16.906 18:15:05 -- common/autotest_common.sh@10 -- # set +x 00:28:16.906 18:15:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:16.906 18:15:05 -- host/fio.sh@57 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:16.906 18:15:05 -- common/autotest_common.sh@1346 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:16.906 18:15:05 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:28:16.906 18:15:05 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:16.906 18:15:05 -- common/autotest_common.sh@1325 -- # local sanitizers 00:28:16.906 18:15:05 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:16.906 18:15:05 -- common/autotest_common.sh@1327 -- # shift 00:28:16.906 18:15:05 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:28:16.906 18:15:05 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:28:17.164 18:15:05 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:17.164 18:15:05 -- common/autotest_common.sh@1331 -- # grep libasan 00:28:17.164 18:15:05 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:28:17.164 18:15:05 -- common/autotest_common.sh@1331 -- # asan_lib= 00:28:17.164 18:15:05 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:28:17.164 18:15:05 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:28:17.164 18:15:05 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:17.164 18:15:05 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:28:17.164 18:15:05 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:28:17.164 18:15:05 -- common/autotest_common.sh@1331 -- # asan_lib= 00:28:17.164 18:15:05 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:28:17.164 18:15:05 -- common/autotest_common.sh@1338 -- # 
LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:28:17.164 18:15:05 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:17.421 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:28:17.421 fio-3.35 00:28:17.421 Starting 1 thread 00:28:17.421 EAL: No free 2048 kB hugepages reported on node 1 00:28:19.953 00:28:19.953 test: (groupid=0, jobs=1): err= 0: pid=3421103: Mon Apr 15 18:15:08 2024 00:28:19.953 read: IOPS=6110, BW=23.9MiB/s (25.0MB/s)(47.9MiB/2007msec) 00:28:19.953 slat (usec): min=2, max=144, avg= 3.93, stdev= 2.44 00:28:19.953 clat (usec): min=899, max=171446, avg=11540.92, stdev=11597.33 00:28:19.953 lat (usec): min=903, max=171483, avg=11544.84, stdev=11597.58 00:28:19.953 clat percentiles (msec): 00:28:19.953 | 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 10], 20.00th=[ 11], 00:28:19.953 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 11], 60.00th=[ 11], 00:28:19.953 | 70.00th=[ 12], 80.00th=[ 12], 90.00th=[ 12], 95.00th=[ 13], 00:28:19.953 | 99.00th=[ 13], 99.50th=[ 159], 99.90th=[ 171], 99.95th=[ 171], 00:28:19.953 | 99.99th=[ 171] 00:28:19.953 bw ( KiB/s): min=17336, max=27064, per=99.74%, avg=24378.00, stdev=4700.83, samples=4 00:28:19.953 iops : min= 4334, max= 6766, avg=6094.50, stdev=1175.21, samples=4 00:28:19.953 write: IOPS=6087, BW=23.8MiB/s (24.9MB/s)(47.7MiB/2007msec); 0 zone resets 00:28:19.953 slat (usec): min=2, max=145, avg= 4.07, stdev= 2.16 00:28:19.953 clat (usec): min=385, max=168935, avg=9323.44, stdev=10852.37 00:28:19.953 lat (usec): min=389, max=168944, avg=9327.51, stdev=10852.66 00:28:19.953 clat percentiles (msec): 00:28:19.953 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 8], 20.00th=[ 8], 00:28:19.953 | 30.00th=[ 9], 40.00th=[ 9], 50.00th=[ 9], 60.00th=[ 9], 00:28:19.953 | 70.00th=[ 9], 80.00th=[ 10], 90.00th=[ 10], 95.00th=[ 10], 00:28:19.953 | 99.00th=[ 11], 99.50th=[ 14], 99.90th=[ 169], 99.95th=[ 169], 00:28:19.953 | 99.99th=[ 169] 00:28:19.953 bw ( KiB/s): min=18344, max=26432, per=99.91%, avg=24330.00, stdev=3992.15, samples=4 00:28:19.953 iops : min= 4586, max= 6608, avg=6082.50, stdev=998.04, samples=4 00:28:19.953 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:28:19.953 lat (msec) : 2=0.03%, 4=0.11%, 10=58.38%, 20=40.95%, 250=0.52% 00:28:19.953 cpu : usr=64.56%, sys=31.36%, ctx=59, majf=0, minf=5 00:28:19.953 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:28:19.953 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:19.953 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:19.953 issued rwts: total=12263,12218,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:19.953 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:19.953 00:28:19.953 Run status group 0 (all jobs): 00:28:19.953 READ: bw=23.9MiB/s (25.0MB/s), 23.9MiB/s-23.9MiB/s (25.0MB/s-25.0MB/s), io=47.9MiB (50.2MB), run=2007-2007msec 00:28:19.953 WRITE: bw=23.8MiB/s (24.9MB/s), 23.8MiB/s-23.8MiB/s (24.9MB/s-24.9MB/s), io=47.7MiB (50.0MB), run=2007-2007msec 00:28:19.953 18:15:08 -- host/fio.sh@59 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:19.953 18:15:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:19.953 18:15:08 -- common/autotest_common.sh@10 -- # set +x 00:28:19.953 18:15:08 -- common/autotest_common.sh@577 -- # [[ 0 
== 0 ]] 00:28:19.953 18:15:08 -- host/fio.sh@62 -- # rpc_cmd bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:28:19.953 18:15:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:19.953 18:15:08 -- common/autotest_common.sh@10 -- # set +x 00:28:20.521 18:15:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:20.521 18:15:09 -- host/fio.sh@62 -- # ls_nested_guid=6af5deae-13c8-4e1b-808b-782d59de589b 00:28:20.521 18:15:09 -- host/fio.sh@63 -- # get_lvs_free_mb 6af5deae-13c8-4e1b-808b-782d59de589b 00:28:20.521 18:15:09 -- common/autotest_common.sh@1350 -- # local lvs_uuid=6af5deae-13c8-4e1b-808b-782d59de589b 00:28:20.521 18:15:09 -- common/autotest_common.sh@1351 -- # local lvs_info 00:28:20.521 18:15:09 -- common/autotest_common.sh@1352 -- # local fc 00:28:20.521 18:15:09 -- common/autotest_common.sh@1353 -- # local cs 00:28:20.521 18:15:09 -- common/autotest_common.sh@1354 -- # rpc_cmd bdev_lvol_get_lvstores 00:28:20.521 18:15:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:20.521 18:15:09 -- common/autotest_common.sh@10 -- # set +x 00:28:20.521 18:15:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:20.521 18:15:09 -- common/autotest_common.sh@1354 -- # lvs_info='[ 00:28:20.521 { 00:28:20.521 "uuid": "6658b04d-2dd3-4012-bc8c-50f338ecaf62", 00:28:20.521 "name": "lvs_0", 00:28:20.521 "base_bdev": "Nvme0n1", 00:28:20.521 "total_data_clusters": 930, 00:28:20.521 "free_clusters": 0, 00:28:20.521 "block_size": 512, 00:28:20.521 "cluster_size": 1073741824 00:28:20.521 }, 00:28:20.521 { 00:28:20.521 "uuid": "6af5deae-13c8-4e1b-808b-782d59de589b", 00:28:20.521 "name": "lvs_n_0", 00:28:20.521 "base_bdev": "eb081abc-089c-49da-a326-7ab740d2f3c4", 00:28:20.521 "total_data_clusters": 237847, 00:28:20.521 "free_clusters": 237847, 00:28:20.521 "block_size": 512, 00:28:20.521 "cluster_size": 4194304 00:28:20.521 } 00:28:20.521 ]' 00:28:20.521 18:15:09 -- common/autotest_common.sh@1355 -- # jq '.[] | select(.uuid=="6af5deae-13c8-4e1b-808b-782d59de589b") .free_clusters' 00:28:20.521 18:15:09 -- common/autotest_common.sh@1355 -- # fc=237847 00:28:20.521 18:15:09 -- common/autotest_common.sh@1356 -- # jq '.[] | select(.uuid=="6af5deae-13c8-4e1b-808b-782d59de589b") .cluster_size' 00:28:20.521 18:15:09 -- common/autotest_common.sh@1356 -- # cs=4194304 00:28:20.521 18:15:09 -- common/autotest_common.sh@1359 -- # free_mb=951388 00:28:20.521 18:15:09 -- common/autotest_common.sh@1360 -- # echo 951388 00:28:20.521 951388 00:28:20.521 18:15:09 -- host/fio.sh@64 -- # rpc_cmd bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:28:20.521 18:15:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:20.521 18:15:09 -- common/autotest_common.sh@10 -- # set +x 00:28:21.088 5fcce4ab-f749-4174-b915-b0a11bdecf7d 00:28:21.088 18:15:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:21.088 18:15:09 -- host/fio.sh@65 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:28:21.088 18:15:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:21.088 18:15:09 -- common/autotest_common.sh@10 -- # set +x 00:28:21.088 18:15:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:21.088 18:15:09 -- host/fio.sh@66 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:28:21.088 18:15:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:21.088 18:15:09 -- common/autotest_common.sh@10 -- # set +x 00:28:21.088 18:15:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:21.088 
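The cluster-to-megabyte arithmetic behind get_lvs_free_mb above is worth making explicit: free_mb = free_clusters * cluster_size / 1 MiB. For lvs_0, 930 free clusters of 1 GiB give 930 * 1024 = 952320 MiB; for the nested lvs_n_0, 237847 free clusters of 4 MiB give 237847 * 4 = 951388 MiB — the sizes passed to bdev_lvol_create in each case. A hedged sketch of the computation, assuming rpc_cmd resolves to scripts/rpc.py as it does elsewhere in this suite:

#!/usr/bin/env bash
# Sketch of the free-space computation logged above (uuid copied from this log).
set -e
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
uuid=6af5deae-13c8-4e1b-808b-782d59de589b                 # lvs_n_0
lvs_json=$($rpc bdev_lvol_get_lvstores)
fc=$(jq ".[] | select(.uuid==\"$uuid\") .free_clusters" <<<"$lvs_json")   # 237847
cs=$(jq ".[] | select(.uuid==\"$uuid\") .cluster_size"  <<<"$lvs_json")   # 4194304
echo $(( fc * cs / 1048576 ))                             # -> 951388 (MiB)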
18:15:09 -- host/fio.sh@67 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:28:21.088 18:15:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:21.088 18:15:09 -- common/autotest_common.sh@10 -- # set +x 00:28:21.088 18:15:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:21.088 18:15:09 -- host/fio.sh@68 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:21.088 18:15:09 -- common/autotest_common.sh@1346 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:21.088 18:15:09 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:28:21.088 18:15:09 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:21.088 18:15:09 -- common/autotest_common.sh@1325 -- # local sanitizers 00:28:21.088 18:15:09 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:21.088 18:15:09 -- common/autotest_common.sh@1327 -- # shift 00:28:21.088 18:15:09 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:28:21.088 18:15:09 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:28:21.088 18:15:09 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:21.088 18:15:09 -- common/autotest_common.sh@1331 -- # grep libasan 00:28:21.088 18:15:09 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:28:21.088 18:15:09 -- common/autotest_common.sh@1331 -- # asan_lib= 00:28:21.088 18:15:09 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:28:21.088 18:15:09 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:28:21.088 18:15:09 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:21.088 18:15:09 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:28:21.088 18:15:09 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:28:21.088 18:15:09 -- common/autotest_common.sh@1331 -- # asan_lib= 00:28:21.088 18:15:09 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:28:21.088 18:15:09 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:28:21.088 18:15:09 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:21.345 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:28:21.345 fio-3.35 00:28:21.345 Starting 1 thread 00:28:21.345 EAL: No free 2048 kB hugepages reported on node 1 00:28:23.882 00:28:23.882 test: (groupid=0, jobs=1): err= 0: pid=3422195: Mon Apr 15 18:15:12 2024 00:28:23.882 read: IOPS=5743, BW=22.4MiB/s (23.5MB/s)(45.1MiB/2009msec) 00:28:23.882 slat (usec): min=2, max=163, avg= 4.13, stdev= 3.09 00:28:23.882 clat (usec): min=4565, max=20048, avg=12280.58, stdev=993.79 00:28:23.882 lat (usec): min=4570, max=20051, avg=12284.71, stdev=993.67 00:28:23.882 clat percentiles (usec): 00:28:23.882 | 1.00th=[10028], 
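The fio_plugin wrapper traced above reduces to preloading the SPDK external ioengine and encoding the NVMe/TCP connection in --filename; the ldd/grep loop it runs first only checks whether an ASAN runtime must also be preloaded (empty in this build). A hedged sketch with the paths and connection string from this log:

#!/usr/bin/env bash
# Run fio against the exported subsystem through the SPDK nvme fio plugin.
PLUGIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
CONFIG=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio
LD_PRELOAD="$PLUGIN" /usr/src/fio/fio "$CONFIG" \
    '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096
# The job file selects ioengine=spdk; the space-separated key=value pairs in
# --filename pick the transport, address family, address, service id, and namespace.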
5.00th=[10814], 10.00th=[11076], 20.00th=[11469], 00:28:23.882 | 30.00th=[11731], 40.00th=[11994], 50.00th=[12256], 60.00th=[12518], 00:28:23.882 | 70.00th=[12780], 80.00th=[13042], 90.00th=[13435], 95.00th=[13829], 00:28:23.882 | 99.00th=[14484], 99.50th=[14746], 99.90th=[17695], 99.95th=[18744], 00:28:23.882 | 99.99th=[20055] 00:28:23.882 bw ( KiB/s): min=21592, max=23536, per=99.91%, avg=22954.00, stdev=913.34, samples=4 00:28:23.882 iops : min= 5398, max= 5884, avg=5738.50, stdev=228.34, samples=4 00:28:23.882 write: IOPS=5733, BW=22.4MiB/s (23.5MB/s)(45.0MiB/2009msec); 0 zone resets 00:28:23.882 slat (usec): min=2, max=126, avg= 4.20, stdev= 2.58 00:28:23.882 clat (usec): min=2251, max=19092, avg=9893.25, stdev=936.93 00:28:23.882 lat (usec): min=2258, max=19095, avg=9897.45, stdev=936.97 00:28:23.882 clat percentiles (usec): 00:28:23.882 | 1.00th=[ 7832], 5.00th=[ 8586], 10.00th=[ 8848], 20.00th=[ 9241], 00:28:23.882 | 30.00th=[ 9503], 40.00th=[ 9634], 50.00th=[ 9896], 60.00th=[10159], 00:28:23.882 | 70.00th=[10290], 80.00th=[10552], 90.00th=[10945], 95.00th=[11207], 00:28:23.882 | 99.00th=[11863], 99.50th=[12518], 99.90th=[17433], 99.95th=[18744], 00:28:23.882 | 99.99th=[19006] 00:28:23.882 bw ( KiB/s): min=22680, max=23208, per=99.88%, avg=22908.00, stdev=269.76, samples=4 00:28:23.882 iops : min= 5670, max= 5802, avg=5727.00, stdev=67.44, samples=4 00:28:23.882 lat (msec) : 4=0.05%, 10=28.08%, 20=71.87%, 50=0.01% 00:28:23.882 cpu : usr=60.56%, sys=35.81%, ctx=63, majf=0, minf=5 00:28:23.882 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:28:23.882 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.882 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:23.882 issued rwts: total=11539,11519,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:23.882 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:23.882 00:28:23.882 Run status group 0 (all jobs): 00:28:23.882 READ: bw=22.4MiB/s (23.5MB/s), 22.4MiB/s-22.4MiB/s (23.5MB/s-23.5MB/s), io=45.1MiB (47.3MB), run=2009-2009msec 00:28:23.882 WRITE: bw=22.4MiB/s (23.5MB/s), 22.4MiB/s-22.4MiB/s (23.5MB/s-23.5MB/s), io=45.0MiB (47.2MB), run=2009-2009msec 00:28:23.882 18:15:12 -- host/fio.sh@70 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:28:23.882 18:15:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:23.882 18:15:12 -- common/autotest_common.sh@10 -- # set +x 00:28:23.882 18:15:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:23.882 18:15:12 -- host/fio.sh@72 -- # sync 00:28:23.882 18:15:12 -- host/fio.sh@74 -- # rpc_cmd bdev_lvol_delete lvs_n_0/lbd_nest_0 00:28:23.882 18:15:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:23.882 18:15:12 -- common/autotest_common.sh@10 -- # set +x 00:28:27.174 18:15:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:27.174 18:15:16 -- host/fio.sh@75 -- # rpc_cmd bdev_lvol_delete_lvstore -l lvs_n_0 00:28:27.174 18:15:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:27.174 18:15:16 -- common/autotest_common.sh@10 -- # set +x 00:28:27.174 18:15:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:27.174 18:15:16 -- host/fio.sh@76 -- # rpc_cmd bdev_lvol_delete lvs_0/lbd_0 00:28:27.174 18:15:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:27.174 18:15:16 -- common/autotest_common.sh@10 -- # set +x 00:28:30.466 18:15:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:30.466 18:15:18 -- host/fio.sh@77 -- # rpc_cmd 
bdev_lvol_delete_lvstore -l lvs_0 00:28:30.466 18:15:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:30.466 18:15:18 -- common/autotest_common.sh@10 -- # set +x 00:28:30.466 18:15:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:30.466 18:15:18 -- host/fio.sh@78 -- # rpc_cmd bdev_nvme_detach_controller Nvme0 00:28:30.466 18:15:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:30.466 18:15:18 -- common/autotest_common.sh@10 -- # set +x 00:28:31.876 18:15:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:31.876 18:15:20 -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:28:31.876 18:15:20 -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:28:31.876 18:15:20 -- host/fio.sh@84 -- # nvmftestfini 00:28:31.876 18:15:20 -- nvmf/common.sh@477 -- # nvmfcleanup 00:28:31.876 18:15:20 -- nvmf/common.sh@117 -- # sync 00:28:31.876 18:15:20 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:31.876 18:15:20 -- nvmf/common.sh@120 -- # set +e 00:28:31.876 18:15:20 -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:31.876 18:15:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:31.876 rmmod nvme_tcp 00:28:31.876 rmmod nvme_fabrics 00:28:31.876 rmmod nvme_keyring 00:28:31.876 18:15:20 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:31.876 18:15:20 -- nvmf/common.sh@124 -- # set -e 00:28:31.876 18:15:20 -- nvmf/common.sh@125 -- # return 0 00:28:31.876 18:15:20 -- nvmf/common.sh@478 -- # '[' -n 3419434 ']' 00:28:31.876 18:15:20 -- nvmf/common.sh@479 -- # killprocess 3419434 00:28:31.876 18:15:20 -- common/autotest_common.sh@936 -- # '[' -z 3419434 ']' 00:28:31.876 18:15:20 -- common/autotest_common.sh@940 -- # kill -0 3419434 00:28:31.876 18:15:20 -- common/autotest_common.sh@941 -- # uname 00:28:31.876 18:15:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:31.876 18:15:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3419434 00:28:31.876 18:15:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:28:31.876 18:15:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:28:31.876 18:15:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3419434' 00:28:31.876 killing process with pid 3419434 00:28:31.876 18:15:20 -- common/autotest_common.sh@955 -- # kill 3419434 00:28:31.876 18:15:20 -- common/autotest_common.sh@960 -- # wait 3419434 00:28:31.876 18:15:20 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:28:31.876 18:15:20 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:28:31.877 18:15:20 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:28:31.877 18:15:20 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:31.877 18:15:20 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:31.877 18:15:20 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:31.877 18:15:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:31.877 18:15:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:34.409 18:15:22 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:34.409 00:28:34.409 real 0m30.880s 00:28:34.409 user 1m51.544s 00:28:34.409 sys 0m5.926s 00:28:34.409 18:15:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:34.409 18:15:22 -- common/autotest_common.sh@10 -- # set +x 00:28:34.409 ************************************ 00:28:34.409 END TEST nvmf_fio_host 00:28:34.409 ************************************ 00:28:34.409 18:15:22 -- nvmf/nvmf.sh@98 -- # run_test nvmf_failover 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:28:34.409 18:15:22 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:28:34.409 18:15:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:34.409 18:15:22 -- common/autotest_common.sh@10 -- # set +x 00:28:34.409 ************************************ 00:28:34.409 START TEST nvmf_failover 00:28:34.409 ************************************ 00:28:34.409 18:15:22 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:28:34.409 * Looking for test storage... 00:28:34.409 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:34.409 18:15:23 -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:34.409 18:15:23 -- nvmf/common.sh@7 -- # uname -s 00:28:34.409 18:15:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:34.409 18:15:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:34.409 18:15:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:34.409 18:15:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:34.409 18:15:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:34.409 18:15:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:34.409 18:15:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:34.409 18:15:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:34.409 18:15:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:34.409 18:15:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:34.409 18:15:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:28:34.409 18:15:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:28:34.409 18:15:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:34.409 18:15:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:34.409 18:15:23 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:34.409 18:15:23 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:34.409 18:15:23 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:34.409 18:15:23 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:34.409 18:15:23 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:34.409 18:15:23 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:34.410 18:15:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:34.410 18:15:23 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:34.410 18:15:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:34.410 18:15:23 -- paths/export.sh@5 -- # export PATH 00:28:34.410 18:15:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:34.410 18:15:23 -- nvmf/common.sh@47 -- # : 0 00:28:34.410 18:15:23 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:34.410 18:15:23 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:34.410 18:15:23 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:34.410 18:15:23 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:34.410 18:15:23 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:34.410 18:15:23 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:34.410 18:15:23 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:34.410 18:15:23 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:34.410 18:15:23 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:34.410 18:15:23 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:34.410 18:15:23 -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:34.410 18:15:23 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:34.410 18:15:23 -- host/failover.sh@18 -- # nvmftestinit 00:28:34.410 18:15:23 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:28:34.410 18:15:23 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:34.410 18:15:23 -- nvmf/common.sh@437 -- # prepare_net_devs 00:28:34.410 18:15:23 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:28:34.410 18:15:23 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:28:34.410 18:15:23 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:34.410 18:15:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:34.410 18:15:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:34.410 18:15:23 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:28:34.410 18:15:23 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 
00:28:34.410 18:15:23 -- nvmf/common.sh@285 -- # xtrace_disable 00:28:34.410 18:15:23 -- common/autotest_common.sh@10 -- # set +x 00:28:36.318 18:15:25 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:36.318 18:15:25 -- nvmf/common.sh@291 -- # pci_devs=() 00:28:36.318 18:15:25 -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:36.318 18:15:25 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:36.318 18:15:25 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:36.318 18:15:25 -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:36.318 18:15:25 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:36.318 18:15:25 -- nvmf/common.sh@295 -- # net_devs=() 00:28:36.318 18:15:25 -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:36.318 18:15:25 -- nvmf/common.sh@296 -- # e810=() 00:28:36.318 18:15:25 -- nvmf/common.sh@296 -- # local -ga e810 00:28:36.318 18:15:25 -- nvmf/common.sh@297 -- # x722=() 00:28:36.318 18:15:25 -- nvmf/common.sh@297 -- # local -ga x722 00:28:36.318 18:15:25 -- nvmf/common.sh@298 -- # mlx=() 00:28:36.318 18:15:25 -- nvmf/common.sh@298 -- # local -ga mlx 00:28:36.318 18:15:25 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:36.318 18:15:25 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:36.318 18:15:25 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:36.318 18:15:25 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:36.318 18:15:25 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:36.318 18:15:25 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:36.318 18:15:25 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:36.318 18:15:25 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:36.318 18:15:25 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:36.318 18:15:25 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:36.318 18:15:25 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:36.318 18:15:25 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:36.318 18:15:25 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:36.318 18:15:25 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:36.318 18:15:25 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:36.318 18:15:25 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:36.318 18:15:25 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:36.318 18:15:25 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:36.318 18:15:25 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:28:36.318 Found 0000:84:00.0 (0x8086 - 0x159b) 00:28:36.318 18:15:25 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:36.318 18:15:25 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:36.318 18:15:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:36.318 18:15:25 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:36.318 18:15:25 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:36.318 18:15:25 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:36.318 18:15:25 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:28:36.318 Found 0000:84:00.1 (0x8086 - 0x159b) 00:28:36.318 18:15:25 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:36.318 18:15:25 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:36.318 18:15:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 
]] 00:28:36.318 18:15:25 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:36.318 18:15:25 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:36.318 18:15:25 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:36.318 18:15:25 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:36.318 18:15:25 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:36.318 18:15:25 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:36.318 18:15:25 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:36.318 18:15:25 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:28:36.318 18:15:25 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:36.318 18:15:25 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:28:36.318 Found net devices under 0000:84:00.0: cvl_0_0 00:28:36.318 18:15:25 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:28:36.318 18:15:25 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:36.318 18:15:25 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:36.318 18:15:25 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:28:36.318 18:15:25 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:36.318 18:15:25 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:28:36.318 Found net devices under 0000:84:00.1: cvl_0_1 00:28:36.318 18:15:25 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:28:36.318 18:15:25 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:28:36.318 18:15:25 -- nvmf/common.sh@403 -- # is_hw=yes 00:28:36.318 18:15:25 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:28:36.318 18:15:25 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:28:36.318 18:15:25 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:28:36.318 18:15:25 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:36.318 18:15:25 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:36.318 18:15:25 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:36.318 18:15:25 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:36.318 18:15:25 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:36.318 18:15:25 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:36.318 18:15:25 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:36.318 18:15:25 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:36.318 18:15:25 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:36.318 18:15:25 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:36.318 18:15:25 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:36.318 18:15:25 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:36.318 18:15:25 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:36.318 18:15:25 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:36.318 18:15:25 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:36.577 18:15:25 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:36.577 18:15:25 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:36.577 18:15:25 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:36.577 18:15:25 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:36.577 18:15:25 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:36.577 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of 
data. 00:28:36.577 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.131 ms 00:28:36.577 00:28:36.577 --- 10.0.0.2 ping statistics --- 00:28:36.577 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:36.577 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:28:36.577 18:15:25 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:36.577 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:36.577 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:28:36.577 00:28:36.577 --- 10.0.0.1 ping statistics --- 00:28:36.577 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:36.577 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:28:36.577 18:15:25 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:36.577 18:15:25 -- nvmf/common.sh@411 -- # return 0 00:28:36.577 18:15:25 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:28:36.577 18:15:25 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:36.577 18:15:25 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:28:36.577 18:15:25 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:28:36.577 18:15:25 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:36.577 18:15:25 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:28:36.577 18:15:25 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:28:36.577 18:15:25 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:28:36.577 18:15:25 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:28:36.577 18:15:25 -- common/autotest_common.sh@710 -- # xtrace_disable 00:28:36.577 18:15:25 -- common/autotest_common.sh@10 -- # set +x 00:28:36.577 18:15:25 -- nvmf/common.sh@470 -- # nvmfpid=3425325 00:28:36.577 18:15:25 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:36.577 18:15:25 -- nvmf/common.sh@471 -- # waitforlisten 3425325 00:28:36.577 18:15:25 -- common/autotest_common.sh@817 -- # '[' -z 3425325 ']' 00:28:36.577 18:15:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:36.577 18:15:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:36.577 18:15:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:36.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:36.577 18:15:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:36.577 18:15:25 -- common/autotest_common.sh@10 -- # set +x 00:28:36.577 [2024-04-15 18:15:25.417917] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:28:36.577 [2024-04-15 18:15:25.418005] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:36.577 EAL: No free 2048 kB hugepages reported on node 1 00:28:36.577 [2024-04-15 18:15:25.494640] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:36.836 [2024-04-15 18:15:25.588038] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:36.836 [2024-04-15 18:15:25.588112] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
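The interface plumbing is scattered through the xtrace output above, so here is the nvmf_tcp_init sequence condensed into one plain sketch. Every command appears verbatim in the trace; only the grouping and comments are added. The target-facing port (cvl_0_0) moves into the namespace cvl_0_0_ns_spdk, while the initiator port (cvl_0_1) stays in the root namespace:

# Condensed recap of the NVMe/TCP test topology built by nvmf_tcp_init above
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target interface into the netns
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address, root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
modprobe nvme-tcp                                    # kernel NVMe/TCP initiator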
00:28:36.836 [2024-04-15 18:15:25.588130] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:36.836 [2024-04-15 18:15:25.588144] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:36.836 [2024-04-15 18:15:25.588157] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:36.836 [2024-04-15 18:15:25.588248] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:36.836 [2024-04-15 18:15:25.588301] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:36.836 [2024-04-15 18:15:25.588304] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:36.836 18:15:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:36.836 18:15:25 -- common/autotest_common.sh@850 -- # return 0 00:28:36.836 18:15:25 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:28:36.836 18:15:25 -- common/autotest_common.sh@716 -- # xtrace_disable 00:28:36.836 18:15:25 -- common/autotest_common.sh@10 -- # set +x 00:28:36.836 18:15:25 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:36.836 18:15:25 -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:37.406 [2024-04-15 18:15:26.257274] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:37.406 18:15:26 -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:28:37.975 Malloc0 00:28:37.975 18:15:26 -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:38.234 18:15:26 -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:38.492 18:15:27 -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:38.750 [2024-04-15 18:15:27.484024] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:38.750 18:15:27 -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:39.009 [2024-04-15 18:15:27.796963] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:39.009 18:15:27 -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:28:39.267 [2024-04-15 18:15:28.130032] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:28:39.267 18:15:28 -- host/failover.sh@31 -- # bdevperf_pid=3425619 00:28:39.267 18:15:28 -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:28:39.267 18:15:28 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:39.267 18:15:28 -- host/failover.sh@34 -- # waitforlisten 3425619 
/var/tmp/bdevperf.sock 00:28:39.267 18:15:28 -- common/autotest_common.sh@817 -- # '[' -z 3425619 ']'
00:28:39.267 18:15:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:28:39.267 18:15:28 -- common/autotest_common.sh@822 -- # local max_retries=100
00:28:39.267 18:15:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:28:39.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
18:15:28 -- common/autotest_common.sh@826 -- # xtrace_disable
00:28:39.267 18:15:28 -- common/autotest_common.sh@10 -- # set +x
00:28:39.835 18:15:28 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:28:39.835 18:15:28 -- common/autotest_common.sh@850 -- # return 0
00:28:39.835 18:15:28 -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:28:40.401 NVMe0n1
00:28:40.401 18:15:29 -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:28:40.970 00
00:28:40.970 18:15:29 -- host/failover.sh@39 -- # run_test_pid=3425757
00:28:40.970 18:15:29 -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:28:40.970 18:15:29 -- host/failover.sh@41 -- # sleep 1
00:28:41.907 18:15:30 -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:28:42.166 [2024-04-15 18:15:30.916640] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7a3d0 is same with the state(5) to be set
[the identical *ERROR* line for tqpair=0xb7a3d0 repeats 15 more times, 18:15:30.916728 through 18:15:30.916918; duplicates collapsed]
00:28:42.166 18:15:30 -- host/failover.sh@45 -- # sleep 3
00:28:45.455 18:15:33 -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:28:45.713 00
00:28:45.713 18:15:34 -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:28:45.972 [2024-04-15 18:15:34.769320] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b280 is same with the state(5) to be set
[the identical *ERROR* line for tqpair=0xb7b280 repeats continuously, 18:15:34.769402 through 18:15:34.770710; roughly a hundred duplicates collapsed]
00:28:45.974 18:15:34 -- host/failover.sh@50 -- # sleep 3
00:28:49.259 18:15:37 -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:28:49.259 [2024-04-15 18:15:38.109176] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:28:49.259 18:15:38 -- host/failover.sh@55 -- # sleep 1
00:28:50.241 18:15:39 -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:28:50.500 [2024-04-15 18:15:39.412614] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd34700 is same with the state(5) to be set
[the identical *ERROR* line for tqpair=0xd34700 repeats continuously, 18:15:39.412678 through 18:15:39.414213; roughly a hundred duplicates collapsed]
00:28:50.502 18:15:39 -- host/failover.sh@59 -- # wait 3425757
00:28:57.072 0
00:28:57.072 18:15:44 -- host/failover.sh@61 -- # killprocess 3425619
00:28:57.072 18:15:44 -- common/autotest_common.sh@936 -- # '[' -z 3425619 ']'
00:28:57.072 18:15:44 -- common/autotest_common.sh@940 -- # kill -0 3425619
00:28:57.072 18:15:44 -- common/autotest_common.sh@941 -- # uname
00:28:57.072 18:15:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:28:57.072 18:15:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3425619
00:28:57.072 18:15:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:28:57.072 18:15:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:28:57.072 18:15:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3425619'
00:28:57.072 killing process with pid 3425619
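Stripped of the xtrace noise, the failover exercise that produced the bursts of recv-state *ERROR* lines above is a short RPC sequence; each burst marks the instant the active listener disappears underneath live I/O and the initiator switches paths. A condensed sketch, using $rpc_py and $bdevperf_rpc_sock as set earlier (the listener setup is written as a loop here for brevity, and the sleeps between steps are omitted; the script issues each call individually):

# Condensed recap of the failover flow driven by host/failover.sh above
NQN=nqn.2016-06.io.spdk:cnode1
$rpc_py nvmf_create_transport -t tcp -o -u 8192
$rpc_py bdev_malloc_create 64 512 -b Malloc0
$rpc_py nvmf_create_subsystem $NQN -a -s SPDK00000000000001
$rpc_py nvmf_subsystem_add_ns $NQN Malloc0
for port in 4420 4421 4422; do                       # three target listeners
    $rpc_py nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s $port
done
# bdevperf (-z -r $bdevperf_rpc_sock -q 128 -o 4096 -w verify -t 15 -f) attaches the
# controller through two paths, then listeners are pulled to force path switches:
$rpc_py -s $bdevperf_rpc_sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $NQN
$rpc_py -s $bdevperf_rpc_sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n $NQN
$rpc_py nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4420   # fail over to 4421
$rpc_py -s $bdevperf_rpc_sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n $NQN
$rpc_py nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4421   # fail over to 4422
$rpc_py nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420      # restore the primary
$rpc_py nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4422   # fail back to 4420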
18:15:44 -- common/autotest_common.sh@955 -- # kill 3425619
00:28:57.072 18:15:44 -- common/autotest_common.sh@960 -- # wait 3425619
00:28:57.072 18:15:45 -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:28:57.072 [2024-04-15 18:15:28.198939] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization...
00:28:57.072 [2024-04-15 18:15:28.199040] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3425619 ]
00:28:57.073 EAL: No free 2048 kB hugepages reported on node 1
00:28:57.073 [2024-04-15 18:15:28.272123] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:57.073 [2024-04-15 18:15:28.361522] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:28:57.073 Running I/O for 15 seconds...
00:28:57.073 [2024-04-15 18:15:30.918184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:77952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:57.073 [2024-04-15 18:15:30.918231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[matching READ / ABORTED - SQ DELETION pairs follow for lba:77960 through lba:78080, len:8 each; near-identical pairs collapsed]
00:28:57.073 [2024-04-15 18:15:30.918778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:78344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:57.073 [2024-04-15 18:15:30.918792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.073 [2024-04-15 18:15:30.918808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:78352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.073 [2024-04-15 18:15:30.918822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.073 [2024-04-15 18:15:30.918837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:78360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.073 [2024-04-15 18:15:30.918852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.073 [2024-04-15 18:15:30.918868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:78368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.073 [2024-04-15 18:15:30.918882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.073 [2024-04-15 18:15:30.918897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:78376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.073 [2024-04-15 18:15:30.918911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.073 [2024-04-15 18:15:30.918928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:78384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.073 [2024-04-15 18:15:30.918942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.073 [2024-04-15 18:15:30.918958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:78392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.073 [2024-04-15 18:15:30.918976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.073 [2024-04-15 18:15:30.918992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:78400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.073 [2024-04-15 18:15:30.919007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.073 [2024-04-15 18:15:30.919039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:78408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.073 [2024-04-15 18:15:30.919054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.073 [2024-04-15 18:15:30.919096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:78088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.073 [2024-04-15 18:15:30.919113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.073 [2024-04-15 18:15:30.919129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:78096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.073 [2024-04-15 18:15:30.919145] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.073 [2024-04-15 18:15:30.919161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:78104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.073 [2024-04-15 18:15:30.919176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.073 [2024-04-15 18:15:30.919192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:78112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.073 [2024-04-15 18:15:30.919207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.073 [2024-04-15 18:15:30.919223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:78120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.073 [2024-04-15 18:15:30.919238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.073 [2024-04-15 18:15:30.919254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:78128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.073 [2024-04-15 18:15:30.919270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.073 [2024-04-15 18:15:30.919286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:78136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.073 [2024-04-15 18:15:30.919301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.073 [2024-04-15 18:15:30.919317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:78144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.073 [2024-04-15 18:15:30.919332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.073 [2024-04-15 18:15:30.919349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:78416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.073 [2024-04-15 18:15:30.919364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.073 [2024-04-15 18:15:30.919396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:78424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.073 [2024-04-15 18:15:30.919411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.073 [2024-04-15 18:15:30.919430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:78432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.073 [2024-04-15 18:15:30.919446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.073 [2024-04-15 18:15:30.919462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:78440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.073 [2024-04-15 18:15:30.919476] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.073 [2024-04-15 18:15:30.919492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:78448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.073 [2024-04-15 18:15:30.919507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.074 [2024-04-15 18:15:30.919523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:78456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.074 [2024-04-15 18:15:30.919537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.074 [2024-04-15 18:15:30.919553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:78464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.074 [2024-04-15 18:15:30.919567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.074 [2024-04-15 18:15:30.919583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:78152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.074 [2024-04-15 18:15:30.919598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.074 [2024-04-15 18:15:30.919614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:78160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.074 [2024-04-15 18:15:30.919628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.074 [2024-04-15 18:15:30.919644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:78168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.074 [2024-04-15 18:15:30.919659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.074 [2024-04-15 18:15:30.919675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:78176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.074 [2024-04-15 18:15:30.919689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.074 [2024-04-15 18:15:30.919705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:78184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.074 [2024-04-15 18:15:30.919719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.074 [2024-04-15 18:15:30.919736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:78192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.074 [2024-04-15 18:15:30.919751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.074 [2024-04-15 18:15:30.919766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:78200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.074 [2024-04-15 18:15:30.919781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.074 [2024-04-15 18:15:30.919796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:78208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.074 [2024-04-15 18:15:30.919814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.074 [2024-04-15 18:15:30.919831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:78216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.074 [2024-04-15 18:15:30.919845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.074 [2024-04-15 18:15:30.919861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:78224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.074 [2024-04-15 18:15:30.919876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.074 [2024-04-15 18:15:30.919891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:78232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.074 [2024-04-15 18:15:30.919905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.074 [2024-04-15 18:15:30.919921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:78240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.074 [2024-04-15 18:15:30.919935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.074 [2024-04-15 18:15:30.919950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:78248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.074 [2024-04-15 18:15:30.919964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.074 [2024-04-15 18:15:30.919980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:78256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.074 [2024-04-15 18:15:30.919994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.074 [2024-04-15 18:15:30.920009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:78264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.074 [2024-04-15 18:15:30.920023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.074 [2024-04-15 18:15:30.920053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:78272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.074 [2024-04-15 18:15:30.920078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.074 [2024-04-15 18:15:30.920095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:78280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.074 [2024-04-15 18:15:30.920110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:57.074 [2024-04-15 18:15:30.920126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:78472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.074 [2024-04-15 18:15:30.920142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.074 [2024-04-15 18:15:30.920158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:78480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.074 [2024-04-15 18:15:30.920173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.074 [2024-04-15 18:15:30.920189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:78488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.074 [2024-04-15 18:15:30.920203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.074 [2024-04-15 18:15:30.920220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:78496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.074 [2024-04-15 18:15:30.920239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.074 [2024-04-15 18:15:30.920255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:78504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.074 [2024-04-15 18:15:30.920270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.074 [2024-04-15 18:15:30.920286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:78512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.074 [2024-04-15 18:15:30.920301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.074 [2024-04-15 18:15:30.920317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:78520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.074 [2024-04-15 18:15:30.920332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.074 [2024-04-15 18:15:30.920348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:78528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.074 [2024-04-15 18:15:30.920377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.074 [2024-04-15 18:15:30.920393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:78536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.074 [2024-04-15 18:15:30.920408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.074 [2024-04-15 18:15:30.920423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.074 [2024-04-15 18:15:30.920438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.074 [2024-04-15 18:15:30.920454] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:78552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.074 [2024-04-15 18:15:30.920468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.074 [2024-04-15 18:15:30.920483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:78560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.074 [2024-04-15 18:15:30.920498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.074 [2024-04-15 18:15:30.920513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:78568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.074 [2024-04-15 18:15:30.920527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.074 [2024-04-15 18:15:30.920544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:78576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.074 [2024-04-15 18:15:30.920557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.074 [2024-04-15 18:15:30.920573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:78584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.074 [2024-04-15 18:15:30.920587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.074 [2024-04-15 18:15:30.920603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:78592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.074 [2024-04-15 18:15:30.920617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.074 [2024-04-15 18:15:30.920637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:78600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.074 [2024-04-15 18:15:30.920652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.074 [2024-04-15 18:15:30.920668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:78608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.074 [2024-04-15 18:15:30.920682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.074 [2024-04-15 18:15:30.920698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:78616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.074 [2024-04-15 18:15:30.920712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.074 [2024-04-15 18:15:30.920728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:78624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.074 [2024-04-15 18:15:30.920742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.074 [2024-04-15 18:15:30.920758] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:78632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.074 [2024-04-15 18:15:30.920772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.075 [2024-04-15 18:15:30.920787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:78640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.075 [2024-04-15 18:15:30.920801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.075 [2024-04-15 18:15:30.920817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:78648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.075 [2024-04-15 18:15:30.920831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.075 [2024-04-15 18:15:30.920847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:78656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.075 [2024-04-15 18:15:30.920862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.075 [2024-04-15 18:15:30.920879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:78664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.075 [2024-04-15 18:15:30.920893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.075 [2024-04-15 18:15:30.920910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:78672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.075 [2024-04-15 18:15:30.920925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.075 [2024-04-15 18:15:30.920940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:78680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.075 [2024-04-15 18:15:30.920954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.075 [2024-04-15 18:15:30.920969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:78688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.075 [2024-04-15 18:15:30.920984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.075 [2024-04-15 18:15:30.920999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:78696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.075 [2024-04-15 18:15:30.921016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.075 [2024-04-15 18:15:30.921032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:78704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.075 [2024-04-15 18:15:30.921047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.075 [2024-04-15 18:15:30.921086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:78712 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.075 [2024-04-15 18:15:30.921104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.075 [2024-04-15 18:15:30.921120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:78720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.075 [2024-04-15 18:15:30.921135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.075 [2024-04-15 18:15:30.921163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:78728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.075 [2024-04-15 18:15:30.921179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.075 [2024-04-15 18:15:30.921196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:78736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.075 [2024-04-15 18:15:30.921210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.075 [2024-04-15 18:15:30.921226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:78744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.075 [2024-04-15 18:15:30.921241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.075 [2024-04-15 18:15:30.921257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:78752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.075 [2024-04-15 18:15:30.921272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.075 [2024-04-15 18:15:30.921288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:78760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.075 [2024-04-15 18:15:30.921302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.075 [2024-04-15 18:15:30.921318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:78768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.075 [2024-04-15 18:15:30.921333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.075 [2024-04-15 18:15:30.921349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:78776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.075 [2024-04-15 18:15:30.921363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.075 [2024-04-15 18:15:30.921395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:78784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.075 [2024-04-15 18:15:30.921409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.075 [2024-04-15 18:15:30.921425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:78792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:57.075 [2024-04-15 18:15:30.921439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.075 [2024-04-15 18:15:30.921455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:78800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.075 [2024-04-15 18:15:30.921476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.075 [2024-04-15 18:15:30.921492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:78808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.075 [2024-04-15 18:15:30.921507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.075 [2024-04-15 18:15:30.921522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:78816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.075 [2024-04-15 18:15:30.921537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.075 [2024-04-15 18:15:30.921552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:78824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.075 [2024-04-15 18:15:30.921566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.075 [2024-04-15 18:15:30.921581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:78832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.075 [2024-04-15 18:15:30.921595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.075 [2024-04-15 18:15:30.921611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:78840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.075 [2024-04-15 18:15:30.921625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.075 [2024-04-15 18:15:30.921640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:78848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.075 [2024-04-15 18:15:30.921655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.075 [2024-04-15 18:15:30.921691] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:57.075 [2024-04-15 18:15:30.921709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78856 len:8 PRP1 0x0 PRP2 0x0 00:28:57.075 [2024-04-15 18:15:30.921723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.075 [2024-04-15 18:15:30.921741] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:57.075 [2024-04-15 18:15:30.921754] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:57.075 [2024-04-15 18:15:30.921766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78864 len:8 PRP1 0x0 PRP2 0x0 00:28:57.075 [2024-04-15 18:15:30.921780] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.075 [2024-04-15 18:15:30.921793] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:57.075 [2024-04-15 18:15:30.921805] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:57.075 [2024-04-15 18:15:30.921816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78872 len:8 PRP1 0x0 PRP2 0x0 00:28:57.075 [2024-04-15 18:15:30.921830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.075 [2024-04-15 18:15:30.921843] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:57.075 [2024-04-15 18:15:30.921855] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:57.075 [2024-04-15 18:15:30.921866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78880 len:8 PRP1 0x0 PRP2 0x0 00:28:57.075 [2024-04-15 18:15:30.921884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.075 [2024-04-15 18:15:30.921898] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:57.075 [2024-04-15 18:15:30.921910] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:57.075 [2024-04-15 18:15:30.921921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78888 len:8 PRP1 0x0 PRP2 0x0 00:28:57.075 [2024-04-15 18:15:30.921935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.075 [2024-04-15 18:15:30.921949] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:57.075 [2024-04-15 18:15:30.921960] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:57.075 [2024-04-15 18:15:30.921972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78896 len:8 PRP1 0x0 PRP2 0x0 00:28:57.075 [2024-04-15 18:15:30.921986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.075 [2024-04-15 18:15:30.921999] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:57.075 [2024-04-15 18:15:30.922010] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:57.075 [2024-04-15 18:15:30.922022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78904 len:8 PRP1 0x0 PRP2 0x0 00:28:57.075 [2024-04-15 18:15:30.922035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.075 [2024-04-15 18:15:30.922048] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:57.075 [2024-04-15 18:15:30.922083] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:57.075 [2024-04-15 18:15:30.922097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78912 len:8 PRP1 0x0 PRP2 0x0 00:28:57.075 [2024-04-15 18:15:30.922112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.076 [2024-04-15 18:15:30.922126] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:57.076 [2024-04-15 18:15:30.922143] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:57.076 [2024-04-15 18:15:30.922156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78920 len:8 PRP1 0x0 PRP2 0x0 00:28:57.076 [2024-04-15 18:15:30.922170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.076 [2024-04-15 18:15:30.922184] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:57.076 [2024-04-15 18:15:30.922196] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:57.076 [2024-04-15 18:15:30.922208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78928 len:8 PRP1 0x0 PRP2 0x0 00:28:57.076 [2024-04-15 18:15:30.922221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.076 [2024-04-15 18:15:30.922236] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:57.076 [2024-04-15 18:15:30.922247] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:57.076 [2024-04-15 18:15:30.922260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78936 len:8 PRP1 0x0 PRP2 0x0 00:28:57.076 [2024-04-15 18:15:30.922273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.076 [2024-04-15 18:15:30.922287] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:57.076 [2024-04-15 18:15:30.922298] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:57.076 [2024-04-15 18:15:30.922314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78944 len:8 PRP1 0x0 PRP2 0x0 00:28:57.076 [2024-04-15 18:15:30.922328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.076 [2024-04-15 18:15:30.922342] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:57.076 [2024-04-15 18:15:30.922354] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:57.076 [2024-04-15 18:15:30.922366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78952 len:8 PRP1 0x0 PRP2 0x0 00:28:57.076 [2024-04-15 18:15:30.922380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.076 [2024-04-15 18:15:30.922393] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:57.076 [2024-04-15 18:15:30.922405] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:57.076 [2024-04-15 18:15:30.922417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78960 len:8 PRP1 0x0 PRP2 0x0 00:28:57.076 [2024-04-15 18:15:30.922431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:28:57.076 [2024-04-15 18:15:30.922445] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:57.076 [2024-04-15 18:15:30.922457] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:57.076 [2024-04-15 18:15:30.922470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78968 len:8 PRP1 0x0 PRP2 0x0 00:28:57.076 [2024-04-15 18:15:30.922484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.076 [2024-04-15 18:15:30.922498] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:57.076 [2024-04-15 18:15:30.922509] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:57.076 [2024-04-15 18:15:30.922522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78288 len:8 PRP1 0x0 PRP2 0x0 00:28:57.076 [2024-04-15 18:15:30.922535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.076 [2024-04-15 18:15:30.922549] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:57.076 [2024-04-15 18:15:30.922562] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:57.076 [2024-04-15 18:15:30.922574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78296 len:8 PRP1 0x0 PRP2 0x0 00:28:57.076 [2024-04-15 18:15:30.922588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.076 [2024-04-15 18:15:30.922602] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:57.076 [2024-04-15 18:15:30.922629] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:57.076 [2024-04-15 18:15:30.922641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78304 len:8 PRP1 0x0 PRP2 0x0 00:28:57.076 [2024-04-15 18:15:30.922654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.076 [2024-04-15 18:15:30.922668] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:57.076 [2024-04-15 18:15:30.922680] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:57.076 [2024-04-15 18:15:30.922692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78312 len:8 PRP1 0x0 PRP2 0x0 00:28:57.076 [2024-04-15 18:15:30.922705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.076 [2024-04-15 18:15:30.922719] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:57.076 [2024-04-15 18:15:30.922734] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:57.076 [2024-04-15 18:15:30.922746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78320 len:8 PRP1 0x0 PRP2 0x0 00:28:57.076 [2024-04-15 18:15:30.922760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.076 [2024-04-15 18:15:30.922775] nvme_qpair.c: 
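Every completion in the burst above carries the same status word, which spdk_nvme_print_completion renders as "ABORTED - SQ DELETION (00/08) ... p:0 m:0 dnr:0": status code type 0x0 (generic command status), status code 0x08 (command aborted due to SQ deletion), with the phase-tag, more, and do-not-retry bits all clear. A minimal C sketch of how those fields unpack from the upper half of completion-queue-entry Dword 3 (layout per NVMe 1.4; illustrative only, not SPDK's actual decoder):

#include <stdint.h>
#include <stdio.h>

/* Upper 16 bits of completion queue entry Dword 3 (NVMe 1.4):
 * bit 0      P    - phase tag
 * bits 1-8   SC   - status code
 * bits 9-11  SCT  - status code type
 * bits 12-13 CRD  - command retry delay
 * bit 14     M    - more
 * bit 15     DNR  - do not retry
 */
static void print_status(uint16_t status)
{
    unsigned p   = status & 0x1;
    unsigned sc  = (status >> 1) & 0xff;
    unsigned sct = (status >> 9) & 0x7;
    unsigned m   = (status >> 14) & 0x1;
    unsigned dnr = (status >> 15) & 0x1;

    printf("(%02x/%02x) p:%u m:%u dnr:%u\n", sct, sc, p, m, dnr);
}

int main(void)
{
    /* SCT 0x0 / SC 0x08 = ABORTED - SQ DELETION, as in the log above */
    print_status(0x08 << 1);
    return 0;
}

With sct=0/sc=0x08 the dnr bit is 0, so the initiator is allowed to retry the I/O — which is what the failover path below goes on to do.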
00:28:57.076 [2024-04-15 18:15:30.922926] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xf672d0 was disconnected and freed. reset controller.
00:28:57.076 [2024-04-15 18:15:30.922946] bdev_nvme.c:1853:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:28:57.076 [2024-04-15 18:15:30.922982 .. 18:15:30.923116] nvme_qpair.c: 223:nvme_admin_qpair_print_command / 474:spdk_nvme_print_completion: [four ASYNC EVENT REQUEST (0c) commands on qid:0 cid:0..3 completed ABORTED - SQ DELETION (00/08)]
00:28:57.076 [2024-04-15 18:15:30.923130] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:57.076 [2024-04-15 18:15:30.923191] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48af0 (9): Bad file descriptor
00:28:57.076 [2024-04-15 18:15:30.926454] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:57.076 [2024-04-15 18:15:30.962479] bdev_nvme.c:2050:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
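The seven lines above are the heart of the try.txt excerpt: the active qpair to 10.0.0.2:4420 drops (the test killed that path), outstanding and queued I/O is completed as aborted, and bdev_nvme fails the controller over to the secondary listener at 10.0.0.2:4421, after which the reset succeeds and I/O resumes. A schematic C sketch of that failover policy — an ordered path list plus a disconnect callback; the names here are illustrative, not SPDK's internals:

#include <stdio.h>
#include <stddef.h>

/* Schematic of the failover behavior visible in the log: one controller,
 * an ordered list of transport endpoints, and a disconnect handler that
 * rotates to the next path before resetting. */
struct path { const char *addr; const char *svcid; };

static const struct path paths[] = {
    { "10.0.0.2", "4420" },   /* primary, killed by the test */
    { "10.0.0.2", "4421" },   /* secondary, failover target  */
};

static size_t active; /* index of the path currently in use */

/* Called when the active qpair is disconnected and freed. */
static void on_disconnect(void)
{
    size_t next = (active + 1) % (sizeof(paths) / sizeof(paths[0]));
    printf("Start failover from %s:%s to %s:%s\n",
           paths[active].addr, paths[active].svcid,
           paths[next].addr, paths[next].svcid);
    active = next;
    /* ...then reconnect the admin and I/O qpairs on paths[active] and
     * resubmit the I/O that was manually completed as ABORTED... */
}

int main(void)
{
    on_disconnect(); /* mirrors the 18:15:30.922946 log line */
    return 0;
}

Because the aborted completions carried dnr:0, retrying the same I/O on the new path is legal, and the subsequent "Resetting controller successful" line confirms the workload continues on 4421.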
00:28:57.076 [2024-04-15 18:15:34.771158 ..] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: [second burst of NOTICE pairs on qid:1 — READ lba:93712..94016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 and WRITE lba:94096..94184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 — each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0; burst continues]
00:28:57.078 [2024-04-15 18:15:34.772793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.078 [2024-04-15 18:15:34.772808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:94192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.078 [2024-04-15 18:15:34.772822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.078 [2024-04-15 18:15:34.772842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:94200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.078 [2024-04-15 18:15:34.772856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.078 [2024-04-15 18:15:34.772872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:94208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.078 [2024-04-15 18:15:34.772886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.078 [2024-04-15 18:15:34.772902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:94216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.078 [2024-04-15 18:15:34.772916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.078 [2024-04-15 18:15:34.772931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:94224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.078 [2024-04-15 18:15:34.772945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.078 [2024-04-15 18:15:34.772960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:94232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.078 [2024-04-15 18:15:34.772975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.078 [2024-04-15 18:15:34.772990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:94240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.078 [2024-04-15 18:15:34.773004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.078 [2024-04-15 18:15:34.773019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:94248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.078 [2024-04-15 18:15:34.773033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.078 [2024-04-15 18:15:34.773049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:94256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.078 [2024-04-15 18:15:34.773088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.078 [2024-04-15 18:15:34.773107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:94264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.078 [2024-04-15 18:15:34.773123] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.078 [2024-04-15 18:15:34.773139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:94024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.078 [2024-04-15 18:15:34.773154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.078 [2024-04-15 18:15:34.773170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:94032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.078 [2024-04-15 18:15:34.773185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.078 [2024-04-15 18:15:34.773201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:94272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.078 [2024-04-15 18:15:34.773216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.078 [2024-04-15 18:15:34.773233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:94280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.078 [2024-04-15 18:15:34.773252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.078 [2024-04-15 18:15:34.773270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:94288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.078 [2024-04-15 18:15:34.773285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.078 [2024-04-15 18:15:34.773302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:94296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.078 [2024-04-15 18:15:34.773318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.078 [2024-04-15 18:15:34.773336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:94304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.078 [2024-04-15 18:15:34.773352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.078 [2024-04-15 18:15:34.773368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:94312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.078 [2024-04-15 18:15:34.773397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.078 [2024-04-15 18:15:34.773413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:94320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.078 [2024-04-15 18:15:34.773427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.078 [2024-04-15 18:15:34.773442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:94328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.078 [2024-04-15 18:15:34.773456] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.078 [2024-04-15 18:15:34.773471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:94336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.078 [2024-04-15 18:15:34.773485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.078 [2024-04-15 18:15:34.773501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:94344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.078 [2024-04-15 18:15:34.773515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.078 [2024-04-15 18:15:34.773530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:94352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.078 [2024-04-15 18:15:34.773544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.078 [2024-04-15 18:15:34.773559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:94360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.078 [2024-04-15 18:15:34.773573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.078 [2024-04-15 18:15:34.773589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:94368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.078 [2024-04-15 18:15:34.773603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.078 [2024-04-15 18:15:34.773618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:94376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.078 [2024-04-15 18:15:34.773632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.078 [2024-04-15 18:15:34.773647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:94384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.078 [2024-04-15 18:15:34.773665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.078 [2024-04-15 18:15:34.773681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:94392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.078 [2024-04-15 18:15:34.773695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.078 [2024-04-15 18:15:34.773710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:94400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.078 [2024-04-15 18:15:34.773725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.078 [2024-04-15 18:15:34.773740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:94408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.078 [2024-04-15 18:15:34.773754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.078 [2024-04-15 18:15:34.773769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:94416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.078 [2024-04-15 18:15:34.773784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.078 [2024-04-15 18:15:34.773799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:94424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.078 [2024-04-15 18:15:34.773813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.078 [2024-04-15 18:15:34.773829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:94432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.079 [2024-04-15 18:15:34.773843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.079 [2024-04-15 18:15:34.773858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:94440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.079 [2024-04-15 18:15:34.773872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.079 [2024-04-15 18:15:34.773887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:94448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.079 [2024-04-15 18:15:34.773901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.079 [2024-04-15 18:15:34.773916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:94456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.079 [2024-04-15 18:15:34.773930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.079 [2024-04-15 18:15:34.773945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:94464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.079 [2024-04-15 18:15:34.773959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.079 [2024-04-15 18:15:34.773975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:94472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.079 [2024-04-15 18:15:34.773988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.079 [2024-04-15 18:15:34.774004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:94480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.079 [2024-04-15 18:15:34.774017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.079 [2024-04-15 18:15:34.774036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:94488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.079 [2024-04-15 18:15:34.774051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.079 
[2024-04-15 18:15:34.774088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:94496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.079 [2024-04-15 18:15:34.774105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.079 [2024-04-15 18:15:34.774121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:94504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.079 [2024-04-15 18:15:34.774136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.079 [2024-04-15 18:15:34.774152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:94512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.079 [2024-04-15 18:15:34.774166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.079 [2024-04-15 18:15:34.774182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:94520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.079 [2024-04-15 18:15:34.774196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.079 [2024-04-15 18:15:34.774227] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:57.079 [2024-04-15 18:15:34.774245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94528 len:8 PRP1 0x0 PRP2 0x0 00:28:57.079 [2024-04-15 18:15:34.774258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.079 [2024-04-15 18:15:34.774279] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:57.079 [2024-04-15 18:15:34.774292] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:57.079 [2024-04-15 18:15:34.774305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94536 len:8 PRP1 0x0 PRP2 0x0 00:28:57.079 [2024-04-15 18:15:34.774319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.079 [2024-04-15 18:15:34.774333] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:57.079 [2024-04-15 18:15:34.774345] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:57.079 [2024-04-15 18:15:34.774357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94544 len:8 PRP1 0x0 PRP2 0x0 00:28:57.079 [2024-04-15 18:15:34.774385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.079 [2024-04-15 18:15:34.774399] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:57.079 [2024-04-15 18:15:34.774410] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:57.079 [2024-04-15 18:15:34.774422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94552 len:8 PRP1 0x0 PRP2 0x0 00:28:57.079 [2024-04-15 18:15:34.774435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.079 [2024-04-15 18:15:34.774449] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:57.079 [2024-04-15 18:15:34.774460] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:57.079 [2024-04-15 18:15:34.774473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94560 len:8 PRP1 0x0 PRP2 0x0 00:28:57.079 [2024-04-15 18:15:34.774486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.079 [2024-04-15 18:15:34.774504] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:57.079 [2024-04-15 18:15:34.774516] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:57.079 [2024-04-15 18:15:34.774528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94568 len:8 PRP1 0x0 PRP2 0x0 00:28:57.079 [2024-04-15 18:15:34.774541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.079 [2024-04-15 18:15:34.774555] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:57.079 [2024-04-15 18:15:34.774567] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:57.079 [2024-04-15 18:15:34.774579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94576 len:8 PRP1 0x0 PRP2 0x0 00:28:57.079 [2024-04-15 18:15:34.774592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.079 [2024-04-15 18:15:34.774606] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:57.079 [2024-04-15 18:15:34.774617] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:57.079 [2024-04-15 18:15:34.774629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94584 len:8 PRP1 0x0 PRP2 0x0 00:28:57.079 [2024-04-15 18:15:34.774642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.079 [2024-04-15 18:15:34.774655] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:57.079 [2024-04-15 18:15:34.774667] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:57.079 [2024-04-15 18:15:34.774678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94592 len:8 PRP1 0x0 PRP2 0x0 00:28:57.079 [2024-04-15 18:15:34.774691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.079 [2024-04-15 18:15:34.774711] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:57.079 [2024-04-15 18:15:34.774724] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:57.079 [2024-04-15 18:15:34.774735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94600 len:8 PRP1 0x0 PRP2 0x0 00:28:57.079 [2024-04-15 18:15:34.774748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:57.079 [2024-04-15 18:15:34.774762] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:57.079 [2024-04-15 18:15:34.774773] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:57.079 [2024-04-15 18:15:34.774785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94608 len:8 PRP1 0x0 PRP2 0x0 00:28:57.079 [2024-04-15 18:15:34.774798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.079 [2024-04-15 18:15:34.774811] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:57.079 [2024-04-15 18:15:34.774822] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:57.079 [2024-04-15 18:15:34.774833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94616 len:8 PRP1 0x0 PRP2 0x0 00:28:57.079 [2024-04-15 18:15:34.774846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.079 [2024-04-15 18:15:34.774859] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:57.079 [2024-04-15 18:15:34.774870] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:57.079 [2024-04-15 18:15:34.774887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94624 len:8 PRP1 0x0 PRP2 0x0 00:28:57.079 [2024-04-15 18:15:34.774904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.079 [2024-04-15 18:15:34.774917] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:57.079 [2024-04-15 18:15:34.774929] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:57.079 [2024-04-15 18:15:34.774940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94632 len:8 PRP1 0x0 PRP2 0x0 00:28:57.079 [2024-04-15 18:15:34.774953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.079 [2024-04-15 18:15:34.774967] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:57.079 [2024-04-15 18:15:34.774978] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:57.079 [2024-04-15 18:15:34.774990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94640 len:8 PRP1 0x0 PRP2 0x0 00:28:57.079 [2024-04-15 18:15:34.775003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.079 [2024-04-15 18:15:34.775016] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:57.079 [2024-04-15 18:15:34.775027] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:57.079 [2024-04-15 18:15:34.775054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94648 len:8 PRP1 0x0 PRP2 0x0 00:28:57.079 [2024-04-15 18:15:34.775075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.079 [2024-04-15 18:15:34.775090] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:57.079 [2024-04-15 18:15:34.775103] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:57.079 [2024-04-15 18:15:34.775115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94656 len:8 PRP1 0x0 PRP2 0x0 00:28:57.080 [2024-04-15 18:15:34.775128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.080 [2024-04-15 18:15:34.775147] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:57.080 [2024-04-15 18:15:34.775160] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:57.080 [2024-04-15 18:15:34.775172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94664 len:8 PRP1 0x0 PRP2 0x0 00:28:57.080 [2024-04-15 18:15:34.775186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.080 [2024-04-15 18:15:34.775199] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:57.080 [2024-04-15 18:15:34.775211] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:57.080 [2024-04-15 18:15:34.775223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94672 len:8 PRP1 0x0 PRP2 0x0 00:28:57.080 [2024-04-15 18:15:34.775236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.080 [2024-04-15 18:15:34.775250] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:57.080 [2024-04-15 18:15:34.775261] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:57.080 [2024-04-15 18:15:34.775273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94680 len:8 PRP1 0x0 PRP2 0x0 00:28:57.080 [2024-04-15 18:15:34.775286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.080 [2024-04-15 18:15:34.775300] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:57.080 [2024-04-15 18:15:34.775315] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:57.080 [2024-04-15 18:15:34.775329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94688 len:8 PRP1 0x0 PRP2 0x0 00:28:57.080 [2024-04-15 18:15:34.775343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.080 [2024-04-15 18:15:34.775372] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:57.080 [2024-04-15 18:15:34.775384] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:57.080 [2024-04-15 18:15:34.775396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94696 len:8 PRP1 0x0 PRP2 0x0 00:28:57.080 [2024-04-15 18:15:34.775409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.080 [2024-04-15 18:15:34.775423] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:28:57.080 [2024-04-15 18:15:34.775434] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:57.080 [2024-04-15 18:15:34.775446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94704 len:8 PRP1 0x0 PRP2 0x0 00:28:57.080 [2024-04-15 18:15:34.775459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.080 [2024-04-15 18:15:34.775472] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:57.080 [2024-04-15 18:15:34.775484] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:57.080 [2024-04-15 18:15:34.775496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94712 len:8 PRP1 0x0 PRP2 0x0 00:28:57.080 [2024-04-15 18:15:34.775509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.080 [2024-04-15 18:15:34.775522] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:57.080 [2024-04-15 18:15:34.775533] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:57.080 [2024-04-15 18:15:34.775544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94720 len:8 PRP1 0x0 PRP2 0x0 00:28:57.080 [2024-04-15 18:15:34.775557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.080 [2024-04-15 18:15:34.775576] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:57.080 [2024-04-15 18:15:34.775588] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:57.080 [2024-04-15 18:15:34.775600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94728 len:8 PRP1 0x0 PRP2 0x0 00:28:57.080 [2024-04-15 18:15:34.775612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.080 [2024-04-15 18:15:34.775625] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:57.080 [2024-04-15 18:15:34.775637] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:57.080 [2024-04-15 18:15:34.775649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94040 len:8 PRP1 0x0 PRP2 0x0 00:28:57.080 [2024-04-15 18:15:34.775661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.080 [2024-04-15 18:15:34.775675] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:57.080 [2024-04-15 18:15:34.775686] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:57.080 [2024-04-15 18:15:34.775698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94048 len:8 PRP1 0x0 PRP2 0x0 00:28:57.080 [2024-04-15 18:15:34.775710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.080 [2024-04-15 18:15:34.775727] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:57.080 [2024-04-15 18:15:34.775739] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:57.080 [2024-04-15 18:15:34.775750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94056 len:8 PRP1 0x0 PRP2 0x0 00:28:57.080 [2024-04-15 18:15:34.775764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.080 [2024-04-15 18:15:34.775777] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:57.080 [2024-04-15 18:15:34.775788] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:57.080 [2024-04-15 18:15:34.775800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94064 len:8 PRP1 0x0 PRP2 0x0 00:28:57.080 [2024-04-15 18:15:34.775812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.080 [2024-04-15 18:15:34.775826] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:57.080 [2024-04-15 18:15:34.775837] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:57.080 [2024-04-15 18:15:34.775848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94072 len:8 PRP1 0x0 PRP2 0x0 00:28:57.080 [2024-04-15 18:15:34.775861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.080 [2024-04-15 18:15:34.775874] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:57.080 [2024-04-15 18:15:34.775886] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:57.080 [2024-04-15 18:15:34.775897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94080 len:8 PRP1 0x0 PRP2 0x0 00:28:57.080 [2024-04-15 18:15:34.775910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.080 [2024-04-15 18:15:34.775924] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:57.080 [2024-04-15 18:15:34.775935] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:57.080 [2024-04-15 18:15:34.775946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94088 len:8 PRP1 0x0 PRP2 0x0 00:28:57.080 [2024-04-15 18:15:34.775959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.080 [2024-04-15 18:15:34.776023] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xf55020 was disconnected and freed. reset controller. 
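The wall of paired prints above is SPDK's qpair-teardown logging: once the reset deletes submission queue 1, every in-flight or queued I/O completes with the same SQ-deletion status, and nvme_qpair.c prints each command next to its completion. A quick way to sanity-check a run like this is to tally those prints. The sketch below is illustrative only: the regex mirrors the line format seen in this log, and "autotest.log" is a hypothetical path, not a file this job is known to produce.

# Illustrative sketch: tally aborted I/O prints from a log shaped like the one above.
import re
from collections import Counter

# Matches the nvme_io_qpair_print_command format shown in this run.
CMD_RE = re.compile(
    r"nvme_io_qpair_print_command: \*NOTICE\*: (READ|WRITE) "
    r"sqid:(\d+) cid:(\d+) nsid:(\d+) lba:(\d+) len:(\d+)"
)

def tally_aborted_io(log_text):
    """Count command prints per opcode and return the LBA span they cover."""
    counts = Counter()
    lbas = []
    for match in CMD_RE.finditer(log_text):
        opcode, _sqid, _cid, _nsid, lba, _length = match.groups()
        counts[opcode] += 1
        lbas.append(int(lba))
    span = (min(lbas), max(lbas)) if lbas else None
    return counts, span

if __name__ == "__main__":
    with open("autotest.log") as log:   # hypothetical path
        counts, span = tally_aborted_io(log.read())
    print(counts, "lba span:", span)    # e.g. Counter({'READ': ..., 'WRITE': ...})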
00:28:57.080 [2024-04-15 18:15:34.776065] bdev_nvme.c:1853:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 
00:28:57.080 [2024-04-15 18:15:34.776104] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:28:57.080 [2024-04-15 18:15:34.776123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:57.080 [2024-04-15 18:15:34.776140] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:28:57.080 [2024-04-15 18:15:34.776154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:57.080 [2024-04-15 18:15:34.776169] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:28:57.080 [2024-04-15 18:15:34.776183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:57.080 [2024-04-15 18:15:34.776197] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:28:57.080 [2024-04-15 18:15:34.776215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:57.080 [2024-04-15 18:15:34.776230] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:28:57.080 [2024-04-15 18:15:34.779502] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:28:57.080 [2024-04-15 18:15:34.779542] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48af0 (9): Bad file descriptor 
00:28:57.080 [2024-04-15 18:15:34.814425] bdev_nvme.c:2050:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
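The "(00/08)" printed with every completion above is the NVMe status pair: Status Code Type 0x0 (generic command status) and Status Code 0x08, which the NVMe base specification defines as "Command Aborted due to SQ Deletion". dnr:0 means the Do Not Retry bit is clear, so the initiator may retry the command, here after the failover to 10.0.0.2:4422. A minimal decoder sketch, covering only the handful of generic codes excerpted below (this table is an excerpt, not the full specification):

# Minimal sketch: decode an NVMe (SCT/SC) pair such as the (00/08) seen above.
GENERIC_STATUS = {          # Status Code Type 0x0: generic command status (excerpt)
    0x00: "Successful Completion",
    0x02: "Invalid Field in Command",
    0x04: "Data Transfer Error",
    0x07: "Command Abort Requested",
    0x08: "Command Aborted due to SQ Deletion",
}

def decode_status(sct, sc):
    """Render a Status Code Type / Status Code pair as human-readable text."""
    if sct == 0x0:
        return GENERIC_STATUS.get(sc, "Generic status 0x%02x" % sc)
    return "SCT 0x%x / SC 0x%02x" % (sct, sc)

print(decode_status(0x0, 0x08))   # -> Command Aborted due to SQ Deletion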
00:28:57.080 [2024-04-15 18:15:39.414360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:31424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:57.080 [2024-04-15 18:15:39.414416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
[... roughly five seconds after the failover, a second abort storm: further READ command prints (lba 31432-31904, SGL TRANSPORT DATA BLOCK) each followed by the same ABORTED - SQ DELETION (00/08) completion on qid:1, elided; the log is truncated mid-entry below ...]
00:28:57.082 [2024-04-15 18:15:39.416259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:31904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:57.082 [2024-04-15 18:15:39.416273] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.082 [2024-04-15 18:15:39.416292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:31912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.082 [2024-04-15 18:15:39.416307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.082 [2024-04-15 18:15:39.416323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:31920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.082 [2024-04-15 18:15:39.416337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.082 [2024-04-15 18:15:39.416366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:31928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.082 [2024-04-15 18:15:39.416380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.082 [2024-04-15 18:15:39.416395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:31936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.082 [2024-04-15 18:15:39.416409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.082 [2024-04-15 18:15:39.416424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:31944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.082 [2024-04-15 18:15:39.416437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.082 [2024-04-15 18:15:39.416452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:31952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.082 [2024-04-15 18:15:39.416466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.082 [2024-04-15 18:15:39.416481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:31960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.082 [2024-04-15 18:15:39.416495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.082 [2024-04-15 18:15:39.416510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:31968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.082 [2024-04-15 18:15:39.416524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.082 [2024-04-15 18:15:39.416539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:31976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.082 [2024-04-15 18:15:39.416553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.082 [2024-04-15 18:15:39.416567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:31984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.082 [2024-04-15 18:15:39.416581] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.082 [2024-04-15 18:15:39.416596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:31992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.082 [2024-04-15 18:15:39.416609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.082 [2024-04-15 18:15:39.416624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:32000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.082 [2024-04-15 18:15:39.416638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.082 [2024-04-15 18:15:39.416653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:32008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.082 [2024-04-15 18:15:39.416670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.082 [2024-04-15 18:15:39.416686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:32016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.082 [2024-04-15 18:15:39.416700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.082 [2024-04-15 18:15:39.416715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:32024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.082 [2024-04-15 18:15:39.416729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.082 [2024-04-15 18:15:39.416744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:32032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.082 [2024-04-15 18:15:39.416758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.082 [2024-04-15 18:15:39.416773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:32040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.082 [2024-04-15 18:15:39.416787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.082 [2024-04-15 18:15:39.416802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:32048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.082 [2024-04-15 18:15:39.416816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.082 [2024-04-15 18:15:39.416831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:32056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.083 [2024-04-15 18:15:39.416844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.083 [2024-04-15 18:15:39.416859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:32064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.083 [2024-04-15 18:15:39.416873] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.083 [2024-04-15 18:15:39.416889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:32072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.083 [2024-04-15 18:15:39.416902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.083 [2024-04-15 18:15:39.416917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:32080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.083 [2024-04-15 18:15:39.416931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.083 [2024-04-15 18:15:39.416946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:32088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.083 [2024-04-15 18:15:39.416961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.083 [2024-04-15 18:15:39.416977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:32096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.083 [2024-04-15 18:15:39.416990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.083 [2024-04-15 18:15:39.417005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:32104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.083 [2024-04-15 18:15:39.417019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.083 [2024-04-15 18:15:39.417051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:32112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.083 [2024-04-15 18:15:39.417075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.083 [2024-04-15 18:15:39.417091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:32120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.083 [2024-04-15 18:15:39.417106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.083 [2024-04-15 18:15:39.417121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:32128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.083 [2024-04-15 18:15:39.417136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.083 [2024-04-15 18:15:39.417151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:32136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.083 [2024-04-15 18:15:39.417165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.083 [2024-04-15 18:15:39.417181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:32144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.083 [2024-04-15 18:15:39.417195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.083 [2024-04-15 18:15:39.417211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:32152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.083 [2024-04-15 18:15:39.417225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.083 [2024-04-15 18:15:39.417240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:32160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.083 [2024-04-15 18:15:39.417254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.083 [2024-04-15 18:15:39.417270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:32168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.083 [2024-04-15 18:15:39.417286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.083 [2024-04-15 18:15:39.417302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:32176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.083 [2024-04-15 18:15:39.417317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.083 [2024-04-15 18:15:39.417333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:32184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.083 [2024-04-15 18:15:39.417364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.083 [2024-04-15 18:15:39.417380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:32192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.083 [2024-04-15 18:15:39.417395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.083 [2024-04-15 18:15:39.417411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:32200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.083 [2024-04-15 18:15:39.417424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.083 [2024-04-15 18:15:39.417440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:32208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.083 [2024-04-15 18:15:39.417453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.083 [2024-04-15 18:15:39.417472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:32216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.083 [2024-04-15 18:15:39.417487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.083 [2024-04-15 18:15:39.417503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:32224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.083 [2024-04-15 18:15:39.417517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:57.083 [2024-04-15 18:15:39.417532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:32232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.083 [2024-04-15 18:15:39.417546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.083 [2024-04-15 18:15:39.417561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:32240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.083 [2024-04-15 18:15:39.417575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.083 [2024-04-15 18:15:39.417590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:32248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.083 [2024-04-15 18:15:39.417605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.083 [2024-04-15 18:15:39.417620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.083 [2024-04-15 18:15:39.417634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.083 [2024-04-15 18:15:39.417650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:32264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.083 [2024-04-15 18:15:39.417664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.083 [2024-04-15 18:15:39.417680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:32272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.083 [2024-04-15 18:15:39.417694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.083 [2024-04-15 18:15:39.417709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:32280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.083 [2024-04-15 18:15:39.417723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.083 [2024-04-15 18:15:39.417738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:32288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.083 [2024-04-15 18:15:39.417758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.083 [2024-04-15 18:15:39.417774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:32296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.083 [2024-04-15 18:15:39.417788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.083 [2024-04-15 18:15:39.417803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:32304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.083 [2024-04-15 18:15:39.417816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.083 [2024-04-15 18:15:39.417831] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:32312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.083 [2024-04-15 18:15:39.417849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.083 [2024-04-15 18:15:39.417864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:32320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.083 [2024-04-15 18:15:39.417878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.083 [2024-04-15 18:15:39.417893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:32328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.083 [2024-04-15 18:15:39.417906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.083 [2024-04-15 18:15:39.417921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:32336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.083 [2024-04-15 18:15:39.417935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.083 [2024-04-15 18:15:39.417950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:32344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.083 [2024-04-15 18:15:39.417964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.083 [2024-04-15 18:15:39.417978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:32352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.083 [2024-04-15 18:15:39.417992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.083 [2024-04-15 18:15:39.418007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:32360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.083 [2024-04-15 18:15:39.418021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.083 [2024-04-15 18:15:39.418035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:32368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.083 [2024-04-15 18:15:39.418073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.083 [2024-04-15 18:15:39.418090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:32376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.084 [2024-04-15 18:15:39.418105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.084 [2024-04-15 18:15:39.418121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:32384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.084 [2024-04-15 18:15:39.418135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.084 [2024-04-15 18:15:39.418152] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:32392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.084 [2024-04-15 18:15:39.418166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.084 [2024-04-15 18:15:39.418182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:32400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.084 [2024-04-15 18:15:39.418197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.084 [2024-04-15 18:15:39.418212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:32408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.084 [2024-04-15 18:15:39.418227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.084 [2024-04-15 18:15:39.418247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:32416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.084 [2024-04-15 18:15:39.418267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.084 [2024-04-15 18:15:39.418284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:32424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.084 [2024-04-15 18:15:39.418298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.084 [2024-04-15 18:15:39.418314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:32432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.084 [2024-04-15 18:15:39.418328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.084 [2024-04-15 18:15:39.418358] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf69170 is same with the state(5) to be set 00:28:57.084 [2024-04-15 18:15:39.418377] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:57.084 [2024-04-15 18:15:39.418389] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:57.084 [2024-04-15 18:15:39.418401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32440 len:8 PRP1 0x0 PRP2 0x0 00:28:57.084 [2024-04-15 18:15:39.418429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.084 [2024-04-15 18:15:39.418491] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xf69170 was disconnected and freed. reset controller. 
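
The long run of READ completions trimmed above, and the handful of WRITE completions just before the reset, are the expected signature of a queue-pair teardown rather than a data-path bug: when bdev_nvme gives up on the TCP connection it deletes submission queue 1, and every command still queued against it is manually completed with ABORTED - SQ DELETION (status 00/08) instead of being silently lost. On a saved run this can be sanity-checked by counting the abort completions against any other errors; a minimal sketch, assuming the output was captured to the try.txt file this test writes:

  # Expected: a large abort count, and few or no unrelated *ERROR* lines (sketch)
  grep -c 'ABORTED - SQ DELETION' try.txt
  grep -v 'ABORTED - SQ DELETION' try.txt | grep -c '\*ERROR\*'
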
00:28:57.084 [2024-04-15 18:15:39.418509] bdev_nvme.c:1853:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:28:57.084 [2024-04-15 18:15:39.418541] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:57.084 [2024-04-15 18:15:39.418558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.084 [2024-04-15 18:15:39.418573] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:57.084 [2024-04-15 18:15:39.418586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.084 [2024-04-15 18:15:39.418600] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:57.084 [2024-04-15 18:15:39.418612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.084 [2024-04-15 18:15:39.418641] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:57.084 [2024-04-15 18:15:39.418655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.084 [2024-04-15 18:15:39.418668] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.084 [2024-04-15 18:15:39.418707] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48af0 (9): Bad file descriptor 00:28:57.084 [2024-04-15 18:15:39.421997] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.084 [2024-04-15 18:15:39.453260] bdev_nvme.c:2050:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
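
This block shows the failover machinery working end to end: bdev_nvme keeps the registered transport IDs for the controller as a failover list, and when the active path dies, bdev_nvme_failover_trid rotates from 10.0.0.2:4422 to 10.0.0.2:4420, aborts the four pending ASYNC EVENT REQUESTs still queued on the admin queue, and resets the controller on the new path. Whether the controller came back can be checked over the same RPC socket the test already uses; a minimal sketch (the trailing echo is illustrative, not from this script):

  # The controller should still be listed under its bdev name after the reset (sketch)
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0 && echo 'NVMe0 recovered'
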
00:28:57.084 
00:28:57.084 Latency(us)
00:28:57.084 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:57.084 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:28:57.084 Verification LBA range: start 0x0 length 0x4000
00:28:57.084 NVMe0n1 : 15.01 8669.35 33.86 257.84 0.00 14312.38 758.52 19612.25
00:28:57.084 ===================================================================================================================
00:28:57.084 Total : 8669.35 33.86 257.84 0.00 14312.38 758.52 19612.25
00:28:57.084 Received shutdown signal, test time was about 15.000000 seconds
00:28:57.084 
00:28:57.084 Latency(us)
00:28:57.084 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:57.084 ===================================================================================================================
00:28:57.084 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:57.084 18:15:45 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:28:57.084 18:15:45 -- host/failover.sh@65 -- # count=3
00:28:57.084 18:15:45 -- host/failover.sh@67 -- # (( count != 3 ))
00:28:57.084 18:15:45 -- host/failover.sh@73 -- # bdevperf_pid=3427601
00:28:57.084 18:15:45 -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:28:57.084 18:15:45 -- host/failover.sh@75 -- # waitforlisten 3427601 /var/tmp/bdevperf.sock
00:28:57.084 18:15:45 -- common/autotest_common.sh@817 -- # '[' -z 3427601 ']'
00:28:57.084 18:15:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:28:57.084 18:15:45 -- common/autotest_common.sh@822 -- # local max_retries=100
00:28:57.084 18:15:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
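
bdevperf is launched here with -z, so it comes up idle with only its RPC server listening on /var/tmp/bdevperf.sock; -q 128 and -o 4096 size the workload at 128 outstanding 4096-byte I/Os, -w verify adds data verification to the I/O mix, and -t 1 bounds each run to one second. The bdev itself is attached afterwards over that socket, and the run is kicked off by bdevperf.py, as the trace that follows shows; condensed into a sketch (paths relative to the spdk tree of this job):

  # RPC-driven bdevperf flow used by failover.sh (sketch)
  build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 &
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
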
00:28:57.084 18:15:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:57.084 18:15:45 -- common/autotest_common.sh@10 -- # set +x 00:28:57.084 18:15:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:57.084 18:15:45 -- common/autotest_common.sh@850 -- # return 0 00:28:57.084 18:15:45 -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:57.084 [2024-04-15 18:15:45.630886] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:57.084 18:15:45 -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:28:57.084 [2024-04-15 18:15:45.963856] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:28:57.084 18:15:45 -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:57.653 NVMe0n1 00:28:57.653 18:15:46 -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:57.911 00:28:57.911 18:15:46 -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:58.476 00:28:58.476 18:15:47 -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:58.476 18:15:47 -- host/failover.sh@82 -- # grep -q NVMe0 00:28:58.735 18:15:47 -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:58.993 18:15:47 -- host/failover.sh@87 -- # sleep 3 00:29:02.283 18:15:50 -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:02.283 18:15:50 -- host/failover.sh@88 -- # grep -q NVMe0 00:29:02.284 18:15:51 -- host/failover.sh@90 -- # run_test_pid=3428270 00:29:02.284 18:15:51 -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:02.284 18:15:51 -- host/failover.sh@92 -- # wait 3428270 00:29:03.659 0 00:29:03.659 18:15:52 -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:03.659 [2024-04-15 18:15:45.104856] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
00:29:03.659 [2024-04-15 18:15:45.104957] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3427601 ] 00:29:03.659 EAL: No free 2048 kB hugepages reported on node 1 00:29:03.660 [2024-04-15 18:15:45.172682] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:03.660 [2024-04-15 18:15:45.254575] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:03.660 [2024-04-15 18:15:47.820221] bdev_nvme.c:1853:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:29:03.660 [2024-04-15 18:15:47.820294] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:03.660 [2024-04-15 18:15:47.820319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.660 [2024-04-15 18:15:47.820336] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:03.660 [2024-04-15 18:15:47.820366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.660 [2024-04-15 18:15:47.820380] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:03.660 [2024-04-15 18:15:47.820394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.660 [2024-04-15 18:15:47.820408] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:03.660 [2024-04-15 18:15:47.820422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.660 [2024-04-15 18:15:47.820436] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.660 [2024-04-15 18:15:47.820483] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.660 [2024-04-15 18:15:47.820515] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x178faf0 (9): Bad file descriptor 00:29:03.660 [2024-04-15 18:15:47.995221] bdev_nvme.c:2050:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:29:03.660 Running I/O for 1 seconds... 
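
What try.txt records here is the failover being exercised deliberately: three paths to the same subsystem were registered under one controller name (ports 4420, 4421 and 4422), and the script repeatedly detaches whichever path is currently active, as seen above with 4420, letting bdev_nvme fail over to the next trid; the grep -c 'Resetting controller successful' count at failover.sh@65 is the pass criterion. One round of that loop, condensed into a sketch:

  # Drop the active path, give bdev_nvme time to fail over, confirm NVMe0 survived (sketch)
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  sleep 3
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0
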
00:29:03.660 00:29:03.660 Latency(us) 00:29:03.660 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:03.660 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:03.660 Verification LBA range: start 0x0 length 0x4000 00:29:03.660 NVMe0n1 : 1.02 8536.47 33.35 0.00 0.00 14932.80 2803.48 15146.10 00:29:03.660 =================================================================================================================== 00:29:03.660 Total : 8536.47 33.35 0.00 0.00 14932.80 2803.48 15146.10 00:29:03.660 18:15:52 -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:03.660 18:15:52 -- host/failover.sh@95 -- # grep -q NVMe0 00:29:03.919 18:15:52 -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:04.178 18:15:53 -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:04.178 18:15:53 -- host/failover.sh@99 -- # grep -q NVMe0 00:29:05.116 18:15:53 -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:05.376 18:15:54 -- host/failover.sh@101 -- # sleep 3 00:29:08.668 18:15:57 -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:08.668 18:15:57 -- host/failover.sh@103 -- # grep -q NVMe0 00:29:08.668 18:15:57 -- host/failover.sh@108 -- # killprocess 3427601 00:29:08.668 18:15:57 -- common/autotest_common.sh@936 -- # '[' -z 3427601 ']' 00:29:08.668 18:15:57 -- common/autotest_common.sh@940 -- # kill -0 3427601 00:29:08.668 18:15:57 -- common/autotest_common.sh@941 -- # uname 00:29:08.668 18:15:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:08.668 18:15:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3427601 00:29:08.668 18:15:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:29:08.668 18:15:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:29:08.668 18:15:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3427601' 00:29:08.668 killing process with pid 3427601 00:29:08.668 18:15:57 -- common/autotest_common.sh@955 -- # kill 3427601 00:29:08.668 18:15:57 -- common/autotest_common.sh@960 -- # wait 3427601 00:29:08.926 18:15:57 -- host/failover.sh@110 -- # sync 00:29:08.926 18:15:57 -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:09.184 18:15:58 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:29:09.184 18:15:58 -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:09.184 18:15:58 -- host/failover.sh@116 -- # nvmftestfini 00:29:09.184 18:15:58 -- nvmf/common.sh@477 -- # nvmfcleanup 00:29:09.184 18:15:58 -- nvmf/common.sh@117 -- # sync 00:29:09.184 18:15:58 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:09.184 18:15:58 -- nvmf/common.sh@120 -- # set +e 00:29:09.184 18:15:58 -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:09.184 18:15:58 -- nvmf/common.sh@122 -- 
# modprobe -v -r nvme-tcp 00:29:09.184 rmmod nvme_tcp 00:29:09.184 rmmod nvme_fabrics 00:29:09.184 rmmod nvme_keyring 00:29:09.184 18:15:58 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:09.184 18:15:58 -- nvmf/common.sh@124 -- # set -e 00:29:09.184 18:15:58 -- nvmf/common.sh@125 -- # return 0 00:29:09.184 18:15:58 -- nvmf/common.sh@478 -- # '[' -n 3425325 ']' 00:29:09.184 18:15:58 -- nvmf/common.sh@479 -- # killprocess 3425325 00:29:09.184 18:15:58 -- common/autotest_common.sh@936 -- # '[' -z 3425325 ']' 00:29:09.184 18:15:58 -- common/autotest_common.sh@940 -- # kill -0 3425325 00:29:09.184 18:15:58 -- common/autotest_common.sh@941 -- # uname 00:29:09.184 18:15:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:09.184 18:15:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3425325 00:29:09.443 18:15:58 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:29:09.443 18:15:58 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:29:09.443 18:15:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3425325' 00:29:09.443 killing process with pid 3425325 00:29:09.443 18:15:58 -- common/autotest_common.sh@955 -- # kill 3425325 00:29:09.443 18:15:58 -- common/autotest_common.sh@960 -- # wait 3425325 00:29:09.703 18:15:58 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:29:09.703 18:15:58 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:29:09.703 18:15:58 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:29:09.703 18:15:58 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:09.703 18:15:58 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:09.703 18:15:58 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:09.703 18:15:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:09.703 18:15:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:11.614 18:16:00 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:11.614 00:29:11.614 real 0m37.486s 00:29:11.614 user 2m13.746s 00:29:11.614 sys 0m6.808s 00:29:11.614 18:16:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:11.614 18:16:00 -- common/autotest_common.sh@10 -- # set +x 00:29:11.614 ************************************ 00:29:11.614 END TEST nvmf_failover 00:29:11.614 ************************************ 00:29:11.614 18:16:00 -- nvmf/nvmf.sh@99 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:29:11.614 18:16:00 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:29:11.614 18:16:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:11.614 18:16:00 -- common/autotest_common.sh@10 -- # set +x 00:29:11.892 ************************************ 00:29:11.892 START TEST nvmf_discovery 00:29:11.892 ************************************ 00:29:11.892 18:16:00 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:29:11.892 * Looking for test storage... 
00:29:11.892 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:11.892 18:16:00 -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:11.892 18:16:00 -- nvmf/common.sh@7 -- # uname -s 00:29:11.892 18:16:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:11.892 18:16:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:11.892 18:16:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:11.892 18:16:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:11.892 18:16:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:11.892 18:16:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:11.892 18:16:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:11.892 18:16:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:11.892 18:16:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:11.892 18:16:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:11.892 18:16:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:29:11.892 18:16:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:29:11.892 18:16:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:11.892 18:16:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:11.892 18:16:00 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:11.892 18:16:00 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:11.892 18:16:00 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:11.892 18:16:00 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:11.892 18:16:00 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:11.892 18:16:00 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:11.892 18:16:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.892 18:16:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.892 18:16:00 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.892 18:16:00 -- paths/export.sh@5 -- # export PATH 00:29:11.892 18:16:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.892 18:16:00 -- nvmf/common.sh@47 -- # : 0 00:29:11.892 18:16:00 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:11.892 18:16:00 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:11.892 18:16:00 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:11.892 18:16:00 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:11.892 18:16:00 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:11.892 18:16:00 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:11.893 18:16:00 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:11.893 18:16:00 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:11.893 18:16:00 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:29:11.893 18:16:00 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:29:11.893 18:16:00 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:29:11.893 18:16:00 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:29:11.893 18:16:00 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:29:11.893 18:16:00 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:29:11.893 18:16:00 -- host/discovery.sh@25 -- # nvmftestinit 00:29:11.893 18:16:00 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:29:11.893 18:16:00 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:11.893 18:16:00 -- nvmf/common.sh@437 -- # prepare_net_devs 00:29:11.893 18:16:00 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:29:11.893 18:16:00 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:29:11.893 18:16:00 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:11.893 18:16:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:11.893 18:16:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:11.893 18:16:00 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:29:11.893 18:16:00 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:29:11.893 18:16:00 -- nvmf/common.sh@285 -- # xtrace_disable 00:29:11.893 18:16:00 -- common/autotest_common.sh@10 -- # set +x 00:29:14.426 18:16:02 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:29:14.426 18:16:02 -- nvmf/common.sh@291 -- # pci_devs=() 00:29:14.426 18:16:02 -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:14.426 18:16:02 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:14.426 18:16:02 -- 
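
Unlike the failover test, discovery.sh talks to the well-known discovery subsystem (DISCOVERY_NQN nqn.2014-08.org.nvmexpress.discovery) on DISCOVERY_PORT 8009 rather than to an I/O subsystem, presenting the host NQN generated above with nvme gen-hostnqn. For reference, the same service can be queried from a Linux initiator with nvme-cli; an illustrative example, not part of this script, using the target address this job sets up below:

  # List the subsystems and listeners the target advertises (illustrative, not from the log)
  nvme discover -t tcp -a 10.0.0.2 -s 8009
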
nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:14.426 18:16:02 -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:14.426 18:16:02 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:14.426 18:16:02 -- nvmf/common.sh@295 -- # net_devs=() 00:29:14.426 18:16:02 -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:14.426 18:16:02 -- nvmf/common.sh@296 -- # e810=() 00:29:14.426 18:16:02 -- nvmf/common.sh@296 -- # local -ga e810 00:29:14.426 18:16:02 -- nvmf/common.sh@297 -- # x722=() 00:29:14.426 18:16:02 -- nvmf/common.sh@297 -- # local -ga x722 00:29:14.426 18:16:02 -- nvmf/common.sh@298 -- # mlx=() 00:29:14.426 18:16:02 -- nvmf/common.sh@298 -- # local -ga mlx 00:29:14.426 18:16:02 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:14.426 18:16:02 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:14.426 18:16:02 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:14.426 18:16:02 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:14.426 18:16:02 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:14.426 18:16:02 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:14.426 18:16:02 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:14.426 18:16:02 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:14.426 18:16:02 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:14.426 18:16:02 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:14.426 18:16:02 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:14.426 18:16:02 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:14.426 18:16:02 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:14.426 18:16:02 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:14.426 18:16:02 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:14.426 18:16:02 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:14.426 18:16:02 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:14.426 18:16:02 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:14.426 18:16:02 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:29:14.426 Found 0000:84:00.0 (0x8086 - 0x159b) 00:29:14.426 18:16:02 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:14.426 18:16:02 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:14.426 18:16:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:14.426 18:16:02 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:14.426 18:16:02 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:14.426 18:16:02 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:14.426 18:16:02 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:29:14.426 Found 0000:84:00.1 (0x8086 - 0x159b) 00:29:14.426 18:16:02 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:14.426 18:16:02 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:14.426 18:16:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:14.426 18:16:02 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:14.426 18:16:02 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:14.426 18:16:02 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:14.426 18:16:02 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:14.426 18:16:02 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:14.426 18:16:02 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:14.426 
18:16:02 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:14.426 18:16:02 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:29:14.426 18:16:02 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:14.426 18:16:02 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:29:14.426 Found net devices under 0000:84:00.0: cvl_0_0 00:29:14.426 18:16:02 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:29:14.426 18:16:02 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:14.426 18:16:02 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:14.426 18:16:02 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:29:14.426 18:16:02 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:14.426 18:16:02 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:29:14.426 Found net devices under 0000:84:00.1: cvl_0_1 00:29:14.426 18:16:02 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:29:14.426 18:16:02 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:29:14.426 18:16:02 -- nvmf/common.sh@403 -- # is_hw=yes 00:29:14.426 18:16:02 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:29:14.426 18:16:02 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:29:14.426 18:16:02 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:29:14.426 18:16:02 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:14.426 18:16:02 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:14.426 18:16:02 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:14.426 18:16:02 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:14.426 18:16:02 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:14.426 18:16:02 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:14.426 18:16:02 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:14.426 18:16:02 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:14.426 18:16:02 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:14.426 18:16:02 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:14.426 18:16:02 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:14.426 18:16:02 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:14.426 18:16:02 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:14.426 18:16:02 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:14.426 18:16:02 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:14.426 18:16:02 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:14.426 18:16:02 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:14.426 18:16:02 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:14.426 18:16:02 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:14.426 18:16:02 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:14.426 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:14.426 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.140 ms 00:29:14.426 00:29:14.426 --- 10.0.0.2 ping statistics --- 00:29:14.426 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:14.426 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:29:14.426 18:16:02 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:14.426 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:14.426 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:29:14.426 00:29:14.426 --- 10.0.0.1 ping statistics --- 00:29:14.426 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:14.426 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:29:14.426 18:16:02 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:14.426 18:16:02 -- nvmf/common.sh@411 -- # return 0 00:29:14.427 18:16:02 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:29:14.427 18:16:02 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:14.427 18:16:02 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:29:14.427 18:16:02 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:29:14.427 18:16:02 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:14.427 18:16:02 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:29:14.427 18:16:02 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:29:14.427 18:16:03 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:29:14.427 18:16:03 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:29:14.427 18:16:03 -- common/autotest_common.sh@710 -- # xtrace_disable 00:29:14.427 18:16:03 -- common/autotest_common.sh@10 -- # set +x 00:29:14.427 18:16:03 -- nvmf/common.sh@470 -- # nvmfpid=3431135 00:29:14.427 18:16:03 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:29:14.427 18:16:03 -- nvmf/common.sh@471 -- # waitforlisten 3431135 00:29:14.427 18:16:03 -- common/autotest_common.sh@817 -- # '[' -z 3431135 ']' 00:29:14.427 18:16:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:14.427 18:16:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:14.427 18:16:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:14.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:14.427 18:16:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:14.427 18:16:03 -- common/autotest_common.sh@10 -- # set +x 00:29:14.427 [2024-04-15 18:16:03.072912] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:29:14.427 [2024-04-15 18:16:03.073013] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:14.427 EAL: No free 2048 kB hugepages reported on node 1 00:29:14.427 [2024-04-15 18:16:03.151651] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:14.427 [2024-04-15 18:16:03.244580] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:14.427 [2024-04-15 18:16:03.244630] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:14.427 [2024-04-15 18:16:03.244647] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:14.427 [2024-04-15 18:16:03.244662] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:14.427 [2024-04-15 18:16:03.244675] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
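At this point nvmf/common.sh has finished the namespace plumbing and nvmfappstart brings up the target under test. A condensed sketch of that bring-up: the repository-relative paths and the 100 x 0.1s poll budget are illustrative assumptions (the trace runs absolute Jenkins workspace paths and common/autotest_common.sh's waitforlisten), while the nvmf_tgt flags and the netns prefix are copied from the nvmf/common.sh@469 line above.

  # Launch nvmf_tgt inside the target namespace with the traced flags, then
  # poll its RPC socket until it answers; rpc_get_methods is the same probe
  # waitforlisten issues.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!
  for _ in $(seq 1 100); do
      ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
      sleep 0.1
  done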
00:29:14.427 [2024-04-15 18:16:03.244715] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:14.427 18:16:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:14.427 18:16:03 -- common/autotest_common.sh@850 -- # return 0 00:29:14.427 18:16:03 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:29:14.427 18:16:03 -- common/autotest_common.sh@716 -- # xtrace_disable 00:29:14.427 18:16:03 -- common/autotest_common.sh@10 -- # set +x 00:29:14.687 18:16:03 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:14.687 18:16:03 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:14.687 18:16:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:14.687 18:16:03 -- common/autotest_common.sh@10 -- # set +x 00:29:14.687 [2024-04-15 18:16:03.394315] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:14.687 18:16:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:14.687 18:16:03 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:29:14.687 18:16:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:14.687 18:16:03 -- common/autotest_common.sh@10 -- # set +x 00:29:14.687 [2024-04-15 18:16:03.402504] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:29:14.687 18:16:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:14.687 18:16:03 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:29:14.687 18:16:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:14.687 18:16:03 -- common/autotest_common.sh@10 -- # set +x 00:29:14.687 null0 00:29:14.687 18:16:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:14.687 18:16:03 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:29:14.687 18:16:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:14.687 18:16:03 -- common/autotest_common.sh@10 -- # set +x 00:29:14.687 null1 00:29:14.687 18:16:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:14.687 18:16:03 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:29:14.687 18:16:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:14.687 18:16:03 -- common/autotest_common.sh@10 -- # set +x 00:29:14.687 18:16:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:14.687 18:16:03 -- host/discovery.sh@45 -- # hostpid=3431161 00:29:14.687 18:16:03 -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:29:14.687 18:16:03 -- host/discovery.sh@46 -- # waitforlisten 3431161 /tmp/host.sock 00:29:14.687 18:16:03 -- common/autotest_common.sh@817 -- # '[' -z 3431161 ']' 00:29:14.687 18:16:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:29:14.687 18:16:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:14.687 18:16:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:29:14.687 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:29:14.687 18:16:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:14.687 18:16:03 -- common/autotest_common.sh@10 -- # set +x 00:29:14.687 [2024-04-15 18:16:03.478442] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
00:29:14.687 [2024-04-15 18:16:03.478525] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3431161 ] 00:29:14.687 EAL: No free 2048 kB hugepages reported on node 1 00:29:14.687 [2024-04-15 18:16:03.547300] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:14.945 [2024-04-15 18:16:03.641198] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:14.945 18:16:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:14.945 18:16:03 -- common/autotest_common.sh@850 -- # return 0 00:29:14.945 18:16:03 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:14.945 18:16:03 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:29:14.945 18:16:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:14.945 18:16:03 -- common/autotest_common.sh@10 -- # set +x 00:29:14.945 18:16:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:14.945 18:16:03 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:29:14.945 18:16:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:14.945 18:16:03 -- common/autotest_common.sh@10 -- # set +x 00:29:14.945 18:16:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:14.945 18:16:03 -- host/discovery.sh@72 -- # notify_id=0 00:29:14.945 18:16:03 -- host/discovery.sh@83 -- # get_subsystem_names 00:29:14.945 18:16:03 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:14.945 18:16:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:14.945 18:16:03 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:14.945 18:16:03 -- common/autotest_common.sh@10 -- # set +x 00:29:14.945 18:16:03 -- host/discovery.sh@59 -- # sort 00:29:14.945 18:16:03 -- host/discovery.sh@59 -- # xargs 00:29:14.945 18:16:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:15.204 18:16:03 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:29:15.204 18:16:03 -- host/discovery.sh@84 -- # get_bdev_list 00:29:15.204 18:16:03 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:15.204 18:16:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:15.204 18:16:03 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:15.204 18:16:03 -- common/autotest_common.sh@10 -- # set +x 00:29:15.204 18:16:03 -- host/discovery.sh@55 -- # sort 00:29:15.204 18:16:03 -- host/discovery.sh@55 -- # xargs 00:29:15.204 18:16:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:15.204 18:16:03 -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:29:15.204 18:16:03 -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:29:15.204 18:16:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:15.204 18:16:03 -- common/autotest_common.sh@10 -- # set +x 00:29:15.204 18:16:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:15.204 18:16:03 -- host/discovery.sh@87 -- # get_subsystem_names 00:29:15.204 18:16:03 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:15.204 18:16:03 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:15.204 18:16:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:15.204 18:16:03 -- host/discovery.sh@59 -- # sort 
00:29:15.204 18:16:03 -- common/autotest_common.sh@10 -- # set +x 00:29:15.204 18:16:03 -- host/discovery.sh@59 -- # xargs 00:29:15.204 18:16:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:15.204 18:16:04 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:29:15.204 18:16:04 -- host/discovery.sh@88 -- # get_bdev_list 00:29:15.204 18:16:04 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:15.204 18:16:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:15.204 18:16:04 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:15.204 18:16:04 -- common/autotest_common.sh@10 -- # set +x 00:29:15.204 18:16:04 -- host/discovery.sh@55 -- # sort 00:29:15.204 18:16:04 -- host/discovery.sh@55 -- # xargs 00:29:15.204 18:16:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:15.204 18:16:04 -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:29:15.204 18:16:04 -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:29:15.204 18:16:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:15.204 18:16:04 -- common/autotest_common.sh@10 -- # set +x 00:29:15.204 18:16:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:15.204 18:16:04 -- host/discovery.sh@91 -- # get_subsystem_names 00:29:15.204 18:16:04 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:15.204 18:16:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:15.204 18:16:04 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:15.204 18:16:04 -- host/discovery.sh@59 -- # sort 00:29:15.204 18:16:04 -- common/autotest_common.sh@10 -- # set +x 00:29:15.204 18:16:04 -- host/discovery.sh@59 -- # xargs 00:29:15.204 18:16:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:15.204 18:16:04 -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:29:15.204 18:16:04 -- host/discovery.sh@92 -- # get_bdev_list 00:29:15.204 18:16:04 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:15.204 18:16:04 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:15.204 18:16:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:15.204 18:16:04 -- host/discovery.sh@55 -- # sort 00:29:15.204 18:16:04 -- common/autotest_common.sh@10 -- # set +x 00:29:15.204 18:16:04 -- host/discovery.sh@55 -- # xargs 00:29:15.204 18:16:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:15.204 18:16:04 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:29:15.204 18:16:04 -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:15.204 18:16:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:15.204 18:16:04 -- common/autotest_common.sh@10 -- # set +x 00:29:15.204 [2024-04-15 18:16:04.152512] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:15.204 18:16:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:15.462 18:16:04 -- host/discovery.sh@97 -- # get_subsystem_names 00:29:15.462 18:16:04 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:15.462 18:16:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:15.462 18:16:04 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:15.462 18:16:04 -- common/autotest_common.sh@10 -- # set +x 00:29:15.462 18:16:04 -- host/discovery.sh@59 -- # sort 00:29:15.462 18:16:04 -- host/discovery.sh@59 -- # xargs 00:29:15.462 18:16:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:15.462 18:16:04 -- 
host/discovery.sh@97 -- # [[ '' == '' ]] 00:29:15.462 18:16:04 -- host/discovery.sh@98 -- # get_bdev_list 00:29:15.462 18:16:04 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:15.462 18:16:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:15.462 18:16:04 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:15.462 18:16:04 -- common/autotest_common.sh@10 -- # set +x 00:29:15.462 18:16:04 -- host/discovery.sh@55 -- # sort 00:29:15.462 18:16:04 -- host/discovery.sh@55 -- # xargs 00:29:15.462 18:16:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:15.462 18:16:04 -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:29:15.462 18:16:04 -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:29:15.462 18:16:04 -- host/discovery.sh@79 -- # expected_count=0 00:29:15.462 18:16:04 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:29:15.462 18:16:04 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:29:15.462 18:16:04 -- common/autotest_common.sh@901 -- # local max=10 00:29:15.462 18:16:04 -- common/autotest_common.sh@902 -- # (( max-- )) 00:29:15.462 18:16:04 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:29:15.462 18:16:04 -- common/autotest_common.sh@903 -- # get_notification_count 00:29:15.462 18:16:04 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:29:15.462 18:16:04 -- host/discovery.sh@74 -- # jq '. | length' 00:29:15.462 18:16:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:15.462 18:16:04 -- common/autotest_common.sh@10 -- # set +x 00:29:15.462 18:16:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:15.462 18:16:04 -- host/discovery.sh@74 -- # notification_count=0 00:29:15.462 18:16:04 -- host/discovery.sh@75 -- # notify_id=0 00:29:15.462 18:16:04 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:29:15.462 18:16:04 -- common/autotest_common.sh@904 -- # return 0 00:29:15.462 18:16:04 -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:29:15.462 18:16:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:15.462 18:16:04 -- common/autotest_common.sh@10 -- # set +x 00:29:15.462 18:16:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:15.462 18:16:04 -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:29:15.462 18:16:04 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:29:15.462 18:16:04 -- common/autotest_common.sh@901 -- # local max=10 00:29:15.462 18:16:04 -- common/autotest_common.sh@902 -- # (( max-- )) 00:29:15.462 18:16:04 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:29:15.462 18:16:04 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:29:15.462 18:16:04 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:15.462 18:16:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:15.462 18:16:04 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:15.462 18:16:04 -- common/autotest_common.sh@10 -- # set +x 00:29:15.462 18:16:04 -- host/discovery.sh@59 -- # sort 00:29:15.462 18:16:04 -- host/discovery.sh@59 -- # xargs 00:29:15.462 18:16:04 -- common/autotest_common.sh@577 -- # [[ 0 == 
0 ]] 00:29:15.462 18:16:04 -- common/autotest_common.sh@903 -- # [[ '' == \n\v\m\e\0 ]] 00:29:15.462 18:16:04 -- common/autotest_common.sh@906 -- # sleep 1 00:29:16.030 [2024-04-15 18:16:04.923982] bdev_nvme.c:6898:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:29:16.030 [2024-04-15 18:16:04.924015] bdev_nvme.c:6978:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:29:16.030 [2024-04-15 18:16:04.924042] bdev_nvme.c:6861:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:16.289 [2024-04-15 18:16:05.053462] bdev_nvme.c:6827:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:29:16.548 [2024-04-15 18:16:05.275162] bdev_nvme.c:6717:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:29:16.548 [2024-04-15 18:16:05.275192] bdev_nvme.c:6676:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:29:16.548 18:16:05 -- common/autotest_common.sh@902 -- # (( max-- )) 00:29:16.548 18:16:05 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:29:16.548 18:16:05 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:29:16.548 18:16:05 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:16.548 18:16:05 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:16.548 18:16:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:16.548 18:16:05 -- host/discovery.sh@59 -- # sort 00:29:16.548 18:16:05 -- common/autotest_common.sh@10 -- # set +x 00:29:16.548 18:16:05 -- host/discovery.sh@59 -- # xargs 00:29:16.548 18:16:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:16.548 18:16:05 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:16.548 18:16:05 -- common/autotest_common.sh@904 -- # return 0 00:29:16.548 18:16:05 -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:29:16.548 18:16:05 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:29:16.548 18:16:05 -- common/autotest_common.sh@901 -- # local max=10 00:29:16.548 18:16:05 -- common/autotest_common.sh@902 -- # (( max-- )) 00:29:16.548 18:16:05 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:29:16.548 18:16:05 -- common/autotest_common.sh@903 -- # get_bdev_list 00:29:16.548 18:16:05 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:16.548 18:16:05 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:16.548 18:16:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:16.548 18:16:05 -- common/autotest_common.sh@10 -- # set +x 00:29:16.548 18:16:05 -- host/discovery.sh@55 -- # sort 00:29:16.548 18:16:05 -- host/discovery.sh@55 -- # xargs 00:29:16.548 18:16:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:16.548 18:16:05 -- common/autotest_common.sh@903 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:29:16.548 18:16:05 -- common/autotest_common.sh@904 -- # return 0 00:29:16.548 18:16:05 -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:29:16.548 18:16:05 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:29:16.548 18:16:05 -- common/autotest_common.sh@901 -- # local max=10 00:29:16.548 18:16:05 -- 
common/autotest_common.sh@902 -- # (( max-- )) 00:29:16.548 18:16:05 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:29:16.548 18:16:05 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:29:16.548 18:16:05 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:29:16.548 18:16:05 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:29:16.548 18:16:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:16.548 18:16:05 -- common/autotest_common.sh@10 -- # set +x 00:29:16.548 18:16:05 -- host/discovery.sh@63 -- # sort -n 00:29:16.548 18:16:05 -- host/discovery.sh@63 -- # xargs 00:29:16.548 18:16:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:16.807 18:16:05 -- common/autotest_common.sh@903 -- # [[ 4420 == \4\4\2\0 ]] 00:29:16.807 18:16:05 -- common/autotest_common.sh@904 -- # return 0 00:29:16.807 18:16:05 -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:29:16.807 18:16:05 -- host/discovery.sh@79 -- # expected_count=1 00:29:16.807 18:16:05 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:29:16.807 18:16:05 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:29:16.807 18:16:05 -- common/autotest_common.sh@901 -- # local max=10 00:29:16.807 18:16:05 -- common/autotest_common.sh@902 -- # (( max-- )) 00:29:16.807 18:16:05 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:29:16.807 18:16:05 -- common/autotest_common.sh@903 -- # get_notification_count 00:29:16.807 18:16:05 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:29:16.807 18:16:05 -- host/discovery.sh@74 -- # jq '. 
| length' 00:29:16.807 18:16:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:16.807 18:16:05 -- common/autotest_common.sh@10 -- # set +x 00:29:16.807 18:16:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:16.807 18:16:05 -- host/discovery.sh@74 -- # notification_count=1 00:29:16.807 18:16:05 -- host/discovery.sh@75 -- # notify_id=1 00:29:16.807 18:16:05 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:29:16.807 18:16:05 -- common/autotest_common.sh@904 -- # return 0 00:29:16.807 18:16:05 -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:29:16.807 18:16:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:16.807 18:16:05 -- common/autotest_common.sh@10 -- # set +x 00:29:16.807 18:16:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:16.807 18:16:05 -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:29:16.807 18:16:05 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:29:16.807 18:16:05 -- common/autotest_common.sh@901 -- # local max=10 00:29:16.807 18:16:05 -- common/autotest_common.sh@902 -- # (( max-- )) 00:29:16.807 18:16:05 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:29:16.807 18:16:05 -- common/autotest_common.sh@903 -- # get_bdev_list 00:29:16.807 18:16:05 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:16.807 18:16:05 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:16.807 18:16:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:16.807 18:16:05 -- common/autotest_common.sh@10 -- # set +x 00:29:16.807 18:16:05 -- host/discovery.sh@55 -- # sort 00:29:16.807 18:16:05 -- host/discovery.sh@55 -- # xargs 00:29:16.807 18:16:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:16.807 18:16:05 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:29:16.807 18:16:05 -- common/autotest_common.sh@904 -- # return 0 00:29:16.807 18:16:05 -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:29:16.807 18:16:05 -- host/discovery.sh@79 -- # expected_count=1 00:29:16.807 18:16:05 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:29:16.807 18:16:05 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:29:16.807 18:16:05 -- common/autotest_common.sh@901 -- # local max=10 00:29:16.807 18:16:05 -- common/autotest_common.sh@902 -- # (( max-- )) 00:29:16.807 18:16:05 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:29:16.807 18:16:05 -- common/autotest_common.sh@903 -- # get_notification_count 00:29:16.807 18:16:05 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:29:16.807 18:16:05 -- host/discovery.sh@74 -- # jq '. 
| length' 00:29:16.807 18:16:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:16.807 18:16:05 -- common/autotest_common.sh@10 -- # set +x 00:29:16.807 18:16:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:16.807 18:16:05 -- host/discovery.sh@74 -- # notification_count=1 00:29:16.807 18:16:05 -- host/discovery.sh@75 -- # notify_id=2 00:29:16.807 18:16:05 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:29:16.807 18:16:05 -- common/autotest_common.sh@904 -- # return 0 00:29:16.807 18:16:05 -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:29:16.807 18:16:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:16.807 18:16:05 -- common/autotest_common.sh@10 -- # set +x 00:29:16.807 [2024-04-15 18:16:05.656918] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:16.807 [2024-04-15 18:16:05.657214] bdev_nvme.c:6880:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:29:16.807 [2024-04-15 18:16:05.657249] bdev_nvme.c:6861:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:16.807 18:16:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:16.807 18:16:05 -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:29:16.807 18:16:05 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:29:16.807 18:16:05 -- common/autotest_common.sh@901 -- # local max=10 00:29:16.807 18:16:05 -- common/autotest_common.sh@902 -- # (( max-- )) 00:29:16.807 18:16:05 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:29:16.807 18:16:05 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:29:16.807 18:16:05 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:16.807 18:16:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:16.807 18:16:05 -- common/autotest_common.sh@10 -- # set +x 00:29:16.807 18:16:05 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:16.807 18:16:05 -- host/discovery.sh@59 -- # sort 00:29:16.807 18:16:05 -- host/discovery.sh@59 -- # xargs 00:29:16.807 18:16:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:16.807 18:16:05 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:16.807 18:16:05 -- common/autotest_common.sh@904 -- # return 0 00:29:16.807 18:16:05 -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:29:16.807 18:16:05 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:29:16.807 18:16:05 -- common/autotest_common.sh@901 -- # local max=10 00:29:16.807 18:16:05 -- common/autotest_common.sh@902 -- # (( max-- )) 00:29:16.807 18:16:05 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:29:16.807 18:16:05 -- common/autotest_common.sh@903 -- # get_bdev_list 00:29:16.807 18:16:05 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:16.807 18:16:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:16.807 18:16:05 -- common/autotest_common.sh@10 -- # set +x 00:29:16.807 18:16:05 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:16.807 18:16:05 -- host/discovery.sh@55 -- # sort 00:29:16.807 18:16:05 -- host/discovery.sh@55 -- # xargs 00:29:16.807 18:16:05 -- common/autotest_common.sh@577 
-- # [[ 0 == 0 ]] 00:29:16.807 18:16:05 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:29:16.807 18:16:05 -- common/autotest_common.sh@904 -- # return 0 00:29:16.807 18:16:05 -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:29:16.807 18:16:05 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:29:16.807 18:16:05 -- common/autotest_common.sh@901 -- # local max=10 00:29:16.807 18:16:05 -- common/autotest_common.sh@902 -- # (( max-- )) 00:29:16.807 18:16:05 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:29:16.807 18:16:05 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:29:16.807 18:16:05 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:29:16.807 18:16:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:16.807 18:16:05 -- common/autotest_common.sh@10 -- # set +x 00:29:16.807 18:16:05 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:29:16.807 18:16:05 -- host/discovery.sh@63 -- # sort -n 00:29:16.807 18:16:05 -- host/discovery.sh@63 -- # xargs 00:29:16.807 18:16:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:17.066 [2024-04-15 18:16:05.783841] bdev_nvme.c:6822:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:29:17.066 18:16:05 -- common/autotest_common.sh@903 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:29:17.066 18:16:05 -- common/autotest_common.sh@906 -- # sleep 1 00:29:17.066 [2024-04-15 18:16:05.845400] bdev_nvme.c:6717:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:29:17.066 [2024-04-15 18:16:05.845427] bdev_nvme.c:6676:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:29:17.066 [2024-04-15 18:16:05.845438] bdev_nvme.c:6676:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:29:18.004 18:16:06 -- common/autotest_common.sh@902 -- # (( max-- )) 00:29:18.004 18:16:06 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:29:18.004 18:16:06 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:29:18.004 18:16:06 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:29:18.004 18:16:06 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:29:18.004 18:16:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:18.004 18:16:06 -- common/autotest_common.sh@10 -- # set +x 00:29:18.004 18:16:06 -- host/discovery.sh@63 -- # sort -n 00:29:18.004 18:16:06 -- host/discovery.sh@63 -- # xargs 00:29:18.004 18:16:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:18.004 18:16:06 -- common/autotest_common.sh@903 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:29:18.004 18:16:06 -- common/autotest_common.sh@904 -- # return 0 00:29:18.004 18:16:06 -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:29:18.004 18:16:06 -- host/discovery.sh@79 -- # expected_count=0 00:29:18.004 18:16:06 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:29:18.004 18:16:06 -- 
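The match being retried above (host/discovery.sh@122) asserts that discovery has turned the new 4421 listener into a second path on the existing nvme0 controller. Reconstructed from the get_subsystem_paths pipeline visible in the trace; rpc.py stands in for the suite's rpc_cmd wrapper, otherwise every command and argument appears verbatim in the log:

  # host/discovery.sh@118 added the listener; the discovery AER should turn
  # it into a second path without creating a new controller.
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421
  paths=$(rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
          | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs)
  [[ $paths == "4420 4421" ]]   # the \4\4\2\0\ \4\4\2\1 glob match in the trace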
common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:29:18.004 18:16:06 -- common/autotest_common.sh@901 -- # local max=10 00:29:18.004 18:16:06 -- common/autotest_common.sh@902 -- # (( max-- )) 00:29:18.004 18:16:06 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:29:18.004 18:16:06 -- common/autotest_common.sh@903 -- # get_notification_count 00:29:18.004 18:16:06 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:29:18.004 18:16:06 -- host/discovery.sh@74 -- # jq '. | length' 00:29:18.004 18:16:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:18.004 18:16:06 -- common/autotest_common.sh@10 -- # set +x 00:29:18.004 18:16:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:18.004 18:16:06 -- host/discovery.sh@74 -- # notification_count=0 00:29:18.004 18:16:06 -- host/discovery.sh@75 -- # notify_id=2 00:29:18.004 18:16:06 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:29:18.004 18:16:06 -- common/autotest_common.sh@904 -- # return 0 00:29:18.004 18:16:06 -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:18.004 18:16:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:18.004 18:16:06 -- common/autotest_common.sh@10 -- # set +x 00:29:18.004 [2024-04-15 18:16:06.916811] bdev_nvme.c:6880:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:29:18.004 [2024-04-15 18:16:06.916845] bdev_nvme.c:6861:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:18.004 18:16:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:18.004 18:16:06 -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:29:18.004 18:16:06 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:29:18.004 18:16:06 -- common/autotest_common.sh@901 -- # local max=10 00:29:18.004 18:16:06 -- common/autotest_common.sh@902 -- # (( max-- )) 00:29:18.004 18:16:06 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:29:18.004 18:16:06 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:29:18.004 18:16:06 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:18.004 18:16:06 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:18.004 18:16:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:18.004 18:16:06 -- common/autotest_common.sh@10 -- # set +x 00:29:18.004 18:16:06 -- host/discovery.sh@59 -- # sort 00:29:18.004 18:16:06 -- host/discovery.sh@59 -- # xargs 00:29:18.004 [2024-04-15 18:16:06.925932] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:18.004 [2024-04-15 18:16:06.925969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.004 [2024-04-15 18:16:06.925989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:18.004 [2024-04-15 18:16:06.926006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.004 [2024-04-15 18:16:06.926022] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:18.004 [2024-04-15 18:16:06.926038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.004 [2024-04-15 18:16:06.926055] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:18.004 [2024-04-15 18:16:06.926080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.004 [2024-04-15 18:16:06.926096] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x884260 is same with the state(5) to be set 00:29:18.004 [2024-04-15 18:16:06.935936] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x884260 (9): Bad file descriptor 00:29:18.004 18:16:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:18.004 [2024-04-15 18:16:06.945983] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:18.004 [2024-04-15 18:16:06.946235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.004 [2024-04-15 18:16:06.946444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.004 [2024-04-15 18:16:06.946479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x884260 with addr=10.0.0.2, port=4420 00:29:18.004 [2024-04-15 18:16:06.946500] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x884260 is same with the state(5) to be set 00:29:18.004 [2024-04-15 18:16:06.946525] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x884260 (9): Bad file descriptor 00:29:18.004 [2024-04-15 18:16:06.946564] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:18.004 [2024-04-15 18:16:06.946585] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:18.004 [2024-04-15 18:16:06.946603] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:18.004 [2024-04-15 18:16:06.946628] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
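The errno = 111 in the posix_sock_create errors above is ECONNREFUSED: host/discovery.sh@127 has just removed the 4420 listener, so bdev_nvme's controller-reset path keeps dialing a port with no listener behind it until the next discovery log page prunes the stale path. A standalone way to observe the same failure mode, assuming the test topology's 10.0.0.2 address is reachable from the initiator namespace:

  # Dial the torn-down listener port; bash's /dev/tcp connect is refused just
  # like nvme_tcp_qpair_connect_sock in the trace.
  if ! timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420'; then
      echo 'connect to 10.0.0.2:4420 refused, matching the errno = 111 lines'
  fi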
00:29:18.004 [2024-04-15 18:16:06.956074] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:18.004 [2024-04-15 18:16:06.956399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.004 [2024-04-15 18:16:06.956611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.004 [2024-04-15 18:16:06.956644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x884260 with addr=10.0.0.2, port=4420 00:29:18.004 [2024-04-15 18:16:06.956663] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x884260 is same with the state(5) to be set 00:29:18.004 [2024-04-15 18:16:06.956688] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x884260 (9): Bad file descriptor 00:29:18.004 [2024-04-15 18:16:06.956712] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:18.004 [2024-04-15 18:16:06.956728] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:18.004 [2024-04-15 18:16:06.956744] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:18.004 [2024-04-15 18:16:06.956765] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:18.265 [2024-04-15 18:16:06.966153] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:18.265 [2024-04-15 18:16:06.966457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.265 [2024-04-15 18:16:06.966671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.265 [2024-04-15 18:16:06.966701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x884260 with addr=10.0.0.2, port=4420 00:29:18.265 [2024-04-15 18:16:06.966720] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x884260 is same with the state(5) to be set 00:29:18.265 [2024-04-15 18:16:06.966745] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x884260 (9): Bad file descriptor 00:29:18.265 [2024-04-15 18:16:06.966784] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:18.265 [2024-04-15 18:16:06.966806] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:18.265 [2024-04-15 18:16:06.966822] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:18.265 [2024-04-15 18:16:06.966845] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
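Every 'local max=10' / '(( max-- ))' / 'sleep 1' sequence in this trace comes from a single polling helper in common/autotest_common.sh (the @900-@906 lines). A paraphrase reconstructed from those numbered trace lines; the return value after the budget is exhausted is an assumption:

  waitforcondition() {
      local cond=$1                  # @900: the condition string, eval'ed each pass
      local max=10                   # @901: retry budget
      while (( max-- )); do          # @902
          eval "$cond" && return 0   # @903/@904: succeed as soon as it holds
          sleep 1                    # @906: back off before the next poll
      done
      return 1                       # assumed: give up after ten failed passes
  }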
00:29:18.265 18:16:06 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:18.265 18:16:06 -- common/autotest_common.sh@904 -- # return 0 00:29:18.265 18:16:06 -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:29:18.265 18:16:06 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:29:18.265 18:16:06 -- common/autotest_common.sh@901 -- # local max=10 00:29:18.265 18:16:06 -- common/autotest_common.sh@902 -- # (( max-- )) 00:29:18.265 18:16:06 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:29:18.265 18:16:06 -- common/autotest_common.sh@903 -- # get_bdev_list 00:29:18.265 18:16:06 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:18.265 18:16:06 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:18.265 18:16:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:18.265 18:16:06 -- common/autotest_common.sh@10 -- # set +x 00:29:18.265 18:16:06 -- host/discovery.sh@55 -- # sort 00:29:18.265 18:16:06 -- host/discovery.sh@55 -- # xargs 00:29:18.265 [2024-04-15 18:16:06.976235] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:18.265 [2024-04-15 18:16:06.976439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.265 [2024-04-15 18:16:06.976634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.265 [2024-04-15 18:16:06.976664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x884260 with addr=10.0.0.2, port=4420 00:29:18.265 [2024-04-15 18:16:06.976684] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x884260 is same with the state(5) to be set 00:29:18.265 [2024-04-15 18:16:06.976710] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x884260 (9): Bad file descriptor 00:29:18.265 [2024-04-15 18:16:06.976748] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:18.265 [2024-04-15 18:16:06.976769] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:18.265 [2024-04-15 18:16:06.976785] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:18.265 [2024-04-15 18:16:06.976808] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
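The @130/@131 waits interleaved here pin down the intended end state of the listener removal: the namespaces survive (the bdev list stays nvme0n1 nvme0n2) while the 4420 path disappears. In terms of the helper paraphrased above and the suite's own get_bdev_list/get_subsystem_paths functions, with rpc.py again standing in for rpc_cmd:

  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # Data path stays intact; only the path set shrinks once discovery reconverges.
  waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
  waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "4421" ]]'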
00:29:18.265 [2024-04-15 18:16:06.986314] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:18.265 [2024-04-15 18:16:06.986566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.265 [2024-04-15 18:16:06.986766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.266 [2024-04-15 18:16:06.986795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x884260 with addr=10.0.0.2, port=4420 00:29:18.266 [2024-04-15 18:16:06.986814] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x884260 is same with the state(5) to be set 00:29:18.266 [2024-04-15 18:16:06.986839] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x884260 (9): Bad file descriptor 00:29:18.266 [2024-04-15 18:16:06.986891] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:18.266 [2024-04-15 18:16:06.986914] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:18.266 [2024-04-15 18:16:06.986931] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:18.266 [2024-04-15 18:16:06.986953] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:18.266 [2024-04-15 18:16:06.996389] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:18.266 [2024-04-15 18:16:06.996587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.266 [2024-04-15 18:16:06.996772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.266 [2024-04-15 18:16:06.996801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x884260 with addr=10.0.0.2, port=4420 00:29:18.266 [2024-04-15 18:16:06.996820] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x884260 is same with the state(5) to be set 00:29:18.266 [2024-04-15 18:16:06.996845] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x884260 (9): Bad file descriptor 00:29:18.266 [2024-04-15 18:16:06.996869] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:18.266 [2024-04-15 18:16:06.996891] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:18.266 [2024-04-15 18:16:06.996907] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:18.266 [2024-04-15 18:16:06.996930] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
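Alongside the path checks, the test audits bdev add/remove notifications, as in the is_notification_count_eq check just below. The -i argument to notify_get_notifications is the last-consumed event id, and the @74/@75 lines advance it after each read, which matches notify_id stepping 0 -> 1 -> 2 -> 4 across this run (the sketch assumes an increment by the returned count; setting it to the last event's id would be equally consistent with the trace):

  # Fetch events newer than the cursor, compare against the expectation,
  # then advance the cursor past what was consumed.
  expected_count=0   # the @79 value at this step: removing a listener adds/removes no bdevs
  notification_count=$(rpc.py -s /tmp/host.sock notify_get_notifications -i "$notify_id" \
                       | jq '. | length')
  (( notification_count == expected_count ))
  notify_id=$((notify_id + notification_count))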
00:29:18.266 18:16:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:18.266 [2024-04-15 18:16:07.004985] bdev_nvme.c:6685:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:29:18.266 [2024-04-15 18:16:07.005019] bdev_nvme.c:6676:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:29:18.266 18:16:07 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:29:18.266 18:16:07 -- common/autotest_common.sh@904 -- # return 0 00:29:18.266 18:16:07 -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:29:18.266 18:16:07 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:29:18.266 18:16:07 -- common/autotest_common.sh@901 -- # local max=10 00:29:18.266 18:16:07 -- common/autotest_common.sh@902 -- # (( max-- )) 00:29:18.266 18:16:07 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:29:18.266 18:16:07 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:29:18.266 18:16:07 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:29:18.266 18:16:07 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:29:18.266 18:16:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:18.266 18:16:07 -- common/autotest_common.sh@10 -- # set +x 00:29:18.266 18:16:07 -- host/discovery.sh@63 -- # sort -n 00:29:18.266 18:16:07 -- host/discovery.sh@63 -- # xargs 00:29:18.266 18:16:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:18.266 18:16:07 -- common/autotest_common.sh@903 -- # [[ 4421 == \4\4\2\1 ]] 00:29:18.266 18:16:07 -- common/autotest_common.sh@904 -- # return 0 00:29:18.266 18:16:07 -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:29:18.266 18:16:07 -- host/discovery.sh@79 -- # expected_count=0 00:29:18.266 18:16:07 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:29:18.266 18:16:07 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:29:18.266 18:16:07 -- common/autotest_common.sh@901 -- # local max=10 00:29:18.266 18:16:07 -- common/autotest_common.sh@902 -- # (( max-- )) 00:29:18.266 18:16:07 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:29:18.266 18:16:07 -- common/autotest_common.sh@903 -- # get_notification_count 00:29:18.266 18:16:07 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:29:18.266 18:16:07 -- host/discovery.sh@74 -- # jq '. 
| length' 00:29:18.266 18:16:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:18.266 18:16:07 -- common/autotest_common.sh@10 -- # set +x 00:29:18.266 18:16:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:18.266 18:16:07 -- host/discovery.sh@74 -- # notification_count=0 00:29:18.266 18:16:07 -- host/discovery.sh@75 -- # notify_id=2 00:29:18.266 18:16:07 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:29:18.266 18:16:07 -- common/autotest_common.sh@904 -- # return 0 00:29:18.266 18:16:07 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:29:18.266 18:16:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:18.266 18:16:07 -- common/autotest_common.sh@10 -- # set +x 00:29:18.266 18:16:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:18.266 18:16:07 -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:29:18.266 18:16:07 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:29:18.266 18:16:07 -- common/autotest_common.sh@901 -- # local max=10 00:29:18.266 18:16:07 -- common/autotest_common.sh@902 -- # (( max-- )) 00:29:18.266 18:16:07 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:29:18.266 18:16:07 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:29:18.266 18:16:07 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:18.266 18:16:07 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:18.266 18:16:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:18.266 18:16:07 -- common/autotest_common.sh@10 -- # set +x 00:29:18.266 18:16:07 -- host/discovery.sh@59 -- # sort 00:29:18.266 18:16:07 -- host/discovery.sh@59 -- # xargs 00:29:18.266 18:16:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:18.266 18:16:07 -- common/autotest_common.sh@903 -- # [[ '' == '' ]] 00:29:18.266 18:16:07 -- common/autotest_common.sh@904 -- # return 0 00:29:18.266 18:16:07 -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:29:18.266 18:16:07 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:29:18.266 18:16:07 -- common/autotest_common.sh@901 -- # local max=10 00:29:18.266 18:16:07 -- common/autotest_common.sh@902 -- # (( max-- )) 00:29:18.266 18:16:07 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:29:18.266 18:16:07 -- common/autotest_common.sh@903 -- # get_bdev_list 00:29:18.266 18:16:07 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:18.266 18:16:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:18.266 18:16:07 -- common/autotest_common.sh@10 -- # set +x 00:29:18.266 18:16:07 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:18.266 18:16:07 -- host/discovery.sh@55 -- # sort 00:29:18.266 18:16:07 -- host/discovery.sh@55 -- # xargs 00:29:18.266 18:16:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:18.266 18:16:07 -- common/autotest_common.sh@903 -- # [[ '' == '' ]] 00:29:18.266 18:16:07 -- common/autotest_common.sh@904 -- # return 0 00:29:18.266 18:16:07 -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:29:18.266 18:16:07 -- host/discovery.sh@79 -- # expected_count=2 00:29:18.266 18:16:07 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:29:18.266 18:16:07 -- common/autotest_common.sh@900 -- # 
local 'cond=get_notification_count && ((notification_count == expected_count))' 00:29:18.266 18:16:07 -- common/autotest_common.sh@901 -- # local max=10 00:29:18.266 18:16:07 -- common/autotest_common.sh@902 -- # (( max-- )) 00:29:18.266 18:16:07 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:29:18.266 18:16:07 -- common/autotest_common.sh@903 -- # get_notification_count 00:29:18.266 18:16:07 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:29:18.266 18:16:07 -- host/discovery.sh@74 -- # jq '. | length' 00:29:18.266 18:16:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:18.266 18:16:07 -- common/autotest_common.sh@10 -- # set +x 00:29:18.524 18:16:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:18.524 18:16:07 -- host/discovery.sh@74 -- # notification_count=2 00:29:18.524 18:16:07 -- host/discovery.sh@75 -- # notify_id=4 00:29:18.524 18:16:07 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:29:18.524 18:16:07 -- common/autotest_common.sh@904 -- # return 0 00:29:18.524 18:16:07 -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:18.525 18:16:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:18.525 18:16:07 -- common/autotest_common.sh@10 -- # set +x 00:29:19.458 [2024-04-15 18:16:08.281085] bdev_nvme.c:6898:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:29:19.458 [2024-04-15 18:16:08.281110] bdev_nvme.c:6978:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:29:19.458 [2024-04-15 18:16:08.281133] bdev_nvme.c:6861:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:19.458 [2024-04-15 18:16:08.367392] bdev_nvme.c:6827:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:29:19.716 [2024-04-15 18:16:08.435444] bdev_nvme.c:6717:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:29:19.716 [2024-04-15 18:16:08.435483] bdev_nvme.c:6676:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:29:19.716 18:16:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:19.716 18:16:08 -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:19.716 18:16:08 -- common/autotest_common.sh@638 -- # local es=0 00:29:19.717 18:16:08 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:19.717 18:16:08 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:29:19.717 18:16:08 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:19.717 18:16:08 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:29:19.717 18:16:08 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:19.717 18:16:08 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:19.717 18:16:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:19.717 18:16:08 -- 
common/autotest_common.sh@10 -- # set +x 00:29:19.717 request: 00:29:19.717 { 00:29:19.717 "name": "nvme", 00:29:19.717 "trtype": "tcp", 00:29:19.717 "traddr": "10.0.0.2", 00:29:19.717 "hostnqn": "nqn.2021-12.io.spdk:test", 00:29:19.717 "adrfam": "ipv4", 00:29:19.717 "trsvcid": "8009", 00:29:19.717 "wait_for_attach": true, 00:29:19.717 "method": "bdev_nvme_start_discovery", 00:29:19.717 "req_id": 1 00:29:19.717 } 00:29:19.717 Got JSON-RPC error response 00:29:19.717 response: 00:29:19.717 { 00:29:19.717 "code": -17, 00:29:19.717 "message": "File exists" 00:29:19.717 } 00:29:19.717 18:16:08 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:29:19.717 18:16:08 -- common/autotest_common.sh@641 -- # es=1 00:29:19.717 18:16:08 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:29:19.717 18:16:08 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:29:19.717 18:16:08 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:29:19.717 18:16:08 -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:29:19.717 18:16:08 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:29:19.717 18:16:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:19.717 18:16:08 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:29:19.717 18:16:08 -- common/autotest_common.sh@10 -- # set +x 00:29:19.717 18:16:08 -- host/discovery.sh@67 -- # sort 00:29:19.717 18:16:08 -- host/discovery.sh@67 -- # xargs 00:29:19.717 18:16:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:19.717 18:16:08 -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:29:19.717 18:16:08 -- host/discovery.sh@146 -- # get_bdev_list 00:29:19.717 18:16:08 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:19.717 18:16:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:19.717 18:16:08 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:19.717 18:16:08 -- common/autotest_common.sh@10 -- # set +x 00:29:19.717 18:16:08 -- host/discovery.sh@55 -- # sort 00:29:19.717 18:16:08 -- host/discovery.sh@55 -- # xargs 00:29:19.717 18:16:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:19.717 18:16:08 -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:29:19.717 18:16:08 -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:19.717 18:16:08 -- common/autotest_common.sh@638 -- # local es=0 00:29:19.717 18:16:08 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:19.717 18:16:08 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:29:19.717 18:16:08 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:19.717 18:16:08 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:29:19.717 18:16:08 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:19.717 18:16:08 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:19.717 18:16:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:19.717 18:16:08 -- common/autotest_common.sh@10 -- # set +x 00:29:19.717 request: 00:29:19.717 { 00:29:19.717 "name": "nvme_second", 00:29:19.717 "trtype": "tcp", 00:29:19.717 "traddr": "10.0.0.2", 00:29:19.717 "hostnqn": 
"nqn.2021-12.io.spdk:test", 00:29:19.717 "adrfam": "ipv4", 00:29:19.717 "trsvcid": "8009", 00:29:19.717 "wait_for_attach": true, 00:29:19.717 "method": "bdev_nvme_start_discovery", 00:29:19.717 "req_id": 1 00:29:19.717 } 00:29:19.717 Got JSON-RPC error response 00:29:19.717 response: 00:29:19.717 { 00:29:19.717 "code": -17, 00:29:19.717 "message": "File exists" 00:29:19.717 } 00:29:19.717 18:16:08 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:29:19.717 18:16:08 -- common/autotest_common.sh@641 -- # es=1 00:29:19.717 18:16:08 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:29:19.717 18:16:08 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:29:19.717 18:16:08 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:29:19.717 18:16:08 -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:29:19.717 18:16:08 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:29:19.717 18:16:08 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:29:19.717 18:16:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:19.717 18:16:08 -- common/autotest_common.sh@10 -- # set +x 00:29:19.717 18:16:08 -- host/discovery.sh@67 -- # sort 00:29:19.717 18:16:08 -- host/discovery.sh@67 -- # xargs 00:29:19.717 18:16:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:19.717 18:16:08 -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:29:19.717 18:16:08 -- host/discovery.sh@152 -- # get_bdev_list 00:29:19.717 18:16:08 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:19.717 18:16:08 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:19.717 18:16:08 -- host/discovery.sh@55 -- # sort 00:29:19.717 18:16:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:19.717 18:16:08 -- common/autotest_common.sh@10 -- # set +x 00:29:19.717 18:16:08 -- host/discovery.sh@55 -- # xargs 00:29:19.717 18:16:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:19.717 18:16:08 -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:29:19.717 18:16:08 -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:29:19.717 18:16:08 -- common/autotest_common.sh@638 -- # local es=0 00:29:19.717 18:16:08 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:29:19.717 18:16:08 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:29:19.717 18:16:08 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:19.717 18:16:08 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:29:19.717 18:16:08 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:19.717 18:16:08 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:29:19.717 18:16:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:19.717 18:16:08 -- common/autotest_common.sh@10 -- # set +x 00:29:21.095 [2024-04-15 18:16:09.652150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.095 [2024-04-15 18:16:09.652351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.095 [2024-04-15 18:16:09.652382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of 
tqpair=0x89d130 with addr=10.0.0.2, port=8010 00:29:21.095 [2024-04-15 18:16:09.652406] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:29:21.095 [2024-04-15 18:16:09.652424] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:21.095 [2024-04-15 18:16:09.652439] bdev_nvme.c:6960:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:29:22.030 [2024-04-15 18:16:10.654594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.030 [2024-04-15 18:16:10.654834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.030 [2024-04-15 18:16:10.654877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8b5b40 with addr=10.0.0.2, port=8010 00:29:22.030 [2024-04-15 18:16:10.654904] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:29:22.030 [2024-04-15 18:16:10.654921] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:22.030 [2024-04-15 18:16:10.654936] bdev_nvme.c:6960:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:29:22.966 [2024-04-15 18:16:11.656756] bdev_nvme.c:6941:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:29:22.966 request: 00:29:22.966 { 00:29:22.966 "name": "nvme_second", 00:29:22.966 "trtype": "tcp", 00:29:22.966 "traddr": "10.0.0.2", 00:29:22.966 "hostnqn": "nqn.2021-12.io.spdk:test", 00:29:22.966 "adrfam": "ipv4", 00:29:22.966 "trsvcid": "8010", 00:29:22.966 "attach_timeout_ms": 3000, 00:29:22.966 "method": "bdev_nvme_start_discovery", 00:29:22.966 "req_id": 1 00:29:22.966 } 00:29:22.966 Got JSON-RPC error response 00:29:22.966 response: 00:29:22.967 { 00:29:22.967 "code": -110, 00:29:22.967 "message": "Connection timed out" 00:29:22.967 } 00:29:22.967 18:16:11 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:29:22.967 18:16:11 -- common/autotest_common.sh@641 -- # es=1 00:29:22.967 18:16:11 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:29:22.967 18:16:11 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:29:22.967 18:16:11 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:29:22.967 18:16:11 -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:29:22.967 18:16:11 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:29:22.967 18:16:11 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:29:22.967 18:16:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:22.967 18:16:11 -- common/autotest_common.sh@10 -- # set +x 00:29:22.967 18:16:11 -- host/discovery.sh@67 -- # sort 00:29:22.967 18:16:11 -- host/discovery.sh@67 -- # xargs 00:29:22.967 18:16:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:22.967 18:16:11 -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:29:22.967 18:16:11 -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:29:22.967 18:16:11 -- host/discovery.sh@161 -- # kill 3431161 00:29:22.967 18:16:11 -- host/discovery.sh@162 -- # nvmftestfini 00:29:22.967 18:16:11 -- nvmf/common.sh@477 -- # nvmfcleanup 00:29:22.967 18:16:11 -- nvmf/common.sh@117 -- # sync 00:29:22.967 18:16:11 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:22.967 18:16:11 -- nvmf/common.sh@120 -- # set +e 00:29:22.967 18:16:11 -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:22.967 18:16:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:22.967 rmmod nvme_tcp 00:29:22.967 rmmod nvme_fabrics 
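[editor's note] The two NOT-wrapped calls above pin down the error contract of bdev_nvme_start_discovery: re-registering a discovery service that already exists at that address returns JSON-RPC error -17 "File exists", while pointing at a port nothing listens on with an attach timeout returns -110 "Connection timed out". A minimal standalone sketch of the same two checks, assuming SPDK's scripts/rpc.py is on PATH and the host app is still serving /tmp/host.sock (flags taken verbatim from the trace):

# duplicate registration: 10.0.0.2:8009 is already attached as "nvme" -> -17 File exists
scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second \
    -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w \
    || echo "duplicate registration rejected as expected"
# nothing listens on 8010: with a 3000 ms attach timeout (-T) the call fails with -110
scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second \
    -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 \
    || echo "attach timed out as expected"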
00:29:22.967 rmmod nvme_keyring 00:29:22.967 18:16:11 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:22.967 18:16:11 -- nvmf/common.sh@124 -- # set -e 00:29:22.967 18:16:11 -- nvmf/common.sh@125 -- # return 0 00:29:22.967 18:16:11 -- nvmf/common.sh@478 -- # '[' -n 3431135 ']' 00:29:22.967 18:16:11 -- nvmf/common.sh@479 -- # killprocess 3431135 00:29:22.967 18:16:11 -- common/autotest_common.sh@936 -- # '[' -z 3431135 ']' 00:29:22.967 18:16:11 -- common/autotest_common.sh@940 -- # kill -0 3431135 00:29:22.967 18:16:11 -- common/autotest_common.sh@941 -- # uname 00:29:22.967 18:16:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:22.967 18:16:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3431135 00:29:22.967 18:16:11 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:29:22.967 18:16:11 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:29:22.967 18:16:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3431135' 00:29:22.967 killing process with pid 3431135 00:29:22.967 18:16:11 -- common/autotest_common.sh@955 -- # kill 3431135 00:29:22.967 18:16:11 -- common/autotest_common.sh@960 -- # wait 3431135 00:29:23.227 18:16:12 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:29:23.227 18:16:12 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:29:23.227 18:16:12 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:29:23.227 18:16:12 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:23.227 18:16:12 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:23.227 18:16:12 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:23.227 18:16:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:23.227 18:16:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:25.764 18:16:14 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:25.764 00:29:25.764 real 0m13.564s 00:29:25.764 user 0m19.662s 00:29:25.764 sys 0m3.110s 00:29:25.764 18:16:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:25.764 18:16:14 -- common/autotest_common.sh@10 -- # set +x 00:29:25.764 ************************************ 00:29:25.764 END TEST nvmf_discovery 00:29:25.764 ************************************ 00:29:25.764 18:16:14 -- nvmf/nvmf.sh@100 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:29:25.764 18:16:14 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:29:25.764 18:16:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:25.764 18:16:14 -- common/autotest_common.sh@10 -- # set +x 00:29:25.764 ************************************ 00:29:25.764 START TEST nvmf_discovery_remove_ifc 00:29:25.764 ************************************ 00:29:25.764 18:16:14 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:29:25.764 * Looking for test storage... 
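[editor's note] The killprocess calls in the teardown above follow a fixed pattern: confirm the pid still names a live reactor process, signal it, then wait so the exit status is reaped before the next test starts. A simplified sketch of that helper, assuming the target was started as a child of the current shell (the harness's real version also special-cases sudo-owned processes and non-Linux hosts):

killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 1    # still alive?
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true           # reap it; a killed child exits non-zero
}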
00:29:25.764 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:25.764 18:16:14 -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:25.764 18:16:14 -- nvmf/common.sh@7 -- # uname -s 00:29:25.764 18:16:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:25.764 18:16:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:25.764 18:16:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:25.764 18:16:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:25.764 18:16:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:25.764 18:16:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:25.764 18:16:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:25.764 18:16:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:25.764 18:16:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:25.764 18:16:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:25.764 18:16:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:29:25.764 18:16:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:29:25.764 18:16:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:25.764 18:16:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:25.764 18:16:14 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:25.764 18:16:14 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:25.764 18:16:14 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:25.764 18:16:14 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:25.764 18:16:14 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:25.764 18:16:14 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:25.765 18:16:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:25.765 18:16:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:25.765 18:16:14 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:25.765 18:16:14 -- paths/export.sh@5 -- # export PATH 00:29:25.765 18:16:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:25.765 18:16:14 -- nvmf/common.sh@47 -- # : 0 00:29:25.765 18:16:14 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:25.765 18:16:14 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:25.765 18:16:14 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:25.765 18:16:14 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:25.765 18:16:14 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:25.765 18:16:14 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:25.765 18:16:14 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:25.765 18:16:14 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:25.765 18:16:14 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:29:25.765 18:16:14 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:29:25.765 18:16:14 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:29:25.765 18:16:14 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:29:25.765 18:16:14 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:29:25.765 18:16:14 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:29:25.765 18:16:14 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:29:25.765 18:16:14 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:29:25.765 18:16:14 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:25.765 18:16:14 -- nvmf/common.sh@437 -- # prepare_net_devs 00:29:25.765 18:16:14 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:29:25.765 18:16:14 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:29:25.765 18:16:14 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:25.765 18:16:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:25.765 18:16:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:25.765 18:16:14 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:29:25.765 18:16:14 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:29:25.765 18:16:14 -- nvmf/common.sh@285 -- # xtrace_disable 00:29:25.765 18:16:14 -- common/autotest_common.sh@10 -- # set +x 00:29:27.674 18:16:16 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:29:27.674 18:16:16 -- nvmf/common.sh@291 -- # pci_devs=() 00:29:27.674 18:16:16 -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:27.674 18:16:16 
-- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:27.674 18:16:16 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:27.674 18:16:16 -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:27.674 18:16:16 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:27.674 18:16:16 -- nvmf/common.sh@295 -- # net_devs=() 00:29:27.674 18:16:16 -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:27.674 18:16:16 -- nvmf/common.sh@296 -- # e810=() 00:29:27.674 18:16:16 -- nvmf/common.sh@296 -- # local -ga e810 00:29:27.674 18:16:16 -- nvmf/common.sh@297 -- # x722=() 00:29:27.674 18:16:16 -- nvmf/common.sh@297 -- # local -ga x722 00:29:27.674 18:16:16 -- nvmf/common.sh@298 -- # mlx=() 00:29:27.674 18:16:16 -- nvmf/common.sh@298 -- # local -ga mlx 00:29:27.674 18:16:16 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:27.674 18:16:16 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:27.674 18:16:16 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:27.674 18:16:16 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:27.674 18:16:16 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:27.674 18:16:16 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:27.674 18:16:16 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:27.674 18:16:16 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:27.674 18:16:16 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:27.674 18:16:16 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:27.674 18:16:16 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:27.674 18:16:16 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:27.674 18:16:16 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:27.674 18:16:16 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:27.674 18:16:16 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:27.674 18:16:16 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:27.674 18:16:16 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:27.674 18:16:16 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:27.674 18:16:16 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:29:27.674 Found 0000:84:00.0 (0x8086 - 0x159b) 00:29:27.674 18:16:16 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:27.674 18:16:16 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:27.674 18:16:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:27.674 18:16:16 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:27.674 18:16:16 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:27.674 18:16:16 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:27.674 18:16:16 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:29:27.674 Found 0000:84:00.1 (0x8086 - 0x159b) 00:29:27.674 18:16:16 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:27.674 18:16:16 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:27.674 18:16:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:27.674 18:16:16 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:27.674 18:16:16 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:27.674 18:16:16 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:27.674 18:16:16 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:27.674 18:16:16 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:27.674 18:16:16 -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:27.674 18:16:16 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:27.674 18:16:16 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:29:27.674 18:16:16 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:27.674 18:16:16 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:29:27.674 Found net devices under 0000:84:00.0: cvl_0_0 00:29:27.674 18:16:16 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:29:27.674 18:16:16 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:27.674 18:16:16 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:27.674 18:16:16 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:29:27.674 18:16:16 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:27.674 18:16:16 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:29:27.674 Found net devices under 0000:84:00.1: cvl_0_1 00:29:27.674 18:16:16 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:29:27.674 18:16:16 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:29:27.674 18:16:16 -- nvmf/common.sh@403 -- # is_hw=yes 00:29:27.674 18:16:16 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:29:27.674 18:16:16 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:29:27.674 18:16:16 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:29:27.674 18:16:16 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:27.674 18:16:16 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:27.674 18:16:16 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:27.674 18:16:16 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:27.674 18:16:16 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:27.674 18:16:16 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:27.674 18:16:16 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:27.674 18:16:16 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:27.674 18:16:16 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:27.674 18:16:16 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:27.674 18:16:16 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:27.674 18:16:16 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:27.674 18:16:16 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:27.674 18:16:16 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:27.674 18:16:16 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:27.674 18:16:16 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:27.674 18:16:16 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:27.935 18:16:16 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:27.935 18:16:16 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:27.935 18:16:16 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:27.935 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
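[editor's note] The nvmf_tcp_init sequence interleaved with the xtrace above is easier to read as one block: the first enumerated cvl device is moved into a private network namespace and becomes the target side (10.0.0.2), while its sibling stays in the root namespace as the initiator side (10.0.0.1). Condensed from the trace; the cvl_0_* names are simply whatever NICs this rig enumerated:

ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target NIC now lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT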
00:29:27.935 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.129 ms 00:29:27.935 00:29:27.935 --- 10.0.0.2 ping statistics --- 00:29:27.935 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:27.935 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:29:27.935 18:16:16 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:27.935 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:27.935 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:29:27.935 00:29:27.935 --- 10.0.0.1 ping statistics --- 00:29:27.935 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:27.935 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:29:27.935 18:16:16 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:27.935 18:16:16 -- nvmf/common.sh@411 -- # return 0 00:29:27.935 18:16:16 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:29:27.935 18:16:16 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:27.935 18:16:16 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:29:27.935 18:16:16 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:29:27.935 18:16:16 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:27.935 18:16:16 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:29:27.935 18:16:16 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:29:27.935 18:16:16 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:29:27.935 18:16:16 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:29:27.935 18:16:16 -- common/autotest_common.sh@710 -- # xtrace_disable 00:29:27.935 18:16:16 -- common/autotest_common.sh@10 -- # set +x 00:29:27.935 18:16:16 -- nvmf/common.sh@470 -- # nvmfpid=3434285 00:29:27.935 18:16:16 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:29:27.935 18:16:16 -- nvmf/common.sh@471 -- # waitforlisten 3434285 00:29:27.935 18:16:16 -- common/autotest_common.sh@817 -- # '[' -z 3434285 ']' 00:29:27.935 18:16:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:27.935 18:16:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:27.935 18:16:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:27.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:27.935 18:16:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:27.935 18:16:16 -- common/autotest_common.sh@10 -- # set +x 00:29:27.935 [2024-04-15 18:16:16.755855] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:29:27.935 [2024-04-15 18:16:16.755956] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:27.935 EAL: No free 2048 kB hugepages reported on node 1 00:29:27.935 [2024-04-15 18:16:16.838177] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:28.195 [2024-04-15 18:16:16.936204] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:28.195 [2024-04-15 18:16:16.936263] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
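[editor's note] With both pings answering, the target is started inside the namespace and the harness blocks until its RPC socket is live. The launch flags are verbatim from the trace (the full Jenkins path is shortened here); the readiness loop is one plausible way to implement waitforlisten, whose real version also checks the pid and enforces a retry budget:

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!
# poll the default RPC socket until the app answers
until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done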
00:29:28.195 [2024-04-15 18:16:16.936281] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:28.195 [2024-04-15 18:16:16.936296] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:28.195 [2024-04-15 18:16:16.936308] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:28.195 [2024-04-15 18:16:16.936339] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:28.454 18:16:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:28.454 18:16:17 -- common/autotest_common.sh@850 -- # return 0 00:29:28.454 18:16:17 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:29:28.454 18:16:17 -- common/autotest_common.sh@716 -- # xtrace_disable 00:29:28.454 18:16:17 -- common/autotest_common.sh@10 -- # set +x 00:29:28.454 18:16:17 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:28.454 18:16:17 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:29:28.454 18:16:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:28.454 18:16:17 -- common/autotest_common.sh@10 -- # set +x 00:29:28.454 [2024-04-15 18:16:17.266224] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:28.454 [2024-04-15 18:16:17.274420] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:29:28.454 null0 00:29:28.454 [2024-04-15 18:16:17.306354] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:28.454 18:16:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:28.454 18:16:17 -- host/discovery_remove_ifc.sh@59 -- # hostpid=3434358 00:29:28.454 18:16:17 -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:29:28.454 18:16:17 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3434358 /tmp/host.sock 00:29:28.454 18:16:17 -- common/autotest_common.sh@817 -- # '[' -z 3434358 ']' 00:29:28.454 18:16:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:29:28.454 18:16:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:28.454 18:16:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:29:28.454 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:29:28.454 18:16:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:28.454 18:16:17 -- common/autotest_common.sh@10 -- # set +x 00:29:28.454 [2024-04-15 18:16:17.372642] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
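[editor's note] The discovery_remove_ifc prologue then wires the target and launches a second, host-side SPDK app. The trace collapses the target-side RPCs into a single rpc_cmd, so the sequence below is an assumed reconstruction from the notices it printed (TCP transport with the '-t tcp -o' options seen above, a null0 namespace, listeners on 8009 and 4420); only the host app command line is verbatim:

# target side (assumed shape; the trace shows only the resulting listen notices)
scripts/rpc.py nvmf_create_transport -t tcp -o
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDKISFASTANDAWESOME
scripts/rpc.py bdev_null_create null0 100 4096
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# host side (verbatim flags): own core, own RPC socket, init deferred until bdev_nvme is configured
./build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &
hostpid=$!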
00:29:28.454 [2024-04-15 18:16:17.372721] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3434358 ] 00:29:28.454 EAL: No free 2048 kB hugepages reported on node 1 00:29:28.711 [2024-04-15 18:16:17.440696] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:28.711 [2024-04-15 18:16:17.531594] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:28.711 18:16:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:28.711 18:16:17 -- common/autotest_common.sh@850 -- # return 0 00:29:28.711 18:16:17 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:28.711 18:16:17 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:29:28.711 18:16:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:28.711 18:16:17 -- common/autotest_common.sh@10 -- # set +x 00:29:28.711 18:16:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:28.712 18:16:17 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:29:28.712 18:16:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:28.712 18:16:17 -- common/autotest_common.sh@10 -- # set +x 00:29:28.974 18:16:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:28.974 18:16:17 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:29:28.974 18:16:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:28.974 18:16:17 -- common/autotest_common.sh@10 -- # set +x 00:29:29.957 [2024-04-15 18:16:18.754087] bdev_nvme.c:6898:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:29:29.957 [2024-04-15 18:16:18.754117] bdev_nvme.c:6978:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:29:29.957 [2024-04-15 18:16:18.754144] bdev_nvme.c:6861:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:29.957 [2024-04-15 18:16:18.880565] bdev_nvme.c:6827:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:29:30.214 [2024-04-15 18:16:19.065565] bdev_nvme.c:7688:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:29:30.214 [2024-04-15 18:16:19.065632] bdev_nvme.c:7688:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:29:30.214 [2024-04-15 18:16:19.065675] bdev_nvme.c:7688:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:29:30.214 [2024-04-15 18:16:19.065702] bdev_nvme.c:6717:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:29:30.214 [2024-04-15 18:16:19.065733] bdev_nvme.c:6676:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:29:30.214 18:16:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:30.214 18:16:19 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:29:30.214 18:16:19 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:30.214 18:16:19 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:30.214 18:16:19 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:29:30.214 18:16:19 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:30.214 18:16:19 -- common/autotest_common.sh@10 -- # set +x 00:29:30.214 18:16:19 -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:30.214 18:16:19 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:30.214 [2024-04-15 18:16:19.072922] bdev_nvme.c:1605:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x2211cf0 was disconnected and freed. delete nvme_qpair. 00:29:30.214 18:16:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:30.215 18:16:19 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:29:30.215 18:16:19 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:29:30.215 18:16:19 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:29:30.215 18:16:19 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:29:30.215 18:16:19 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:30.215 18:16:19 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:30.215 18:16:19 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:30.215 18:16:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:30.215 18:16:19 -- common/autotest_common.sh@10 -- # set +x 00:29:30.215 18:16:19 -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:30.215 18:16:19 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:30.472 18:16:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:30.472 18:16:19 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:30.472 18:16:19 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:31.408 18:16:20 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:31.408 18:16:20 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:31.408 18:16:20 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:31.408 18:16:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:31.408 18:16:20 -- common/autotest_common.sh@10 -- # set +x 00:29:31.408 18:16:20 -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:31.408 18:16:20 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:31.408 18:16:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:31.408 18:16:20 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:31.408 18:16:20 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:32.344 18:16:21 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:32.344 18:16:21 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:32.344 18:16:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:32.344 18:16:21 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:32.344 18:16:21 -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:32.344 18:16:21 -- common/autotest_common.sh@10 -- # set +x 00:29:32.344 18:16:21 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:32.344 18:16:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:32.602 18:16:21 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:32.602 18:16:21 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:33.539 18:16:22 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:33.539 18:16:22 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:33.539 18:16:22 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:33.539 18:16:22 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:29:33.539 18:16:22 -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:33.539 18:16:22 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:33.539 18:16:22 -- common/autotest_common.sh@10 -- # set +x 00:29:33.539 18:16:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:33.539 18:16:22 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:33.539 18:16:22 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:34.475 18:16:23 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:34.475 18:16:23 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:34.475 18:16:23 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:34.475 18:16:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:34.475 18:16:23 -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:34.475 18:16:23 -- common/autotest_common.sh@10 -- # set +x 00:29:34.475 18:16:23 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:34.475 18:16:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:34.733 18:16:23 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:34.733 18:16:23 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:35.670 18:16:24 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:35.670 18:16:24 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:35.670 18:16:24 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:35.670 18:16:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:35.670 18:16:24 -- common/autotest_common.sh@10 -- # set +x 00:29:35.670 18:16:24 -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:35.670 18:16:24 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:35.670 18:16:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:35.670 [2024-04-15 18:16:24.506586] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:29:35.670 [2024-04-15 18:16:24.506661] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:35.670 [2024-04-15 18:16:24.506698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:35.670 [2024-04-15 18:16:24.506720] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:35.670 [2024-04-15 18:16:24.506736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:35.670 [2024-04-15 18:16:24.506753] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:35.670 [2024-04-15 18:16:24.506777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:35.670 [2024-04-15 18:16:24.506794] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:35.670 [2024-04-15 18:16:24.506810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:35.670 [2024-04-15 18:16:24.506826] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP 
ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:29:35.670 [2024-04-15 18:16:24.506842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:35.670 [2024-04-15 18:16:24.506859] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d8e10 is same with the state(5) to be set 00:29:35.670 [2024-04-15 18:16:24.516603] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21d8e10 (9): Bad file descriptor 00:29:35.670 [2024-04-15 18:16:24.526653] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:35.670 18:16:24 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:35.670 18:16:24 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:36.606 18:16:25 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:36.606 18:16:25 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:36.606 18:16:25 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:36.606 18:16:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:36.606 18:16:25 -- common/autotest_common.sh@10 -- # set +x 00:29:36.606 18:16:25 -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:36.606 18:16:25 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:36.865 [2024-04-15 18:16:25.578079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:29:37.802 [2024-04-15 18:16:26.599111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:29:37.802 [2024-04-15 18:16:26.599196] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d8e10 with addr=10.0.0.2, port=4420 00:29:37.854 [2024-04-15 18:16:26.599227] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d8e10 is same with the state(5) to be set 00:29:37.854 [2024-04-15 18:16:26.599748] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21d8e10 (9): Bad file descriptor 00:29:37.854 [2024-04-15 18:16:26.599799] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
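[editor's note] The errno-110 storm above is the expected shape of this phase: the namespace-side address was deleted and the link downed, so every reconnect attempt fails until the 2-second ctrlr-loss timeout (set when discovery was started) fires and the controller, and with it bdev nvme0n1, is torn down. The polling loop driving the assertions is visible in the xtrace; a simplified sketch of it without the real helper's retry budget, followed by the two commands that provoked the outage:

wait_for_bdev() {    # poll the host app until its bdev list matches $1
    local expected=$1 bdevs
    while :; do
        bdevs=$(scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs)
        [[ "$bdevs" == "$expected" ]] && return 0
        sleep 1
    done
}
ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
wait_for_bdev ''    # empty list once the 2 s ctrlr-loss timeout expires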
00:29:37.854 [2024-04-15 18:16:26.599849] bdev_nvme.c:6649:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:29:37.854 [2024-04-15 18:16:26.599898] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:37.854 [2024-04-15 18:16:26.599923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.854 [2024-04-15 18:16:26.599964] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:37.854 [2024-04-15 18:16:26.599982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.854 [2024-04-15 18:16:26.599998] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:37.854 [2024-04-15 18:16:26.600013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.854 [2024-04-15 18:16:26.600029] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:37.854 [2024-04-15 18:16:26.600045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.854 [2024-04-15 18:16:26.600082] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:29:37.854 [2024-04-15 18:16:26.600101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.854 [2024-04-15 18:16:26.600117] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
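[editor's note] With the admin qpair gone and the discovery controller in failed state, the trace below restores the interface; discovery reattaches from scratch, so the namespace comes back under a fresh controller name rather than the old one. Condensed, reusing the wait_for_bdev sketch above:

ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
wait_for_bdev nvme1n1    # reattach creates controller nvme1, hence bdev nvme1n1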
00:29:37.854 [2024-04-15 18:16:26.600270] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21d82b0 (9): Bad file descriptor 00:29:37.854 [2024-04-15 18:16:26.601293] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:29:37.854 [2024-04-15 18:16:26.601321] nvme_ctrlr.c:1148:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:29:37.854 18:16:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:37.854 18:16:26 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:37.854 18:16:26 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:38.791 18:16:27 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:38.791 18:16:27 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:38.791 18:16:27 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:38.791 18:16:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:38.791 18:16:27 -- common/autotest_common.sh@10 -- # set +x 00:29:38.791 18:16:27 -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:38.791 18:16:27 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:38.792 18:16:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:38.792 18:16:27 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:29:38.792 18:16:27 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:38.792 18:16:27 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:38.792 18:16:27 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:29:38.792 18:16:27 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:38.792 18:16:27 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:38.792 18:16:27 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:38.792 18:16:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:38.792 18:16:27 -- common/autotest_common.sh@10 -- # set +x 00:29:38.792 18:16:27 -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:38.792 18:16:27 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:38.792 18:16:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:39.050 18:16:27 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:29:39.050 18:16:27 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:39.987 [2024-04-15 18:16:28.661227] bdev_nvme.c:6898:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:29:39.987 [2024-04-15 18:16:28.661261] bdev_nvme.c:6978:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:29:39.987 [2024-04-15 18:16:28.661287] bdev_nvme.c:6861:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:39.987 [2024-04-15 18:16:28.747572] bdev_nvme.c:6827:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:29:39.987 18:16:28 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:39.987 18:16:28 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:39.987 18:16:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:39.987 18:16:28 -- common/autotest_common.sh@10 -- # set +x 00:29:39.987 18:16:28 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:39.987 18:16:28 -- host/discovery_remove_ifc.sh@29 -- # sort 
00:29:39.987 18:16:28 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:39.987 18:16:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:39.987 18:16:28 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:29:39.987 18:16:28 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:39.987 [2024-04-15 18:16:28.849753] bdev_nvme.c:7688:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:29:39.987 [2024-04-15 18:16:28.849805] bdev_nvme.c:7688:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:29:39.987 [2024-04-15 18:16:28.849843] bdev_nvme.c:7688:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:29:39.987 [2024-04-15 18:16:28.849870] bdev_nvme.c:6717:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:29:39.987 [2024-04-15 18:16:28.849886] bdev_nvme.c:6676:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:29:39.987 [2024-04-15 18:16:28.858574] bdev_nvme.c:1605:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x21f5490 was disconnected and freed. delete nvme_qpair. 00:29:40.922 18:16:29 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:40.922 18:16:29 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:40.922 18:16:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:40.922 18:16:29 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:40.922 18:16:29 -- common/autotest_common.sh@10 -- # set +x 00:29:40.922 18:16:29 -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:40.922 18:16:29 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:40.922 18:16:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:41.182 18:16:29 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:29:41.182 18:16:29 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:29:41.182 18:16:29 -- host/discovery_remove_ifc.sh@90 -- # killprocess 3434358 00:29:41.182 18:16:29 -- common/autotest_common.sh@936 -- # '[' -z 3434358 ']' 00:29:41.182 18:16:29 -- common/autotest_common.sh@940 -- # kill -0 3434358 00:29:41.182 18:16:29 -- common/autotest_common.sh@941 -- # uname 00:29:41.182 18:16:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:41.182 18:16:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3434358 00:29:41.183 18:16:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:29:41.183 18:16:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:29:41.183 18:16:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3434358' 00:29:41.183 killing process with pid 3434358 00:29:41.183 18:16:29 -- common/autotest_common.sh@955 -- # kill 3434358 00:29:41.183 18:16:29 -- common/autotest_common.sh@960 -- # wait 3434358 00:29:41.443 18:16:30 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:29:41.443 18:16:30 -- nvmf/common.sh@477 -- # nvmfcleanup 00:29:41.443 18:16:30 -- nvmf/common.sh@117 -- # sync 00:29:41.443 18:16:30 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:41.443 18:16:30 -- nvmf/common.sh@120 -- # set +e 00:29:41.443 18:16:30 -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:41.443 18:16:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:41.443 rmmod nvme_tcp 00:29:41.443 rmmod nvme_fabrics 00:29:41.443 rmmod nvme_keyring 00:29:41.443 18:16:30 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:41.443 18:16:30 -- nvmf/common.sh@124 -- # set -e 00:29:41.443 18:16:30 
-- nvmf/common.sh@125 -- # return 0 00:29:41.443 18:16:30 -- nvmf/common.sh@478 -- # '[' -n 3434285 ']' 00:29:41.443 18:16:30 -- nvmf/common.sh@479 -- # killprocess 3434285 00:29:41.443 18:16:30 -- common/autotest_common.sh@936 -- # '[' -z 3434285 ']' 00:29:41.443 18:16:30 -- common/autotest_common.sh@940 -- # kill -0 3434285 00:29:41.443 18:16:30 -- common/autotest_common.sh@941 -- # uname 00:29:41.443 18:16:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:41.443 18:16:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3434285 00:29:41.443 18:16:30 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:29:41.443 18:16:30 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:29:41.443 18:16:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3434285' 00:29:41.443 killing process with pid 3434285 00:29:41.443 18:16:30 -- common/autotest_common.sh@955 -- # kill 3434285 00:29:41.443 18:16:30 -- common/autotest_common.sh@960 -- # wait 3434285 00:29:41.702 18:16:30 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:29:41.702 18:16:30 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:29:41.702 18:16:30 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:29:41.702 18:16:30 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:41.702 18:16:30 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:41.702 18:16:30 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:41.702 18:16:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:41.702 18:16:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:43.611 18:16:32 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:43.611 00:29:43.611 real 0m18.273s 00:29:43.611 user 0m25.399s 00:29:43.611 sys 0m3.392s 00:29:43.611 18:16:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:43.611 18:16:32 -- common/autotest_common.sh@10 -- # set +x 00:29:43.611 ************************************ 00:29:43.611 END TEST nvmf_discovery_remove_ifc 00:29:43.611 ************************************ 00:29:43.870 18:16:32 -- nvmf/nvmf.sh@101 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:29:43.870 18:16:32 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:29:43.870 18:16:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:43.870 18:16:32 -- common/autotest_common.sh@10 -- # set +x 00:29:43.870 ************************************ 00:29:43.870 START TEST nvmf_identify_kernel_target 00:29:43.870 ************************************ 00:29:43.870 18:16:32 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:29:43.870 * Looking for test storage... 
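The teardown that just completed above follows a fixed pattern: retry unloading the initiator-side NVMe kernel modules while queue pairs drain, kill the nvmf_tgt process by pid, then flush and remove the per-test network namespace. A condensed sketch of that sequence, reconstructed from the trace rather than copied verbatim from SPDK's nvmf/common.sh (the retry count, module names, pid and interface names are the ones the log shows):

    # hedged reconstruction of the nvmfcleanup/killprocess steps traced above
    nvmfcleanup() {
        sync
        set +e                                 # module removal may fail while references drain
        for i in {1..20}; do
            modprobe -v -r nvme-tcp && break   # also drops nvme_tcp/nvme_fabrics/nvme_keyring, per the rmmod lines
            sleep 1
        done
        modprobe -v -r nvme-fabrics
        set -e
    }

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 0             # nothing left to kill
        kill "$pid"
        wait "$pid" || true                    # reaping works here because nvmf_tgt is a child of this shell
        echo "killed process with pid $pid"
    }

    nvmfcleanup
    killprocess 3434285                        # the nvmf_tgt pid from the trace
    ip netns delete cvl_0_0_ns_spdk            # what remove_spdk_ns amounts to for this run
    ip -4 addr flush cvl_0_1
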
00:29:43.870 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:43.870 18:16:32 -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:43.870 18:16:32 -- nvmf/common.sh@7 -- # uname -s 00:29:43.870 18:16:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:43.870 18:16:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:43.870 18:16:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:43.870 18:16:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:43.870 18:16:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:43.870 18:16:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:43.870 18:16:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:43.870 18:16:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:43.870 18:16:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:43.870 18:16:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:43.870 18:16:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:29:43.870 18:16:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:29:43.870 18:16:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:43.870 18:16:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:43.870 18:16:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:43.870 18:16:32 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:43.870 18:16:32 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:43.870 18:16:32 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:43.870 18:16:32 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:43.870 18:16:32 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:43.870 18:16:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:43.870 18:16:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:43.870 18:16:32 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:43.870 18:16:32 -- paths/export.sh@5 -- # export PATH 00:29:43.870 18:16:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:43.870 18:16:32 -- nvmf/common.sh@47 -- # : 0 00:29:43.870 18:16:32 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:43.870 18:16:32 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:43.870 18:16:32 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:43.870 18:16:32 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:43.870 18:16:32 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:43.870 18:16:32 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:43.870 18:16:32 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:43.870 18:16:32 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:43.870 18:16:32 -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:29:43.870 18:16:32 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:29:43.870 18:16:32 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:43.870 18:16:32 -- nvmf/common.sh@437 -- # prepare_net_devs 00:29:43.870 18:16:32 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:29:43.870 18:16:32 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:29:43.870 18:16:32 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:43.870 18:16:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:43.870 18:16:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:43.870 18:16:32 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:29:43.870 18:16:32 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:29:43.870 18:16:32 -- nvmf/common.sh@285 -- # xtrace_disable 00:29:43.870 18:16:32 -- common/autotest_common.sh@10 -- # set +x 00:29:46.412 18:16:35 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:29:46.412 18:16:35 -- nvmf/common.sh@291 -- # pci_devs=() 00:29:46.412 18:16:35 -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:46.412 18:16:35 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:46.412 18:16:35 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:46.412 18:16:35 -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:46.412 18:16:35 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:46.412 18:16:35 -- nvmf/common.sh@295 -- # net_devs=() 00:29:46.412 18:16:35 -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:46.412 18:16:35 -- nvmf/common.sh@296 -- # e810=() 00:29:46.412 18:16:35 -- nvmf/common.sh@296 -- # local -ga e810 00:29:46.412 18:16:35 -- nvmf/common.sh@297 -- # 
x722=() 00:29:46.412 18:16:35 -- nvmf/common.sh@297 -- # local -ga x722 00:29:46.412 18:16:35 -- nvmf/common.sh@298 -- # mlx=() 00:29:46.412 18:16:35 -- nvmf/common.sh@298 -- # local -ga mlx 00:29:46.412 18:16:35 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:46.412 18:16:35 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:46.412 18:16:35 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:46.412 18:16:35 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:46.412 18:16:35 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:46.412 18:16:35 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:46.412 18:16:35 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:46.412 18:16:35 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:46.412 18:16:35 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:46.412 18:16:35 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:46.412 18:16:35 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:46.412 18:16:35 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:46.412 18:16:35 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:46.412 18:16:35 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:46.412 18:16:35 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:46.412 18:16:35 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:46.412 18:16:35 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:46.412 18:16:35 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:46.412 18:16:35 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:29:46.412 Found 0000:84:00.0 (0x8086 - 0x159b) 00:29:46.412 18:16:35 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:46.412 18:16:35 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:46.412 18:16:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:46.412 18:16:35 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:46.412 18:16:35 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:46.412 18:16:35 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:46.412 18:16:35 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:29:46.412 Found 0000:84:00.1 (0x8086 - 0x159b) 00:29:46.412 18:16:35 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:46.412 18:16:35 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:46.412 18:16:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:46.412 18:16:35 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:46.412 18:16:35 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:46.412 18:16:35 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:46.412 18:16:35 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:46.412 18:16:35 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:46.412 18:16:35 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:46.413 18:16:35 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:46.413 18:16:35 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:29:46.413 18:16:35 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:46.413 18:16:35 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:29:46.413 Found net devices under 0000:84:00.0: cvl_0_0 00:29:46.413 18:16:35 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 
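At this point nvmftestinit is walking the PCI bus: it matches the two Intel E810 ports (vendor 0x8086, device 0x159b) against its supported-device tables and resolves each one's kernel net device name through sysfs, which is where the cvl_0_0 and cvl_0_1 names come from. A minimal standalone sketch of that mapping, with lspci standing in for SPDK's internal pci_bus_cache lookup:

    # enumerate E810 ports and the net devices registered under them in sysfs
    for pci in $(lspci -D -d 8086:159b | awk '{print $1}'); do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        [[ -e ${pci_net_devs[0]} ]] || continue        # bound to vfio-pci, or no netdev registered yet
        pci_net_devs=("${pci_net_devs[@]##*/}")        # strip the sysfs prefix, keep only the names
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done

With two ports found, the first (cvl_0_0) becomes the target interface and is moved into the cvl_0_0_ns_spdk namespace as 10.0.0.2, while the second (cvl_0_1) stays in the root namespace as the initiator side on 10.0.0.1; the ping pair that follows verifies both directions before any NVMe traffic is attempted.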
00:29:46.413 18:16:35 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:46.413 18:16:35 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:46.413 18:16:35 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:29:46.413 18:16:35 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:46.413 18:16:35 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:29:46.413 Found net devices under 0000:84:00.1: cvl_0_1 00:29:46.413 18:16:35 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:29:46.413 18:16:35 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:29:46.413 18:16:35 -- nvmf/common.sh@403 -- # is_hw=yes 00:29:46.413 18:16:35 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:29:46.413 18:16:35 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:29:46.413 18:16:35 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:29:46.413 18:16:35 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:46.413 18:16:35 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:46.413 18:16:35 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:46.413 18:16:35 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:46.413 18:16:35 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:46.413 18:16:35 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:46.413 18:16:35 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:46.413 18:16:35 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:46.413 18:16:35 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:46.413 18:16:35 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:46.413 18:16:35 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:46.413 18:16:35 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:46.413 18:16:35 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:46.413 18:16:35 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:46.413 18:16:35 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:46.413 18:16:35 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:46.413 18:16:35 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:46.413 18:16:35 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:46.413 18:16:35 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:46.413 18:16:35 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:46.413 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:46.413 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.143 ms 00:29:46.413 00:29:46.413 --- 10.0.0.2 ping statistics --- 00:29:46.413 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:46.413 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:29:46.413 18:16:35 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:46.413 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:46.413 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:29:46.413 00:29:46.413 --- 10.0.0.1 ping statistics --- 00:29:46.413 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:46.413 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:29:46.413 18:16:35 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:46.413 18:16:35 -- nvmf/common.sh@411 -- # return 0 00:29:46.413 18:16:35 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:29:46.413 18:16:35 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:46.413 18:16:35 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:29:46.413 18:16:35 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:29:46.413 18:16:35 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:46.413 18:16:35 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:29:46.413 18:16:35 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:29:46.413 18:16:35 -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:29:46.413 18:16:35 -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:29:46.413 18:16:35 -- nvmf/common.sh@717 -- # local ip 00:29:46.413 18:16:35 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:46.413 18:16:35 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:46.413 18:16:35 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:46.413 18:16:35 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:46.413 18:16:35 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:46.413 18:16:35 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:46.413 18:16:35 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:46.413 18:16:35 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:46.413 18:16:35 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:46.413 18:16:35 -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:29:46.413 18:16:35 -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:29:46.413 18:16:35 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:29:46.413 18:16:35 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:29:46.413 18:16:35 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:46.413 18:16:35 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:46.413 18:16:35 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:29:46.413 18:16:35 -- nvmf/common.sh@628 -- # local block nvme 00:29:46.413 18:16:35 -- nvmf/common.sh@630 -- # [[ ! 
-e /sys/module/nvmet ]] 00:29:46.413 18:16:35 -- nvmf/common.sh@631 -- # modprobe nvmet 00:29:46.413 18:16:35 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:29:46.413 18:16:35 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:29:47.791 Waiting for block devices as requested 00:29:47.791 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:29:47.791 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:29:47.791 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:29:48.049 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:29:48.049 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:29:48.049 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:29:48.049 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:29:48.049 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:29:48.308 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:29:48.308 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:29:48.308 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:29:48.566 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:29:48.566 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:29:48.566 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:29:48.566 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:29:48.824 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:29:48.824 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:29:48.824 18:16:37 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:29:48.824 18:16:37 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:29:48.824 18:16:37 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:29:48.824 18:16:37 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:29:48.824 18:16:37 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:29:48.824 18:16:37 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:29:48.824 18:16:37 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:29:48.824 18:16:37 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:29:48.824 18:16:37 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:29:48.824 No valid GPT data, bailing 00:29:48.824 18:16:37 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:29:48.824 18:16:37 -- scripts/common.sh@391 -- # pt= 00:29:48.824 18:16:37 -- scripts/common.sh@392 -- # return 1 00:29:48.824 18:16:37 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:29:48.824 18:16:37 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme0n1 ]] 00:29:48.824 18:16:37 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:48.824 18:16:37 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:48.824 18:16:37 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:29:48.824 18:16:37 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:29:48.824 18:16:37 -- nvmf/common.sh@656 -- # echo 1 00:29:48.824 18:16:37 -- nvmf/common.sh@657 -- # echo /dev/nvme0n1 00:29:48.824 18:16:37 -- nvmf/common.sh@658 -- # echo 1 00:29:48.824 18:16:37 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:29:48.824 18:16:37 -- nvmf/common.sh@661 -- # echo tcp 00:29:48.824 18:16:37 -- nvmf/common.sh@662 -- # echo 4420 00:29:48.824 18:16:37 -- nvmf/common.sh@663 -- # echo ipv4 00:29:48.824 18:16:37 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:29:48.824 18:16:37 -- nvmf/common.sh@669 -- # 
nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.1 -t tcp -s 4420 00:29:48.824 00:29:48.824 Discovery Log Number of Records 2, Generation counter 2 00:29:48.824 =====Discovery Log Entry 0====== 00:29:48.824 trtype: tcp 00:29:48.824 adrfam: ipv4 00:29:48.824 subtype: current discovery subsystem 00:29:48.824 treq: not specified, sq flow control disable supported 00:29:48.824 portid: 1 00:29:48.824 trsvcid: 4420 00:29:48.824 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:29:48.824 traddr: 10.0.0.1 00:29:48.824 eflags: none 00:29:48.824 sectype: none 00:29:48.824 =====Discovery Log Entry 1====== 00:29:48.824 trtype: tcp 00:29:48.824 adrfam: ipv4 00:29:48.824 subtype: nvme subsystem 00:29:48.824 treq: not specified, sq flow control disable supported 00:29:48.824 portid: 1 00:29:48.824 trsvcid: 4420 00:29:48.824 subnqn: nqn.2016-06.io.spdk:testnqn 00:29:48.824 traddr: 10.0.0.1 00:29:48.824 eflags: none 00:29:48.824 sectype: none 00:29:48.824 18:16:37 -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:29:48.824 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:29:49.083 EAL: No free 2048 kB hugepages reported on node 1 00:29:49.083 ===================================================== 00:29:49.083 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:29:49.083 ===================================================== 00:29:49.083 Controller Capabilities/Features 00:29:49.083 ================================ 00:29:49.083 Vendor ID: 0000 00:29:49.083 Subsystem Vendor ID: 0000 00:29:49.083 Serial Number: ffc488df9881cce2e8a6 00:29:49.083 Model Number: Linux 00:29:49.083 Firmware Version: 6.7.0-68 00:29:49.083 Recommended Arb Burst: 0 00:29:49.083 IEEE OUI Identifier: 00 00 00 00:29:49.083 Multi-path I/O 00:29:49.083 May have multiple subsystem ports: No 00:29:49.083 May have multiple controllers: No 00:29:49.083 Associated with SR-IOV VF: No 00:29:49.083 Max Data Transfer Size: Unlimited 00:29:49.083 Max Number of Namespaces: 0 00:29:49.083 Max Number of I/O Queues: 1024 00:29:49.083 NVMe Specification Version (VS): 1.3 00:29:49.083 NVMe Specification Version (Identify): 1.3 00:29:49.083 Maximum Queue Entries: 1024 00:29:49.083 Contiguous Queues Required: No 00:29:49.083 Arbitration Mechanisms Supported 00:29:49.083 Weighted Round Robin: Not Supported 00:29:49.083 Vendor Specific: Not Supported 00:29:49.083 Reset Timeout: 7500 ms 00:29:49.083 Doorbell Stride: 4 bytes 00:29:49.083 NVM Subsystem Reset: Not Supported 00:29:49.083 Command Sets Supported 00:29:49.083 NVM Command Set: Supported 00:29:49.083 Boot Partition: Not Supported 00:29:49.083 Memory Page Size Minimum: 4096 bytes 00:29:49.083 Memory Page Size Maximum: 4096 bytes 00:29:49.083 Persistent Memory Region: Not Supported 00:29:49.083 Optional Asynchronous Events Supported 00:29:49.083 Namespace Attribute Notices: Not Supported 00:29:49.083 Firmware Activation Notices: Not Supported 00:29:49.083 ANA Change Notices: Not Supported 00:29:49.083 PLE Aggregate Log Change Notices: Not Supported 00:29:49.083 LBA Status Info Alert Notices: Not Supported 00:29:49.083 EGE Aggregate Log Change Notices: Not Supported 00:29:49.083 Normal NVM Subsystem Shutdown event: Not Supported 00:29:49.083 Zone Descriptor Change Notices: Not Supported 00:29:49.083 Discovery Log Change Notices: Supported 
00:29:49.083 Controller Attributes 00:29:49.083 128-bit Host Identifier: Not Supported 00:29:49.083 Non-Operational Permissive Mode: Not Supported 00:29:49.083 NVM Sets: Not Supported 00:29:49.083 Read Recovery Levels: Not Supported 00:29:49.083 Endurance Groups: Not Supported 00:29:49.083 Predictable Latency Mode: Not Supported 00:29:49.083 Traffic Based Keep ALive: Not Supported 00:29:49.083 Namespace Granularity: Not Supported 00:29:49.083 SQ Associations: Not Supported 00:29:49.083 UUID List: Not Supported 00:29:49.083 Multi-Domain Subsystem: Not Supported 00:29:49.083 Fixed Capacity Management: Not Supported 00:29:49.083 Variable Capacity Management: Not Supported 00:29:49.083 Delete Endurance Group: Not Supported 00:29:49.083 Delete NVM Set: Not Supported 00:29:49.083 Extended LBA Formats Supported: Not Supported 00:29:49.084 Flexible Data Placement Supported: Not Supported 00:29:49.084 00:29:49.084 Controller Memory Buffer Support 00:29:49.084 ================================ 00:29:49.084 Supported: No 00:29:49.084 00:29:49.084 Persistent Memory Region Support 00:29:49.084 ================================ 00:29:49.084 Supported: No 00:29:49.084 00:29:49.084 Admin Command Set Attributes 00:29:49.084 ============================ 00:29:49.084 Security Send/Receive: Not Supported 00:29:49.084 Format NVM: Not Supported 00:29:49.084 Firmware Activate/Download: Not Supported 00:29:49.084 Namespace Management: Not Supported 00:29:49.084 Device Self-Test: Not Supported 00:29:49.084 Directives: Not Supported 00:29:49.084 NVMe-MI: Not Supported 00:29:49.084 Virtualization Management: Not Supported 00:29:49.084 Doorbell Buffer Config: Not Supported 00:29:49.084 Get LBA Status Capability: Not Supported 00:29:49.084 Command & Feature Lockdown Capability: Not Supported 00:29:49.084 Abort Command Limit: 1 00:29:49.084 Async Event Request Limit: 1 00:29:49.084 Number of Firmware Slots: N/A 00:29:49.084 Firmware Slot 1 Read-Only: N/A 00:29:49.084 Firmware Activation Without Reset: N/A 00:29:49.084 Multiple Update Detection Support: N/A 00:29:49.084 Firmware Update Granularity: No Information Provided 00:29:49.084 Per-Namespace SMART Log: No 00:29:49.084 Asymmetric Namespace Access Log Page: Not Supported 00:29:49.084 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:29:49.084 Command Effects Log Page: Not Supported 00:29:49.084 Get Log Page Extended Data: Supported 00:29:49.084 Telemetry Log Pages: Not Supported 00:29:49.084 Persistent Event Log Pages: Not Supported 00:29:49.084 Supported Log Pages Log Page: May Support 00:29:49.084 Commands Supported & Effects Log Page: Not Supported 00:29:49.084 Feature Identifiers & Effects Log Page:May Support 00:29:49.084 NVMe-MI Commands & Effects Log Page: May Support 00:29:49.084 Data Area 4 for Telemetry Log: Not Supported 00:29:49.084 Error Log Page Entries Supported: 1 00:29:49.084 Keep Alive: Not Supported 00:29:49.084 00:29:49.084 NVM Command Set Attributes 00:29:49.084 ========================== 00:29:49.084 Submission Queue Entry Size 00:29:49.084 Max: 1 00:29:49.084 Min: 1 00:29:49.084 Completion Queue Entry Size 00:29:49.084 Max: 1 00:29:49.084 Min: 1 00:29:49.084 Number of Namespaces: 0 00:29:49.084 Compare Command: Not Supported 00:29:49.084 Write Uncorrectable Command: Not Supported 00:29:49.084 Dataset Management Command: Not Supported 00:29:49.084 Write Zeroes Command: Not Supported 00:29:49.084 Set Features Save Field: Not Supported 00:29:49.084 Reservations: Not Supported 00:29:49.084 Timestamp: Not Supported 00:29:49.084 Copy: Not 
Supported 00:29:49.084 Volatile Write Cache: Not Present 00:29:49.084 Atomic Write Unit (Normal): 1 00:29:49.084 Atomic Write Unit (PFail): 1 00:29:49.084 Atomic Compare & Write Unit: 1 00:29:49.084 Fused Compare & Write: Not Supported 00:29:49.084 Scatter-Gather List 00:29:49.084 SGL Command Set: Supported 00:29:49.084 SGL Keyed: Not Supported 00:29:49.084 SGL Bit Bucket Descriptor: Not Supported 00:29:49.084 SGL Metadata Pointer: Not Supported 00:29:49.084 Oversized SGL: Not Supported 00:29:49.084 SGL Metadata Address: Not Supported 00:29:49.084 SGL Offset: Supported 00:29:49.084 Transport SGL Data Block: Not Supported 00:29:49.084 Replay Protected Memory Block: Not Supported 00:29:49.084 00:29:49.084 Firmware Slot Information 00:29:49.084 ========================= 00:29:49.084 Active slot: 0 00:29:49.084 00:29:49.084 00:29:49.084 Error Log 00:29:49.084 ========= 00:29:49.084 00:29:49.084 Active Namespaces 00:29:49.084 ================= 00:29:49.084 Discovery Log Page 00:29:49.084 ================== 00:29:49.084 Generation Counter: 2 00:29:49.084 Number of Records: 2 00:29:49.084 Record Format: 0 00:29:49.084 00:29:49.084 Discovery Log Entry 0 00:29:49.084 ---------------------- 00:29:49.084 Transport Type: 3 (TCP) 00:29:49.084 Address Family: 1 (IPv4) 00:29:49.084 Subsystem Type: 3 (Current Discovery Subsystem) 00:29:49.084 Entry Flags: 00:29:49.084 Duplicate Returned Information: 0 00:29:49.084 Explicit Persistent Connection Support for Discovery: 0 00:29:49.084 Transport Requirements: 00:29:49.084 Secure Channel: Not Specified 00:29:49.084 Port ID: 1 (0x0001) 00:29:49.084 Controller ID: 65535 (0xffff) 00:29:49.084 Admin Max SQ Size: 32 00:29:49.084 Transport Service Identifier: 4420 00:29:49.084 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:29:49.084 Transport Address: 10.0.0.1 00:29:49.084 Discovery Log Entry 1 00:29:49.084 ---------------------- 00:29:49.084 Transport Type: 3 (TCP) 00:29:49.084 Address Family: 1 (IPv4) 00:29:49.084 Subsystem Type: 2 (NVM Subsystem) 00:29:49.084 Entry Flags: 00:29:49.084 Duplicate Returned Information: 0 00:29:49.084 Explicit Persistent Connection Support for Discovery: 0 00:29:49.084 Transport Requirements: 00:29:49.084 Secure Channel: Not Specified 00:29:49.084 Port ID: 1 (0x0001) 00:29:49.084 Controller ID: 65535 (0xffff) 00:29:49.084 Admin Max SQ Size: 32 00:29:49.084 Transport Service Identifier: 4420 00:29:49.084 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:29:49.084 Transport Address: 10.0.0.1 00:29:49.084 18:16:37 -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:49.084 EAL: No free 2048 kB hugepages reported on node 1 00:29:49.084 get_feature(0x01) failed 00:29:49.084 get_feature(0x02) failed 00:29:49.084 get_feature(0x04) failed 00:29:49.084 ===================================================== 00:29:49.084 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:29:49.084 ===================================================== 00:29:49.084 Controller Capabilities/Features 00:29:49.084 ================================ 00:29:49.084 Vendor ID: 0000 00:29:49.084 Subsystem Vendor ID: 0000 00:29:49.084 Serial Number: 0802a5aefbcd298491a6 00:29:49.084 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:29:49.084 Firmware Version: 6.7.0-68 00:29:49.084 Recommended Arb Burst: 6 00:29:49.084 IEEE OUI Identifier: 00 00 00 
00:29:49.084 Multi-path I/O 00:29:49.084 May have multiple subsystem ports: Yes 00:29:49.084 May have multiple controllers: Yes 00:29:49.084 Associated with SR-IOV VF: No 00:29:49.084 Max Data Transfer Size: Unlimited 00:29:49.084 Max Number of Namespaces: 1024 00:29:49.084 Max Number of I/O Queues: 128 00:29:49.084 NVMe Specification Version (VS): 1.3 00:29:49.084 NVMe Specification Version (Identify): 1.3 00:29:49.084 Maximum Queue Entries: 1024 00:29:49.084 Contiguous Queues Required: No 00:29:49.084 Arbitration Mechanisms Supported 00:29:49.084 Weighted Round Robin: Not Supported 00:29:49.084 Vendor Specific: Not Supported 00:29:49.084 Reset Timeout: 7500 ms 00:29:49.084 Doorbell Stride: 4 bytes 00:29:49.084 NVM Subsystem Reset: Not Supported 00:29:49.084 Command Sets Supported 00:29:49.084 NVM Command Set: Supported 00:29:49.084 Boot Partition: Not Supported 00:29:49.084 Memory Page Size Minimum: 4096 bytes 00:29:49.084 Memory Page Size Maximum: 4096 bytes 00:29:49.084 Persistent Memory Region: Not Supported 00:29:49.084 Optional Asynchronous Events Supported 00:29:49.084 Namespace Attribute Notices: Supported 00:29:49.084 Firmware Activation Notices: Not Supported 00:29:49.084 ANA Change Notices: Supported 00:29:49.084 PLE Aggregate Log Change Notices: Not Supported 00:29:49.084 LBA Status Info Alert Notices: Not Supported 00:29:49.084 EGE Aggregate Log Change Notices: Not Supported 00:29:49.084 Normal NVM Subsystem Shutdown event: Not Supported 00:29:49.084 Zone Descriptor Change Notices: Not Supported 00:29:49.084 Discovery Log Change Notices: Not Supported 00:29:49.084 Controller Attributes 00:29:49.084 128-bit Host Identifier: Supported 00:29:49.084 Non-Operational Permissive Mode: Not Supported 00:29:49.084 NVM Sets: Not Supported 00:29:49.084 Read Recovery Levels: Not Supported 00:29:49.084 Endurance Groups: Not Supported 00:29:49.084 Predictable Latency Mode: Not Supported 00:29:49.084 Traffic Based Keep ALive: Supported 00:29:49.084 Namespace Granularity: Not Supported 00:29:49.084 SQ Associations: Not Supported 00:29:49.084 UUID List: Not Supported 00:29:49.084 Multi-Domain Subsystem: Not Supported 00:29:49.084 Fixed Capacity Management: Not Supported 00:29:49.084 Variable Capacity Management: Not Supported 00:29:49.084 Delete Endurance Group: Not Supported 00:29:49.084 Delete NVM Set: Not Supported 00:29:49.084 Extended LBA Formats Supported: Not Supported 00:29:49.084 Flexible Data Placement Supported: Not Supported 00:29:49.084 00:29:49.084 Controller Memory Buffer Support 00:29:49.084 ================================ 00:29:49.084 Supported: No 00:29:49.084 00:29:49.084 Persistent Memory Region Support 00:29:49.084 ================================ 00:29:49.084 Supported: No 00:29:49.084 00:29:49.084 Admin Command Set Attributes 00:29:49.084 ============================ 00:29:49.085 Security Send/Receive: Not Supported 00:29:49.085 Format NVM: Not Supported 00:29:49.085 Firmware Activate/Download: Not Supported 00:29:49.085 Namespace Management: Not Supported 00:29:49.085 Device Self-Test: Not Supported 00:29:49.085 Directives: Not Supported 00:29:49.085 NVMe-MI: Not Supported 00:29:49.085 Virtualization Management: Not Supported 00:29:49.085 Doorbell Buffer Config: Not Supported 00:29:49.085 Get LBA Status Capability: Not Supported 00:29:49.085 Command & Feature Lockdown Capability: Not Supported 00:29:49.085 Abort Command Limit: 4 00:29:49.085 Async Event Request Limit: 4 00:29:49.085 Number of Firmware Slots: N/A 00:29:49.085 Firmware Slot 1 Read-Only: N/A 00:29:49.085 
Firmware Activation Without Reset: N/A 00:29:49.085 Multiple Update Detection Support: N/A 00:29:49.085 Firmware Update Granularity: No Information Provided 00:29:49.085 Per-Namespace SMART Log: Yes 00:29:49.085 Asymmetric Namespace Access Log Page: Supported 00:29:49.085 ANA Transition Time : 10 sec 00:29:49.085 00:29:49.085 Asymmetric Namespace Access Capabilities 00:29:49.085 ANA Optimized State : Supported 00:29:49.085 ANA Non-Optimized State : Supported 00:29:49.085 ANA Inaccessible State : Supported 00:29:49.085 ANA Persistent Loss State : Supported 00:29:49.085 ANA Change State : Supported 00:29:49.085 ANAGRPID is not changed : No 00:29:49.085 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:29:49.085 00:29:49.085 ANA Group Identifier Maximum : 128 00:29:49.085 Number of ANA Group Identifiers : 128 00:29:49.085 Max Number of Allowed Namespaces : 1024 00:29:49.085 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:29:49.085 Command Effects Log Page: Supported 00:29:49.085 Get Log Page Extended Data: Supported 00:29:49.085 Telemetry Log Pages: Not Supported 00:29:49.085 Persistent Event Log Pages: Not Supported 00:29:49.085 Supported Log Pages Log Page: May Support 00:29:49.085 Commands Supported & Effects Log Page: Not Supported 00:29:49.085 Feature Identifiers & Effects Log Page:May Support 00:29:49.085 NVMe-MI Commands & Effects Log Page: May Support 00:29:49.085 Data Area 4 for Telemetry Log: Not Supported 00:29:49.085 Error Log Page Entries Supported: 128 00:29:49.085 Keep Alive: Supported 00:29:49.085 Keep Alive Granularity: 1000 ms 00:29:49.085 00:29:49.085 NVM Command Set Attributes 00:29:49.085 ========================== 00:29:49.085 Submission Queue Entry Size 00:29:49.085 Max: 64 00:29:49.085 Min: 64 00:29:49.085 Completion Queue Entry Size 00:29:49.085 Max: 16 00:29:49.085 Min: 16 00:29:49.085 Number of Namespaces: 1024 00:29:49.085 Compare Command: Not Supported 00:29:49.085 Write Uncorrectable Command: Not Supported 00:29:49.085 Dataset Management Command: Supported 00:29:49.085 Write Zeroes Command: Supported 00:29:49.085 Set Features Save Field: Not Supported 00:29:49.085 Reservations: Not Supported 00:29:49.085 Timestamp: Not Supported 00:29:49.085 Copy: Not Supported 00:29:49.085 Volatile Write Cache: Present 00:29:49.085 Atomic Write Unit (Normal): 1 00:29:49.085 Atomic Write Unit (PFail): 1 00:29:49.085 Atomic Compare & Write Unit: 1 00:29:49.085 Fused Compare & Write: Not Supported 00:29:49.085 Scatter-Gather List 00:29:49.085 SGL Command Set: Supported 00:29:49.085 SGL Keyed: Not Supported 00:29:49.085 SGL Bit Bucket Descriptor: Not Supported 00:29:49.085 SGL Metadata Pointer: Not Supported 00:29:49.085 Oversized SGL: Not Supported 00:29:49.085 SGL Metadata Address: Not Supported 00:29:49.085 SGL Offset: Supported 00:29:49.085 Transport SGL Data Block: Not Supported 00:29:49.085 Replay Protected Memory Block: Not Supported 00:29:49.085 00:29:49.085 Firmware Slot Information 00:29:49.085 ========================= 00:29:49.085 Active slot: 0 00:29:49.085 00:29:49.085 Asymmetric Namespace Access 00:29:49.085 =========================== 00:29:49.085 Change Count : 0 00:29:49.085 Number of ANA Group Descriptors : 1 00:29:49.085 ANA Group Descriptor : 0 00:29:49.085 ANA Group ID : 1 00:29:49.085 Number of NSID Values : 1 00:29:49.085 Change Count : 0 00:29:49.085 ANA State : 1 00:29:49.085 Namespace Identifier : 1 00:29:49.085 00:29:49.085 Commands Supported and Effects 00:29:49.085 ============================== 00:29:49.085 Admin Commands 00:29:49.085 -------------- 
00:29:49.085 Get Log Page (02h): Supported 00:29:49.085 Identify (06h): Supported 00:29:49.085 Abort (08h): Supported 00:29:49.085 Set Features (09h): Supported 00:29:49.085 Get Features (0Ah): Supported 00:29:49.085 Asynchronous Event Request (0Ch): Supported 00:29:49.085 Keep Alive (18h): Supported 00:29:49.085 I/O Commands 00:29:49.085 ------------ 00:29:49.085 Flush (00h): Supported 00:29:49.085 Write (01h): Supported LBA-Change 00:29:49.085 Read (02h): Supported 00:29:49.085 Write Zeroes (08h): Supported LBA-Change 00:29:49.085 Dataset Management (09h): Supported 00:29:49.085 00:29:49.085 Error Log 00:29:49.085 ========= 00:29:49.085 Entry: 0 00:29:49.085 Error Count: 0x3 00:29:49.085 Submission Queue Id: 0x0 00:29:49.085 Command Id: 0x5 00:29:49.085 Phase Bit: 0 00:29:49.085 Status Code: 0x2 00:29:49.085 Status Code Type: 0x0 00:29:49.085 Do Not Retry: 1 00:29:49.085 Error Location: 0x28 00:29:49.085 LBA: 0x0 00:29:49.085 Namespace: 0x0 00:29:49.085 Vendor Log Page: 0x0 00:29:49.085 ----------- 00:29:49.085 Entry: 1 00:29:49.085 Error Count: 0x2 00:29:49.085 Submission Queue Id: 0x0 00:29:49.085 Command Id: 0x5 00:29:49.085 Phase Bit: 0 00:29:49.085 Status Code: 0x2 00:29:49.085 Status Code Type: 0x0 00:29:49.085 Do Not Retry: 1 00:29:49.085 Error Location: 0x28 00:29:49.085 LBA: 0x0 00:29:49.085 Namespace: 0x0 00:29:49.085 Vendor Log Page: 0x0 00:29:49.085 ----------- 00:29:49.085 Entry: 2 00:29:49.085 Error Count: 0x1 00:29:49.085 Submission Queue Id: 0x0 00:29:49.085 Command Id: 0x4 00:29:49.085 Phase Bit: 0 00:29:49.085 Status Code: 0x2 00:29:49.085 Status Code Type: 0x0 00:29:49.085 Do Not Retry: 1 00:29:49.085 Error Location: 0x28 00:29:49.085 LBA: 0x0 00:29:49.085 Namespace: 0x0 00:29:49.085 Vendor Log Page: 0x0 00:29:49.085 00:29:49.085 Number of Queues 00:29:49.085 ================ 00:29:49.085 Number of I/O Submission Queues: 128 00:29:49.085 Number of I/O Completion Queues: 128 00:29:49.085 00:29:49.085 ZNS Specific Controller Data 00:29:49.085 ============================ 00:29:49.085 Zone Append Size Limit: 0 00:29:49.085 00:29:49.085 00:29:49.085 Active Namespaces 00:29:49.085 ================= 00:29:49.085 get_feature(0x05) failed 00:29:49.085 Namespace ID:1 00:29:49.085 Command Set Identifier: NVM (00h) 00:29:49.085 Deallocate: Supported 00:29:49.085 Deallocated/Unwritten Error: Not Supported 00:29:49.085 Deallocated Read Value: Unknown 00:29:49.085 Deallocate in Write Zeroes: Not Supported 00:29:49.085 Deallocated Guard Field: 0xFFFF 00:29:49.085 Flush: Supported 00:29:49.085 Reservation: Not Supported 00:29:49.085 Namespace Sharing Capabilities: Multiple Controllers 00:29:49.085 Size (in LBAs): 1953525168 (931GiB) 00:29:49.085 Capacity (in LBAs): 1953525168 (931GiB) 00:29:49.085 Utilization (in LBAs): 1953525168 (931GiB) 00:29:49.085 UUID: cbc9ca27-1217-4e2b-ba7e-991e65d73cd1 00:29:49.085 Thin Provisioning: Not Supported 00:29:49.085 Per-NS Atomic Units: Yes 00:29:49.085 Atomic Boundary Size (Normal): 0 00:29:49.085 Atomic Boundary Size (PFail): 0 00:29:49.085 Atomic Boundary Offset: 0 00:29:49.085 NGUID/EUI64 Never Reused: No 00:29:49.085 ANA group ID: 1 00:29:49.085 Namespace Write Protected: No 00:29:49.085 Number of LBA Formats: 1 00:29:49.085 Current LBA Format: LBA Format #00 00:29:49.085 LBA Format #00: Data Size: 512 Metadata Size: 0 00:29:49.085 00:29:49.085 18:16:37 -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:29:49.085 18:16:37 -- nvmf/common.sh@477 -- # nvmfcleanup 00:29:49.085 18:16:37 -- nvmf/common.sh@117 -- # sync 00:29:49.085 18:16:37 -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:49.085 18:16:37 -- nvmf/common.sh@120 -- # set +e 00:29:49.085 18:16:37 -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:49.085 18:16:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:49.085 rmmod nvme_tcp 00:29:49.085 rmmod nvme_fabrics 00:29:49.085 18:16:37 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:49.085 18:16:37 -- nvmf/common.sh@124 -- # set -e 00:29:49.085 18:16:37 -- nvmf/common.sh@125 -- # return 0 00:29:49.085 18:16:37 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:29:49.085 18:16:37 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:29:49.085 18:16:37 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:29:49.085 18:16:37 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:29:49.085 18:16:37 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:49.085 18:16:37 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:49.085 18:16:37 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:49.085 18:16:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:49.085 18:16:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:51.617 18:16:39 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:51.617 18:16:39 -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:29:51.617 18:16:39 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:29:51.617 18:16:39 -- nvmf/common.sh@675 -- # echo 0 00:29:51.617 18:16:39 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:51.617 18:16:39 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:51.617 18:16:39 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:29:51.617 18:16:39 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:51.617 18:16:39 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:29:51.617 18:16:39 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:29:51.617 18:16:40 -- nvmf/common.sh@687 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:52.552 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:29:52.552 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:29:52.552 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:29:52.553 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:29:52.553 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:29:52.553 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:29:52.553 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:29:52.553 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:29:52.553 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:29:52.553 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:29:52.553 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:29:52.553 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:29:52.553 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:29:52.553 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:29:52.553 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:29:52.553 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:29:53.486 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:29:53.743 00:29:53.743 real 0m9.787s 00:29:53.743 user 0m2.061s 00:29:53.743 sys 0m3.790s 00:29:53.744 18:16:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:53.744 18:16:42 -- common/autotest_common.sh@10 -- # set +x 00:29:53.744 ************************************ 00:29:53.744 END 
TEST nvmf_identify_kernel_target 00:29:53.744 ************************************ 00:29:53.744 18:16:42 -- nvmf/nvmf.sh@102 -- # run_test nvmf_auth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:29:53.744 18:16:42 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:29:53.744 18:16:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:53.744 18:16:42 -- common/autotest_common.sh@10 -- # set +x 00:29:53.744 ************************************ 00:29:53.744 START TEST nvmf_auth 00:29:53.744 ************************************ 00:29:53.744 18:16:42 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:29:54.002 * Looking for test storage... 00:29:54.002 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:54.002 18:16:42 -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:54.002 18:16:42 -- nvmf/common.sh@7 -- # uname -s 00:29:54.002 18:16:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:54.002 18:16:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:54.002 18:16:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:54.002 18:16:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:54.002 18:16:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:54.002 18:16:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:54.002 18:16:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:54.002 18:16:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:54.002 18:16:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:54.002 18:16:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:54.002 18:16:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:29:54.002 18:16:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:29:54.002 18:16:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:54.002 18:16:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:54.002 18:16:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:54.002 18:16:42 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:54.002 18:16:42 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:54.002 18:16:42 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:54.002 18:16:42 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:54.002 18:16:42 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:54.002 18:16:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:54.002 18:16:42 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:54.002 18:16:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:54.002 18:16:42 -- paths/export.sh@5 -- # export PATH 00:29:54.002 18:16:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:54.002 18:16:42 -- nvmf/common.sh@47 -- # : 0 00:29:54.002 18:16:42 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:54.002 18:16:42 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:54.002 18:16:42 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:54.002 18:16:42 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:54.002 18:16:42 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:54.002 18:16:42 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:54.002 18:16:42 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:54.002 18:16:42 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:54.002 18:16:42 -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:29:54.002 18:16:42 -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:29:54.002 18:16:42 -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:29:54.002 18:16:42 -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:29:54.002 18:16:42 -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:54.002 18:16:42 -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:29:54.002 18:16:42 -- host/auth.sh@21 -- # keys=() 00:29:54.002 18:16:42 -- host/auth.sh@77 -- # nvmftestinit 00:29:54.002 18:16:42 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:29:54.002 18:16:42 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:54.002 18:16:42 -- nvmf/common.sh@437 -- # prepare_net_devs 00:29:54.002 18:16:42 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:29:54.002 18:16:42 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:29:54.002 18:16:42 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:54.002 18:16:42 -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:54.002 18:16:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:54.002 18:16:42 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:29:54.002 18:16:42 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:29:54.002 18:16:42 -- nvmf/common.sh@285 -- # xtrace_disable 00:29:54.002 18:16:42 -- common/autotest_common.sh@10 -- # set +x 00:29:56.534 18:16:44 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:29:56.534 18:16:44 -- nvmf/common.sh@291 -- # pci_devs=() 00:29:56.534 18:16:44 -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:56.534 18:16:44 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:56.534 18:16:44 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:56.534 18:16:44 -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:56.534 18:16:44 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:56.534 18:16:44 -- nvmf/common.sh@295 -- # net_devs=() 00:29:56.534 18:16:44 -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:56.534 18:16:44 -- nvmf/common.sh@296 -- # e810=() 00:29:56.534 18:16:44 -- nvmf/common.sh@296 -- # local -ga e810 00:29:56.534 18:16:44 -- nvmf/common.sh@297 -- # x722=() 00:29:56.534 18:16:44 -- nvmf/common.sh@297 -- # local -ga x722 00:29:56.534 18:16:44 -- nvmf/common.sh@298 -- # mlx=() 00:29:56.534 18:16:44 -- nvmf/common.sh@298 -- # local -ga mlx 00:29:56.534 18:16:44 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:56.534 18:16:44 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:56.534 18:16:44 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:56.534 18:16:44 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:56.534 18:16:44 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:56.534 18:16:44 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:56.534 18:16:44 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:56.534 18:16:44 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:56.534 18:16:44 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:56.534 18:16:44 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:56.534 18:16:44 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:56.534 18:16:44 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:56.534 18:16:44 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:56.534 18:16:44 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:56.534 18:16:44 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:56.534 18:16:44 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:56.534 18:16:44 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:56.534 18:16:44 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:56.534 18:16:44 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:29:56.534 Found 0000:84:00.0 (0x8086 - 0x159b) 00:29:56.534 18:16:44 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:56.534 18:16:44 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:56.534 18:16:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:56.534 18:16:44 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:56.534 18:16:44 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:56.534 18:16:44 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:56.534 18:16:44 -- nvmf/common.sh@341 -- # echo 'Found 
0000:84:00.1 (0x8086 - 0x159b)' 00:29:56.534 Found 0000:84:00.1 (0x8086 - 0x159b) 00:29:56.534 18:16:44 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:56.534 18:16:44 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:56.534 18:16:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:56.534 18:16:44 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:56.534 18:16:44 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:56.534 18:16:44 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:56.534 18:16:44 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:56.534 18:16:44 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:56.534 18:16:44 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:56.534 18:16:44 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:56.534 18:16:44 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:29:56.534 18:16:44 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:56.534 18:16:44 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:29:56.534 Found net devices under 0000:84:00.0: cvl_0_0 00:29:56.534 18:16:44 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:29:56.534 18:16:44 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:56.534 18:16:44 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:56.534 18:16:44 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:29:56.534 18:16:44 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:56.534 18:16:44 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:29:56.534 Found net devices under 0000:84:00.1: cvl_0_1 00:29:56.534 18:16:44 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:29:56.534 18:16:44 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:29:56.534 18:16:44 -- nvmf/common.sh@403 -- # is_hw=yes 00:29:56.534 18:16:44 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:29:56.534 18:16:44 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:29:56.534 18:16:44 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:29:56.534 18:16:44 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:56.534 18:16:44 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:56.534 18:16:44 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:56.534 18:16:44 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:56.534 18:16:44 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:56.534 18:16:44 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:56.534 18:16:44 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:56.534 18:16:44 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:56.534 18:16:44 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:56.534 18:16:44 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:56.534 18:16:44 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:56.534 18:16:44 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:56.534 18:16:44 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:56.534 18:16:45 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:56.534 18:16:45 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:56.534 18:16:45 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:56.534 18:16:45 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:56.534 18:16:45 -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:56.534 18:16:45 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:56.534 18:16:45 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:56.534 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:56.534 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:29:56.534 00:29:56.534 --- 10.0.0.2 ping statistics --- 00:29:56.534 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:56.534 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:29:56.534 18:16:45 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:56.534 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:56.535 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:29:56.535 00:29:56.535 --- 10.0.0.1 ping statistics --- 00:29:56.535 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:56.535 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:29:56.535 18:16:45 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:56.535 18:16:45 -- nvmf/common.sh@411 -- # return 0 00:29:56.535 18:16:45 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:29:56.535 18:16:45 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:56.535 18:16:45 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:29:56.535 18:16:45 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:29:56.535 18:16:45 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:56.535 18:16:45 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:29:56.535 18:16:45 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:29:56.535 18:16:45 -- host/auth.sh@78 -- # nvmfappstart -L nvme_auth 00:29:56.535 18:16:45 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:29:56.535 18:16:45 -- common/autotest_common.sh@710 -- # xtrace_disable 00:29:56.535 18:16:45 -- common/autotest_common.sh@10 -- # set +x 00:29:56.535 18:16:45 -- nvmf/common.sh@470 -- # nvmfpid=3441607 00:29:56.535 18:16:45 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:29:56.535 18:16:45 -- nvmf/common.sh@471 -- # waitforlisten 3441607 00:29:56.535 18:16:45 -- common/autotest_common.sh@817 -- # '[' -z 3441607 ']' 00:29:56.535 18:16:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:56.535 18:16:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:56.535 18:16:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
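[editor's note] The nvmftestinit block above builds a two-port loopback topology: one E810 port (cvl_0_0) is moved into a fresh network namespace to host the target side at 10.0.0.2, while its sibling port (cvl_0_1) stays in the root namespace at 10.0.0.1, and both directions are verified with a single ping. A minimal sketch of the same setup, assuming two ports of one NIC cabled back-to-back as in this run:

ip netns add cvl_0_0_ns_spdk               # namespace that will own the target-side port
ip link set cvl_0_0 netns cvl_0_0_ns_spdk  # move one port into it
ip addr add 10.0.0.1/24 dev cvl_0_1        # the other port stays in the root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # NVMe/TCP port
ping -c 1 10.0.0.2                         # root ns -> namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

With the namespace in place, nvmfappstart then launches nvmf_tgt via "ip netns exec cvl_0_0_ns_spdk", which is why every later RPC in this log runs inside that namespace.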
00:29:56.535 18:16:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:56.535 18:16:45 -- common/autotest_common.sh@10 -- # set +x 00:29:56.535 18:16:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:56.535 18:16:45 -- common/autotest_common.sh@850 -- # return 0 00:29:56.535 18:16:45 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:29:56.535 18:16:45 -- common/autotest_common.sh@716 -- # xtrace_disable 00:29:56.535 18:16:45 -- common/autotest_common.sh@10 -- # set +x 00:29:56.535 18:16:45 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:56.535 18:16:45 -- host/auth.sh@79 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:29:56.535 18:16:45 -- host/auth.sh@81 -- # gen_key null 32 00:29:56.535 18:16:45 -- host/auth.sh@53 -- # local digest len file key 00:29:56.535 18:16:45 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:56.535 18:16:45 -- host/auth.sh@54 -- # local -A digests 00:29:56.535 18:16:45 -- host/auth.sh@56 -- # digest=null 00:29:56.535 18:16:45 -- host/auth.sh@56 -- # len=32 00:29:56.794 18:16:45 -- host/auth.sh@57 -- # xxd -p -c0 -l 16 /dev/urandom 00:29:56.794 18:16:45 -- host/auth.sh@57 -- # key=9e42031dfaecf62926688e9928649d1c 00:29:56.794 18:16:45 -- host/auth.sh@58 -- # mktemp -t spdk.key-null.XXX 00:29:56.794 18:16:45 -- host/auth.sh@58 -- # file=/tmp/spdk.key-null.kJW 00:29:56.794 18:16:45 -- host/auth.sh@59 -- # format_dhchap_key 9e42031dfaecf62926688e9928649d1c 0 00:29:56.794 18:16:45 -- nvmf/common.sh@708 -- # format_key DHHC-1 9e42031dfaecf62926688e9928649d1c 0 00:29:56.794 18:16:45 -- nvmf/common.sh@691 -- # local prefix key digest 00:29:56.794 18:16:45 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:29:56.794 18:16:45 -- nvmf/common.sh@693 -- # key=9e42031dfaecf62926688e9928649d1c 00:29:56.794 18:16:45 -- nvmf/common.sh@693 -- # digest=0 00:29:56.794 18:16:45 -- nvmf/common.sh@694 -- # python - 00:29:56.794 18:16:45 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-null.kJW 00:29:56.794 18:16:45 -- host/auth.sh@62 -- # echo /tmp/spdk.key-null.kJW 00:29:56.794 18:16:45 -- host/auth.sh@81 -- # keys[0]=/tmp/spdk.key-null.kJW 00:29:56.794 18:16:45 -- host/auth.sh@82 -- # gen_key null 48 00:29:56.794 18:16:45 -- host/auth.sh@53 -- # local digest len file key 00:29:56.794 18:16:45 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:56.794 18:16:45 -- host/auth.sh@54 -- # local -A digests 00:29:56.794 18:16:45 -- host/auth.sh@56 -- # digest=null 00:29:56.794 18:16:45 -- host/auth.sh@56 -- # len=48 00:29:56.794 18:16:45 -- host/auth.sh@57 -- # xxd -p -c0 -l 24 /dev/urandom 00:29:56.794 18:16:45 -- host/auth.sh@57 -- # key=78ec7936d7f1490a54282b6a2ea888106805a6e79053490e 00:29:56.794 18:16:45 -- host/auth.sh@58 -- # mktemp -t spdk.key-null.XXX 00:29:56.794 18:16:45 -- host/auth.sh@58 -- # file=/tmp/spdk.key-null.A2y 00:29:56.794 18:16:45 -- host/auth.sh@59 -- # format_dhchap_key 78ec7936d7f1490a54282b6a2ea888106805a6e79053490e 0 00:29:56.794 18:16:45 -- nvmf/common.sh@708 -- # format_key DHHC-1 78ec7936d7f1490a54282b6a2ea888106805a6e79053490e 0 00:29:56.794 18:16:45 -- nvmf/common.sh@691 -- # local prefix key digest 00:29:56.794 18:16:45 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:29:56.794 18:16:45 -- nvmf/common.sh@693 -- # key=78ec7936d7f1490a54282b6a2ea888106805a6e79053490e 00:29:56.795 18:16:45 -- nvmf/common.sh@693 -- # 
digest=0 00:29:56.795 18:16:45 -- nvmf/common.sh@694 -- # python - 00:29:56.795 18:16:45 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-null.A2y 00:29:56.795 18:16:45 -- host/auth.sh@62 -- # echo /tmp/spdk.key-null.A2y 00:29:56.795 18:16:45 -- host/auth.sh@82 -- # keys[1]=/tmp/spdk.key-null.A2y 00:29:56.795 18:16:45 -- host/auth.sh@83 -- # gen_key sha256 32 00:29:56.795 18:16:45 -- host/auth.sh@53 -- # local digest len file key 00:29:56.795 18:16:45 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:56.795 18:16:45 -- host/auth.sh@54 -- # local -A digests 00:29:56.795 18:16:45 -- host/auth.sh@56 -- # digest=sha256 00:29:56.795 18:16:45 -- host/auth.sh@56 -- # len=32 00:29:56.795 18:16:45 -- host/auth.sh@57 -- # xxd -p -c0 -l 16 /dev/urandom 00:29:56.795 18:16:45 -- host/auth.sh@57 -- # key=ea69599df1a24fe385719a3819a45e59 00:29:56.795 18:16:45 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha256.XXX 00:29:56.795 18:16:45 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha256.VbO 00:29:56.795 18:16:45 -- host/auth.sh@59 -- # format_dhchap_key ea69599df1a24fe385719a3819a45e59 1 00:29:56.795 18:16:45 -- nvmf/common.sh@708 -- # format_key DHHC-1 ea69599df1a24fe385719a3819a45e59 1 00:29:56.795 18:16:45 -- nvmf/common.sh@691 -- # local prefix key digest 00:29:56.795 18:16:45 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:29:56.795 18:16:45 -- nvmf/common.sh@693 -- # key=ea69599df1a24fe385719a3819a45e59 00:29:56.795 18:16:45 -- nvmf/common.sh@693 -- # digest=1 00:29:56.795 18:16:45 -- nvmf/common.sh@694 -- # python - 00:29:56.795 18:16:45 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha256.VbO 00:29:56.795 18:16:45 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha256.VbO 00:29:56.795 18:16:45 -- host/auth.sh@83 -- # keys[2]=/tmp/spdk.key-sha256.VbO 00:29:56.795 18:16:45 -- host/auth.sh@84 -- # gen_key sha384 48 00:29:56.795 18:16:45 -- host/auth.sh@53 -- # local digest len file key 00:29:56.795 18:16:45 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:56.795 18:16:45 -- host/auth.sh@54 -- # local -A digests 00:29:56.795 18:16:45 -- host/auth.sh@56 -- # digest=sha384 00:29:56.795 18:16:45 -- host/auth.sh@56 -- # len=48 00:29:56.795 18:16:45 -- host/auth.sh@57 -- # xxd -p -c0 -l 24 /dev/urandom 00:29:56.795 18:16:45 -- host/auth.sh@57 -- # key=ca18aa5bdb28ea034d2fa7c2423d559d93c051814bd1d1de 00:29:56.795 18:16:45 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha384.XXX 00:29:56.795 18:16:45 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha384.vMY 00:29:56.795 18:16:45 -- host/auth.sh@59 -- # format_dhchap_key ca18aa5bdb28ea034d2fa7c2423d559d93c051814bd1d1de 2 00:29:56.795 18:16:45 -- nvmf/common.sh@708 -- # format_key DHHC-1 ca18aa5bdb28ea034d2fa7c2423d559d93c051814bd1d1de 2 00:29:56.795 18:16:45 -- nvmf/common.sh@691 -- # local prefix key digest 00:29:56.795 18:16:45 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:29:56.795 18:16:45 -- nvmf/common.sh@693 -- # key=ca18aa5bdb28ea034d2fa7c2423d559d93c051814bd1d1de 00:29:56.795 18:16:45 -- nvmf/common.sh@693 -- # digest=2 00:29:56.795 18:16:45 -- nvmf/common.sh@694 -- # python - 00:29:56.795 18:16:45 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha384.vMY 00:29:56.795 18:16:45 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha384.vMY 00:29:56.795 18:16:45 -- host/auth.sh@84 -- # keys[3]=/tmp/spdk.key-sha384.vMY 00:29:56.795 18:16:45 -- host/auth.sh@85 -- # gen_key sha512 64 00:29:56.795 18:16:45 -- host/auth.sh@53 -- # local digest len file key 00:29:56.795 18:16:45 -- 
host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:56.795 18:16:45 -- host/auth.sh@54 -- # local -A digests 00:29:56.795 18:16:45 -- host/auth.sh@56 -- # digest=sha512 00:29:56.795 18:16:45 -- host/auth.sh@56 -- # len=64 00:29:57.053 18:16:45 -- host/auth.sh@57 -- # xxd -p -c0 -l 32 /dev/urandom 00:29:57.053 18:16:45 -- host/auth.sh@57 -- # key=1096553170c706a14e6b388f63ae425886391db04cb82570dbaef85778d2a5cd 00:29:57.053 18:16:45 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha512.XXX 00:29:57.053 18:16:45 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha512.v6v 00:29:57.053 18:16:45 -- host/auth.sh@59 -- # format_dhchap_key 1096553170c706a14e6b388f63ae425886391db04cb82570dbaef85778d2a5cd 3 00:29:57.053 18:16:45 -- nvmf/common.sh@708 -- # format_key DHHC-1 1096553170c706a14e6b388f63ae425886391db04cb82570dbaef85778d2a5cd 3 00:29:57.053 18:16:45 -- nvmf/common.sh@691 -- # local prefix key digest 00:29:57.053 18:16:45 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:29:57.053 18:16:45 -- nvmf/common.sh@693 -- # key=1096553170c706a14e6b388f63ae425886391db04cb82570dbaef85778d2a5cd 00:29:57.053 18:16:45 -- nvmf/common.sh@693 -- # digest=3 00:29:57.053 18:16:45 -- nvmf/common.sh@694 -- # python - 00:29:57.053 18:16:45 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha512.v6v 00:29:57.053 18:16:45 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha512.v6v 00:29:57.053 18:16:45 -- host/auth.sh@85 -- # keys[4]=/tmp/spdk.key-sha512.v6v 00:29:57.053 18:16:45 -- host/auth.sh@87 -- # waitforlisten 3441607 00:29:57.053 18:16:45 -- common/autotest_common.sh@817 -- # '[' -z 3441607 ']' 00:29:57.053 18:16:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:57.053 18:16:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:57.053 18:16:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:57.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
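[editor's note] Each gen_key call above draws key material from /dev/urandom with xxd and hands the hex string to format_dhchap_key, which wraps it in the DH-HMAC-CHAP secret representation DHHC-1:<digest>:<base64>: (digest 00 = secret used as-is, 01/02/03 = SHA-256/384/512 transform, matching the DHHC-1:00/01/02/03 prefixes visible in this log). A sketch of the wrapping step the inline "python -" performs, assuming the usual convention of base64-encoding the ASCII secret with a little-endian CRC-32 appended, as nvme-cli's gen-dhchap-key does:

key=$(xxd -p -c0 -l 16 /dev/urandom)   # 32 hex chars of secret material
python3 - "$key" <<'PYEOF'
import base64, sys, zlib
secret = sys.argv[1].encode()                    # the hex string itself is the secret
crc = zlib.crc32(secret).to_bytes(4, "little")   # 4-byte CRC-32 integrity tag
# "00" here means no hash transform; a sha256/384/512 key would use 01/02/03
print("DHHC-1:00:" + base64.b64encode(secret + crc).decode() + ":")
PYEOF

The resulting file is chmod'ed 0600 and its path stashed in keys[N], which is what the keyring_file_add_key RPCs below register with the target.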
00:29:57.053 18:16:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:57.053 18:16:45 -- common/autotest_common.sh@10 -- # set +x 00:29:57.311 18:16:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:57.311 18:16:46 -- common/autotest_common.sh@850 -- # return 0 00:29:57.311 18:16:46 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:29:57.311 18:16:46 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.kJW 00:29:57.311 18:16:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:57.311 18:16:46 -- common/autotest_common.sh@10 -- # set +x 00:29:57.311 18:16:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:57.311 18:16:46 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:29:57.311 18:16:46 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.A2y 00:29:57.311 18:16:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:57.311 18:16:46 -- common/autotest_common.sh@10 -- # set +x 00:29:57.311 18:16:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:57.311 18:16:46 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:29:57.311 18:16:46 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.VbO 00:29:57.311 18:16:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:57.311 18:16:46 -- common/autotest_common.sh@10 -- # set +x 00:29:57.311 18:16:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:57.311 18:16:46 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:29:57.311 18:16:46 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.vMY 00:29:57.311 18:16:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:57.311 18:16:46 -- common/autotest_common.sh@10 -- # set +x 00:29:57.311 18:16:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:57.311 18:16:46 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:29:57.311 18:16:46 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.v6v 00:29:57.311 18:16:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:57.311 18:16:46 -- common/autotest_common.sh@10 -- # set +x 00:29:57.311 18:16:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:57.311 18:16:46 -- host/auth.sh@92 -- # nvmet_auth_init 00:29:57.311 18:16:46 -- host/auth.sh@35 -- # get_main_ns_ip 00:29:57.311 18:16:46 -- nvmf/common.sh@717 -- # local ip 00:29:57.311 18:16:46 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:57.311 18:16:46 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:57.311 18:16:46 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:57.311 18:16:46 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:57.311 18:16:46 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:57.311 18:16:46 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:57.311 18:16:46 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:57.311 18:16:46 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:57.311 18:16:46 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:57.311 18:16:46 -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:29:57.311 18:16:46 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:29:57.311 18:16:46 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:29:57.311 18:16:46 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:57.311 18:16:46 -- nvmf/common.sh@625 -- # 
kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:29:57.311 18:16:46 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:29:57.311 18:16:46 -- nvmf/common.sh@628 -- # local block nvme 00:29:57.311 18:16:46 -- nvmf/common.sh@630 -- # [[ ! -e /sys/module/nvmet ]] 00:29:57.311 18:16:46 -- nvmf/common.sh@631 -- # modprobe nvmet 00:29:57.311 18:16:46 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:29:57.311 18:16:46 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:29:58.686 Waiting for block devices as requested 00:29:58.686 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:29:58.943 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:29:58.943 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:29:58.943 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:29:58.944 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:29:59.202 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:29:59.202 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:29:59.202 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:29:59.202 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:29:59.460 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:29:59.460 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:29:59.460 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:29:59.718 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:29:59.718 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:29:59.718 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:29:59.718 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:29:59.975 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:30:00.233 18:16:49 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:30:00.233 18:16:49 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:30:00.233 18:16:49 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:30:00.233 18:16:49 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:30:00.233 18:16:49 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:30:00.233 18:16:49 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:30:00.233 18:16:49 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:30:00.233 18:16:49 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:30:00.233 18:16:49 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:30:00.491 No valid GPT data, bailing 00:30:00.491 18:16:49 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:30:00.491 18:16:49 -- scripts/common.sh@391 -- # pt= 00:30:00.491 18:16:49 -- scripts/common.sh@392 -- # return 1 00:30:00.491 18:16:49 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:30:00.491 18:16:49 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme0n1 ]] 00:30:00.491 18:16:49 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:30:00.491 18:16:49 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:30:00.491 18:16:49 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:30:00.491 18:16:49 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:30:00.491 18:16:49 -- nvmf/common.sh@656 -- # echo 1 00:30:00.491 18:16:49 -- nvmf/common.sh@657 -- # echo /dev/nvme0n1 00:30:00.492 18:16:49 -- nvmf/common.sh@658 -- # echo 1 00:30:00.492 18:16:49 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:30:00.492 18:16:49 -- nvmf/common.sh@661 -- # echo tcp 00:30:00.492 18:16:49 -- 
nvmf/common.sh@662 -- # echo 4420 00:30:00.492 18:16:49 -- nvmf/common.sh@663 -- # echo ipv4 00:30:00.492 18:16:49 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:30:00.492 18:16:49 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.1 -t tcp -s 4420 00:30:00.492 00:30:00.492 Discovery Log Number of Records 2, Generation counter 2 00:30:00.492 =====Discovery Log Entry 0====== 00:30:00.492 trtype: tcp 00:30:00.492 adrfam: ipv4 00:30:00.492 subtype: current discovery subsystem 00:30:00.492 treq: not specified, sq flow control disable supported 00:30:00.492 portid: 1 00:30:00.492 trsvcid: 4420 00:30:00.492 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:30:00.492 traddr: 10.0.0.1 00:30:00.492 eflags: none 00:30:00.492 sectype: none 00:30:00.492 =====Discovery Log Entry 1====== 00:30:00.492 trtype: tcp 00:30:00.492 adrfam: ipv4 00:30:00.492 subtype: nvme subsystem 00:30:00.492 treq: not specified, sq flow control disable supported 00:30:00.492 portid: 1 00:30:00.492 trsvcid: 4420 00:30:00.492 subnqn: nqn.2024-02.io.spdk:cnode0 00:30:00.492 traddr: 10.0.0.1 00:30:00.492 eflags: none 00:30:00.492 sectype: none 00:30:00.492 18:16:49 -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:30:00.492 18:16:49 -- host/auth.sh@37 -- # echo 0 00:30:00.492 18:16:49 -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:30:00.492 18:16:49 -- host/auth.sh@95 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:30:00.492 18:16:49 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:00.492 18:16:49 -- host/auth.sh@44 -- # digest=sha256 00:30:00.492 18:16:49 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:00.492 18:16:49 -- host/auth.sh@44 -- # keyid=1 00:30:00.492 18:16:49 -- host/auth.sh@45 -- # key=DHHC-1:00:NzhlYzc5MzZkN2YxNDkwYTU0MjgyYjZhMmVhODg4MTA2ODA1YTZlNzkwNTM0OTBl8OWd+Q==: 00:30:00.492 18:16:49 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:30:00.492 18:16:49 -- host/auth.sh@48 -- # echo ffdhe2048 00:30:00.492 18:16:49 -- host/auth.sh@49 -- # echo DHHC-1:00:NzhlYzc5MzZkN2YxNDkwYTU0MjgyYjZhMmVhODg4MTA2ODA1YTZlNzkwNTM0OTBl8OWd+Q==: 00:30:00.492 18:16:49 -- host/auth.sh@100 -- # IFS=, 00:30:00.492 18:16:49 -- host/auth.sh@101 -- # printf %s sha256,sha384,sha512 00:30:00.492 18:16:49 -- host/auth.sh@100 -- # IFS=, 00:30:00.492 18:16:49 -- host/auth.sh@101 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:30:00.492 18:16:49 -- host/auth.sh@100 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:30:00.492 18:16:49 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:00.492 18:16:49 -- host/auth.sh@68 -- # digest=sha256,sha384,sha512 00:30:00.492 18:16:49 -- host/auth.sh@68 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:30:00.492 18:16:49 -- host/auth.sh@68 -- # keyid=1 00:30:00.492 18:16:49 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:30:00.492 18:16:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:00.492 18:16:49 -- common/autotest_common.sh@10 -- # set +x 00:30:00.492 18:16:49 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:00.492 18:16:49 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:00.492 18:16:49 -- nvmf/common.sh@717 -- # local ip 00:30:00.492 18:16:49 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:00.492 18:16:49 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:00.492 18:16:49 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:00.492 18:16:49 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:00.492 18:16:49 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:00.492 18:16:49 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:00.492 18:16:49 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:00.492 18:16:49 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:00.492 18:16:49 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:00.492 18:16:49 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:30:00.492 18:16:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:00.492 18:16:49 -- common/autotest_common.sh@10 -- # set +x 00:30:00.750 nvme0n1 00:30:00.750 18:16:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:00.750 18:16:49 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:00.750 18:16:49 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:00.750 18:16:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:00.750 18:16:49 -- common/autotest_common.sh@10 -- # set +x 00:30:00.750 18:16:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:00.750 18:16:49 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:00.750 18:16:49 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:00.750 18:16:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:00.750 18:16:49 -- common/autotest_common.sh@10 -- # set +x 00:30:00.750 18:16:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:00.750 18:16:49 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:30:00.750 18:16:49 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:30:00.750 18:16:49 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:00.750 18:16:49 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:30:00.750 18:16:49 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:00.750 18:16:49 -- host/auth.sh@44 -- # digest=sha256 00:30:00.750 18:16:49 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:00.750 18:16:49 -- host/auth.sh@44 -- # keyid=0 00:30:00.750 18:16:49 -- host/auth.sh@45 -- # key=DHHC-1:00:OWU0MjAzMWRmYWVjZjYyOTI2Njg4ZTk5Mjg2NDlkMWM3inbY: 00:30:00.750 18:16:49 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:30:00.750 18:16:49 -- host/auth.sh@48 -- # echo ffdhe2048 00:30:00.750 18:16:49 -- host/auth.sh@49 -- # echo DHHC-1:00:OWU0MjAzMWRmYWVjZjYyOTI2Njg4ZTk5Mjg2NDlkMWM3inbY: 00:30:00.750 18:16:49 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 0 00:30:00.750 18:16:49 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:00.750 18:16:49 -- host/auth.sh@68 -- # digest=sha256 00:30:00.750 18:16:49 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:30:00.750 18:16:49 -- host/auth.sh@68 -- # keyid=0 00:30:00.750 18:16:49 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:30:00.750 18:16:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:00.750 18:16:49 -- common/autotest_common.sh@10 -- # set +x 00:30:00.750 18:16:49 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:00.750 18:16:49 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:00.750 18:16:49 -- nvmf/common.sh@717 -- # local ip 00:30:00.750 18:16:49 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:00.750 18:16:49 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:00.750 18:16:49 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:00.750 18:16:49 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:00.750 18:16:49 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:00.750 18:16:49 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:00.750 18:16:49 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:00.750 18:16:49 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:00.750 18:16:49 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:00.750 18:16:49 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:30:00.750 18:16:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:00.750 18:16:49 -- common/autotest_common.sh@10 -- # set +x 00:30:00.750 nvme0n1 00:30:00.750 18:16:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:01.026 18:16:49 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:01.026 18:16:49 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:01.026 18:16:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:01.026 18:16:49 -- common/autotest_common.sh@10 -- # set +x 00:30:01.026 18:16:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:01.026 18:16:49 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:01.026 18:16:49 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:01.026 18:16:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:01.026 18:16:49 -- common/autotest_common.sh@10 -- # set +x 00:30:01.026 18:16:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:01.026 18:16:49 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:01.026 18:16:49 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:30:01.026 18:16:49 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:01.026 18:16:49 -- host/auth.sh@44 -- # digest=sha256 00:30:01.026 18:16:49 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:01.026 18:16:49 -- host/auth.sh@44 -- # keyid=1 00:30:01.026 18:16:49 -- host/auth.sh@45 -- # key=DHHC-1:00:NzhlYzc5MzZkN2YxNDkwYTU0MjgyYjZhMmVhODg4MTA2ODA1YTZlNzkwNTM0OTBl8OWd+Q==: 00:30:01.026 18:16:49 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:30:01.026 18:16:49 -- host/auth.sh@48 -- # echo ffdhe2048 00:30:01.026 18:16:49 -- host/auth.sh@49 -- # echo DHHC-1:00:NzhlYzc5MzZkN2YxNDkwYTU0MjgyYjZhMmVhODg4MTA2ODA1YTZlNzkwNTM0OTBl8OWd+Q==: 00:30:01.026 18:16:49 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 1 00:30:01.026 18:16:49 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:01.026 18:16:49 -- host/auth.sh@68 -- # digest=sha256 00:30:01.026 18:16:49 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:30:01.026 18:16:49 -- host/auth.sh@68 -- # keyid=1 00:30:01.026 18:16:49 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:30:01.026 18:16:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:01.026 18:16:49 -- common/autotest_common.sh@10 -- # set +x 00:30:01.026 18:16:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:01.026 18:16:49 -- host/auth.sh@70 -- # get_main_ns_ip 
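[editor's note] xtrace does not show where the bare echo calls in configure_kernel_target and nvmet_auth_init above are redirected. A sketch of the kernel target they assemble, with the redirect targets assumed from the stock nvmet configfs layout (subsystem + namespace backed by /dev/nvme0n1, a TCP port on 10.0.0.1:4420, and an explicit host allow-list so only nqn.2024-02.io.spdk:host0 may authenticate):

subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
port=/sys/kernel/config/nvmet/ports/1
host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

mkdir "$subsys" "$subsys/namespaces/1" "$port" "$host"
echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"
echo 0             > "$subsys/attr_allow_any_host"       # only allowed_hosts may connect
echo /dev/nvme0n1  > "$subsys/namespaces/1/device_path"  # the non-zoned NVMe found above
echo 1             > "$subsys/namespaces/1/enable"
echo 10.0.0.1      > "$port/addr_traddr"
echo tcp           > "$port/addr_trtype"
echo 4420          > "$port/addr_trsvcid"
echo ipv4          > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"       # expose the subsystem on the port
ln -s "$host"   "$subsys/allowed_hosts/"  # pin the host NQN to it

This is the state the "nvme discover" output above reflects: the discovery subsystem plus nqn.2024-02.io.spdk:cnode0, both on 10.0.0.1:4420.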
00:30:01.026 18:16:49 -- nvmf/common.sh@717 -- # local ip 00:30:01.026 18:16:49 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:01.026 18:16:49 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:01.026 18:16:49 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:01.027 18:16:49 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:01.027 18:16:49 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:01.027 18:16:49 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:01.027 18:16:49 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:01.027 18:16:49 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:01.027 18:16:49 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:01.027 18:16:49 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:30:01.027 18:16:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:01.027 18:16:49 -- common/autotest_common.sh@10 -- # set +x 00:30:01.027 nvme0n1 00:30:01.027 18:16:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:01.027 18:16:49 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:01.027 18:16:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:01.027 18:16:49 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:01.027 18:16:49 -- common/autotest_common.sh@10 -- # set +x 00:30:01.027 18:16:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:01.027 18:16:49 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:01.027 18:16:49 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:01.027 18:16:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:01.027 18:16:49 -- common/autotest_common.sh@10 -- # set +x 00:30:01.027 18:16:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:01.027 18:16:49 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:01.027 18:16:49 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:30:01.027 18:16:49 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:01.027 18:16:49 -- host/auth.sh@44 -- # digest=sha256 00:30:01.027 18:16:49 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:01.027 18:16:49 -- host/auth.sh@44 -- # keyid=2 00:30:01.027 18:16:49 -- host/auth.sh@45 -- # key=DHHC-1:01:ZWE2OTU5OWRmMWEyNGZlMzg1NzE5YTM4MTlhNDVlNTmx4RsQ: 00:30:01.027 18:16:49 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:30:01.027 18:16:49 -- host/auth.sh@48 -- # echo ffdhe2048 00:30:01.027 18:16:49 -- host/auth.sh@49 -- # echo DHHC-1:01:ZWE2OTU5OWRmMWEyNGZlMzg1NzE5YTM4MTlhNDVlNTmx4RsQ: 00:30:01.027 18:16:49 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 2 00:30:01.027 18:16:49 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:01.027 18:16:49 -- host/auth.sh@68 -- # digest=sha256 00:30:01.027 18:16:49 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:30:01.027 18:16:49 -- host/auth.sh@68 -- # keyid=2 00:30:01.027 18:16:49 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:30:01.027 18:16:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:01.027 18:16:49 -- common/autotest_common.sh@10 -- # set +x 00:30:01.287 18:16:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:01.287 18:16:49 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:01.287 18:16:49 -- nvmf/common.sh@717 -- # local ip 00:30:01.287 18:16:49 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:01.287 18:16:49 -- nvmf/common.sh@718 
-- # local -A ip_candidates 00:30:01.287 18:16:49 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:01.287 18:16:49 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:01.287 18:16:49 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:01.287 18:16:49 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:01.287 18:16:49 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:01.287 18:16:49 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:01.287 18:16:49 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:01.287 18:16:49 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:30:01.287 18:16:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:01.287 18:16:49 -- common/autotest_common.sh@10 -- # set +x 00:30:01.287 nvme0n1 00:30:01.287 18:16:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:01.287 18:16:50 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:01.287 18:16:50 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:01.287 18:16:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:01.287 18:16:50 -- common/autotest_common.sh@10 -- # set +x 00:30:01.287 18:16:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:01.287 18:16:50 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:01.287 18:16:50 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:01.287 18:16:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:01.287 18:16:50 -- common/autotest_common.sh@10 -- # set +x 00:30:01.287 18:16:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:01.287 18:16:50 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:01.287 18:16:50 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:30:01.287 18:16:50 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:01.287 18:16:50 -- host/auth.sh@44 -- # digest=sha256 00:30:01.287 18:16:50 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:01.287 18:16:50 -- host/auth.sh@44 -- # keyid=3 00:30:01.288 18:16:50 -- host/auth.sh@45 -- # key=DHHC-1:02:Y2ExOGFhNWJkYjI4ZWEwMzRkMmZhN2MyNDIzZDU1OWQ5M2MwNTE4MTRiZDFkMWRlDS+lew==: 00:30:01.288 18:16:50 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:30:01.288 18:16:50 -- host/auth.sh@48 -- # echo ffdhe2048 00:30:01.288 18:16:50 -- host/auth.sh@49 -- # echo DHHC-1:02:Y2ExOGFhNWJkYjI4ZWEwMzRkMmZhN2MyNDIzZDU1OWQ5M2MwNTE4MTRiZDFkMWRlDS+lew==: 00:30:01.288 18:16:50 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 3 00:30:01.288 18:16:50 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:01.288 18:16:50 -- host/auth.sh@68 -- # digest=sha256 00:30:01.288 18:16:50 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:30:01.288 18:16:50 -- host/auth.sh@68 -- # keyid=3 00:30:01.288 18:16:50 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:30:01.288 18:16:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:01.288 18:16:50 -- common/autotest_common.sh@10 -- # set +x 00:30:01.288 18:16:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:01.288 18:16:50 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:01.288 18:16:50 -- nvmf/common.sh@717 -- # local ip 00:30:01.288 18:16:50 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:01.288 18:16:50 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:01.288 18:16:50 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 
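[editor's note] Each nvmet_auth_set_key call above (host/auth.sh@47-49) pushes the current round's digest, DH group, and secret into the kernel's host entry; the redirections are again invisible in the trace. A sketch under the assumption that the standard nvmet dhchap_* host attributes are the targets:

host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha256)'            > "$host/dhchap_hash"     # digest for this round
echo ffdhe2048                 > "$host/dhchap_dhgroup"  # FFDHE group for this round
echo 'DHHC-1:01:<base64>:'     > "$host/dhchap_key"      # placeholder; the keyid under test

Rewriting these three attributes between iterations is what lets the same subsystem exercise every digest/dhgroup/key combination without tearing the target down.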
00:30:01.288 18:16:50 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:01.288 18:16:50 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:01.288 18:16:50 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:01.288 18:16:50 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:01.288 18:16:50 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:01.288 18:16:50 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:01.288 18:16:50 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:30:01.288 18:16:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:01.288 18:16:50 -- common/autotest_common.sh@10 -- # set +x 00:30:01.545 nvme0n1 00:30:01.545 18:16:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:01.545 18:16:50 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:01.545 18:16:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:01.545 18:16:50 -- common/autotest_common.sh@10 -- # set +x 00:30:01.545 18:16:50 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:01.545 18:16:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:01.545 18:16:50 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:01.545 18:16:50 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:01.545 18:16:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:01.545 18:16:50 -- common/autotest_common.sh@10 -- # set +x 00:30:01.545 18:16:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:01.545 18:16:50 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:01.545 18:16:50 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:30:01.545 18:16:50 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:01.545 18:16:50 -- host/auth.sh@44 -- # digest=sha256 00:30:01.545 18:16:50 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:01.545 18:16:50 -- host/auth.sh@44 -- # keyid=4 00:30:01.545 18:16:50 -- host/auth.sh@45 -- # key=DHHC-1:03:MTA5NjU1MzE3MGM3MDZhMTRlNmIzODhmNjNhZTQyNTg4NjM5MWRiMDRjYjgyNTcwZGJhZWY4NTc3OGQyYTVjZERQXrk=: 00:30:01.545 18:16:50 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:30:01.545 18:16:50 -- host/auth.sh@48 -- # echo ffdhe2048 00:30:01.545 18:16:50 -- host/auth.sh@49 -- # echo DHHC-1:03:MTA5NjU1MzE3MGM3MDZhMTRlNmIzODhmNjNhZTQyNTg4NjM5MWRiMDRjYjgyNTcwZGJhZWY4NTc3OGQyYTVjZERQXrk=: 00:30:01.545 18:16:50 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 4 00:30:01.545 18:16:50 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:01.545 18:16:50 -- host/auth.sh@68 -- # digest=sha256 00:30:01.545 18:16:50 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:30:01.545 18:16:50 -- host/auth.sh@68 -- # keyid=4 00:30:01.545 18:16:50 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:30:01.545 18:16:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:01.545 18:16:50 -- common/autotest_common.sh@10 -- # set +x 00:30:01.545 18:16:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:01.545 18:16:50 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:01.545 18:16:50 -- nvmf/common.sh@717 -- # local ip 00:30:01.545 18:16:50 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:01.545 18:16:50 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:01.545 18:16:50 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:01.545 18:16:50 -- nvmf/common.sh@721 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:01.545 18:16:50 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:01.545 18:16:50 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:01.545 18:16:50 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:01.545 18:16:50 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:01.545 18:16:50 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:01.545 18:16:50 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:01.545 18:16:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:01.546 18:16:50 -- common/autotest_common.sh@10 -- # set +x 00:30:01.546 nvme0n1 00:30:01.546 18:16:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:01.546 18:16:50 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:01.546 18:16:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:01.546 18:16:50 -- common/autotest_common.sh@10 -- # set +x 00:30:01.546 18:16:50 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:01.804 18:16:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:01.804 18:16:50 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:01.804 18:16:50 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:01.804 18:16:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:01.804 18:16:50 -- common/autotest_common.sh@10 -- # set +x 00:30:01.804 18:16:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:01.804 18:16:50 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:30:01.804 18:16:50 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:01.804 18:16:50 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:30:01.804 18:16:50 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:01.804 18:16:50 -- host/auth.sh@44 -- # digest=sha256 00:30:01.804 18:16:50 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:01.804 18:16:50 -- host/auth.sh@44 -- # keyid=0 00:30:01.804 18:16:50 -- host/auth.sh@45 -- # key=DHHC-1:00:OWU0MjAzMWRmYWVjZjYyOTI2Njg4ZTk5Mjg2NDlkMWM3inbY: 00:30:01.804 18:16:50 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:30:01.804 18:16:50 -- host/auth.sh@48 -- # echo ffdhe3072 00:30:01.804 18:16:50 -- host/auth.sh@49 -- # echo DHHC-1:00:OWU0MjAzMWRmYWVjZjYyOTI2Njg4ZTk5Mjg2NDlkMWM3inbY: 00:30:01.804 18:16:50 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 0 00:30:01.804 18:16:50 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:01.804 18:16:50 -- host/auth.sh@68 -- # digest=sha256 00:30:01.804 18:16:50 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:30:01.804 18:16:50 -- host/auth.sh@68 -- # keyid=0 00:30:01.804 18:16:50 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:30:01.804 18:16:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:01.804 18:16:50 -- common/autotest_common.sh@10 -- # set +x 00:30:01.804 18:16:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:01.804 18:16:50 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:01.804 18:16:50 -- nvmf/common.sh@717 -- # local ip 00:30:01.804 18:16:50 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:01.804 18:16:50 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:01.804 18:16:50 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:01.804 18:16:50 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:01.804 18:16:50 -- nvmf/common.sh@723 -- # 
[[ -z tcp ]] 00:30:01.804 18:16:50 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:01.804 18:16:50 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:01.804 18:16:50 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:01.804 18:16:50 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:01.804 18:16:50 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:30:01.804 18:16:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:01.804 18:16:50 -- common/autotest_common.sh@10 -- # set +x 00:30:02.062 nvme0n1 00:30:02.062 18:16:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:02.062 18:16:50 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:02.062 18:16:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:02.062 18:16:50 -- common/autotest_common.sh@10 -- # set +x 00:30:02.062 18:16:50 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:02.062 18:16:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:02.062 18:16:50 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:02.062 18:16:50 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:02.062 18:16:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:02.062 18:16:50 -- common/autotest_common.sh@10 -- # set +x 00:30:02.062 18:16:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:02.062 18:16:50 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:02.062 18:16:50 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:30:02.062 18:16:50 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:02.062 18:16:50 -- host/auth.sh@44 -- # digest=sha256 00:30:02.062 18:16:50 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:02.062 18:16:50 -- host/auth.sh@44 -- # keyid=1 00:30:02.062 18:16:50 -- host/auth.sh@45 -- # key=DHHC-1:00:NzhlYzc5MzZkN2YxNDkwYTU0MjgyYjZhMmVhODg4MTA2ODA1YTZlNzkwNTM0OTBl8OWd+Q==: 00:30:02.062 18:16:50 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:30:02.062 18:16:50 -- host/auth.sh@48 -- # echo ffdhe3072 00:30:02.062 18:16:50 -- host/auth.sh@49 -- # echo DHHC-1:00:NzhlYzc5MzZkN2YxNDkwYTU0MjgyYjZhMmVhODg4MTA2ODA1YTZlNzkwNTM0OTBl8OWd+Q==: 00:30:02.062 18:16:50 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 1 00:30:02.062 18:16:50 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:02.062 18:16:50 -- host/auth.sh@68 -- # digest=sha256 00:30:02.062 18:16:50 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:30:02.062 18:16:50 -- host/auth.sh@68 -- # keyid=1 00:30:02.062 18:16:50 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:30:02.062 18:16:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:02.062 18:16:50 -- common/autotest_common.sh@10 -- # set +x 00:30:02.062 18:16:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:02.062 18:16:50 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:02.062 18:16:50 -- nvmf/common.sh@717 -- # local ip 00:30:02.062 18:16:50 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:02.062 18:16:50 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:02.062 18:16:50 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:02.062 18:16:50 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:02.062 18:16:50 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:02.062 18:16:50 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:02.062 18:16:50 -- 
nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:02.062 18:16:50 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:02.062 18:16:50 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:02.062 18:16:50 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:30:02.062 18:16:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:02.062 18:16:50 -- common/autotest_common.sh@10 -- # set +x 00:30:02.320 nvme0n1 00:30:02.320 18:16:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:02.320 18:16:51 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:02.320 18:16:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:02.320 18:16:51 -- common/autotest_common.sh@10 -- # set +x 00:30:02.320 18:16:51 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:02.320 18:16:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:02.320 18:16:51 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:02.320 18:16:51 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:02.320 18:16:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:02.320 18:16:51 -- common/autotest_common.sh@10 -- # set +x 00:30:02.320 18:16:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:02.320 18:16:51 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:02.320 18:16:51 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:30:02.320 18:16:51 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:02.320 18:16:51 -- host/auth.sh@44 -- # digest=sha256 00:30:02.320 18:16:51 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:02.320 18:16:51 -- host/auth.sh@44 -- # keyid=2 00:30:02.320 18:16:51 -- host/auth.sh@45 -- # key=DHHC-1:01:ZWE2OTU5OWRmMWEyNGZlMzg1NzE5YTM4MTlhNDVlNTmx4RsQ: 00:30:02.320 18:16:51 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:30:02.320 18:16:51 -- host/auth.sh@48 -- # echo ffdhe3072 00:30:02.320 18:16:51 -- host/auth.sh@49 -- # echo DHHC-1:01:ZWE2OTU5OWRmMWEyNGZlMzg1NzE5YTM4MTlhNDVlNTmx4RsQ: 00:30:02.320 18:16:51 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 2 00:30:02.320 18:16:51 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:02.320 18:16:51 -- host/auth.sh@68 -- # digest=sha256 00:30:02.320 18:16:51 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:30:02.320 18:16:51 -- host/auth.sh@68 -- # keyid=2 00:30:02.321 18:16:51 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:30:02.321 18:16:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:02.321 18:16:51 -- common/autotest_common.sh@10 -- # set +x 00:30:02.321 18:16:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:02.321 18:16:51 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:02.321 18:16:51 -- nvmf/common.sh@717 -- # local ip 00:30:02.321 18:16:51 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:02.321 18:16:51 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:02.321 18:16:51 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:02.321 18:16:51 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:02.321 18:16:51 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:02.321 18:16:51 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:02.321 18:16:51 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:02.321 18:16:51 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:02.321 18:16:51 -- nvmf/common.sh@731 -- # echo 
10.0.0.1 00:30:02.321 18:16:51 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:30:02.321 18:16:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:02.321 18:16:51 -- common/autotest_common.sh@10 -- # set +x 00:30:02.321 nvme0n1 00:30:02.321 18:16:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:02.578 18:16:51 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:02.579 18:16:51 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:02.579 18:16:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:02.579 18:16:51 -- common/autotest_common.sh@10 -- # set +x 00:30:02.579 18:16:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:02.579 18:16:51 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:02.579 18:16:51 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:02.579 18:16:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:02.579 18:16:51 -- common/autotest_common.sh@10 -- # set +x 00:30:02.579 18:16:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:02.579 18:16:51 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:02.579 18:16:51 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:30:02.579 18:16:51 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:02.579 18:16:51 -- host/auth.sh@44 -- # digest=sha256 00:30:02.579 18:16:51 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:02.579 18:16:51 -- host/auth.sh@44 -- # keyid=3 00:30:02.579 18:16:51 -- host/auth.sh@45 -- # key=DHHC-1:02:Y2ExOGFhNWJkYjI4ZWEwMzRkMmZhN2MyNDIzZDU1OWQ5M2MwNTE4MTRiZDFkMWRlDS+lew==: 00:30:02.579 18:16:51 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:30:02.579 18:16:51 -- host/auth.sh@48 -- # echo ffdhe3072 00:30:02.579 18:16:51 -- host/auth.sh@49 -- # echo DHHC-1:02:Y2ExOGFhNWJkYjI4ZWEwMzRkMmZhN2MyNDIzZDU1OWQ5M2MwNTE4MTRiZDFkMWRlDS+lew==: 00:30:02.579 18:16:51 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 3 00:30:02.579 18:16:51 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:02.579 18:16:51 -- host/auth.sh@68 -- # digest=sha256 00:30:02.579 18:16:51 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:30:02.579 18:16:51 -- host/auth.sh@68 -- # keyid=3 00:30:02.579 18:16:51 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:30:02.579 18:16:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:02.579 18:16:51 -- common/autotest_common.sh@10 -- # set +x 00:30:02.579 18:16:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:02.579 18:16:51 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:02.579 18:16:51 -- nvmf/common.sh@717 -- # local ip 00:30:02.579 18:16:51 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:02.579 18:16:51 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:02.579 18:16:51 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:02.579 18:16:51 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:02.579 18:16:51 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:02.579 18:16:51 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:02.579 18:16:51 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:02.579 18:16:51 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:02.579 18:16:51 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:02.579 18:16:51 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:30:02.579 18:16:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:02.579 18:16:51 -- common/autotest_common.sh@10 -- # set +x 00:30:02.579 nvme0n1 00:30:02.579 18:16:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:02.579 18:16:51 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:02.579 18:16:51 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:02.579 18:16:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:02.579 18:16:51 -- common/autotest_common.sh@10 -- # set +x 00:30:02.579 18:16:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:02.837 18:16:51 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:02.837 18:16:51 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:02.837 18:16:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:02.837 18:16:51 -- common/autotest_common.sh@10 -- # set +x 00:30:02.837 18:16:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:02.837 18:16:51 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:02.837 18:16:51 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:30:02.837 18:16:51 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:02.837 18:16:51 -- host/auth.sh@44 -- # digest=sha256 00:30:02.837 18:16:51 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:02.837 18:16:51 -- host/auth.sh@44 -- # keyid=4 00:30:02.837 18:16:51 -- host/auth.sh@45 -- # key=DHHC-1:03:MTA5NjU1MzE3MGM3MDZhMTRlNmIzODhmNjNhZTQyNTg4NjM5MWRiMDRjYjgyNTcwZGJhZWY4NTc3OGQyYTVjZERQXrk=: 00:30:02.837 18:16:51 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:30:02.837 18:16:51 -- host/auth.sh@48 -- # echo ffdhe3072 00:30:02.837 18:16:51 -- host/auth.sh@49 -- # echo DHHC-1:03:MTA5NjU1MzE3MGM3MDZhMTRlNmIzODhmNjNhZTQyNTg4NjM5MWRiMDRjYjgyNTcwZGJhZWY4NTc3OGQyYTVjZERQXrk=: 00:30:02.837 18:16:51 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 4 00:30:02.837 18:16:51 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:02.837 18:16:51 -- host/auth.sh@68 -- # digest=sha256 00:30:02.837 18:16:51 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:30:02.837 18:16:51 -- host/auth.sh@68 -- # keyid=4 00:30:02.837 18:16:51 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:30:02.837 18:16:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:02.837 18:16:51 -- common/autotest_common.sh@10 -- # set +x 00:30:02.837 18:16:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:02.837 18:16:51 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:02.837 18:16:51 -- nvmf/common.sh@717 -- # local ip 00:30:02.837 18:16:51 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:02.837 18:16:51 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:02.837 18:16:51 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:02.837 18:16:51 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:02.837 18:16:51 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:02.837 18:16:51 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:02.837 18:16:51 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:02.837 18:16:51 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:02.837 18:16:51 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:02.837 18:16:51 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key4 00:30:02.837 18:16:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:02.837 18:16:51 -- common/autotest_common.sh@10 -- # set +x 00:30:02.837 nvme0n1 00:30:02.837 18:16:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:02.837 18:16:51 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:02.837 18:16:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:02.837 18:16:51 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:02.837 18:16:51 -- common/autotest_common.sh@10 -- # set +x 00:30:02.837 18:16:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:02.837 18:16:51 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:02.837 18:16:51 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:02.837 18:16:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:02.837 18:16:51 -- common/autotest_common.sh@10 -- # set +x 00:30:03.095 18:16:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:03.095 18:16:51 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:30:03.095 18:16:51 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:03.095 18:16:51 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:30:03.095 18:16:51 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:03.095 18:16:51 -- host/auth.sh@44 -- # digest=sha256 00:30:03.095 18:16:51 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:03.095 18:16:51 -- host/auth.sh@44 -- # keyid=0 00:30:03.095 18:16:51 -- host/auth.sh@45 -- # key=DHHC-1:00:OWU0MjAzMWRmYWVjZjYyOTI2Njg4ZTk5Mjg2NDlkMWM3inbY: 00:30:03.095 18:16:51 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:30:03.095 18:16:51 -- host/auth.sh@48 -- # echo ffdhe4096 00:30:03.095 18:16:51 -- host/auth.sh@49 -- # echo DHHC-1:00:OWU0MjAzMWRmYWVjZjYyOTI2Njg4ZTk5Mjg2NDlkMWM3inbY: 00:30:03.095 18:16:51 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 0 00:30:03.095 18:16:51 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:03.095 18:16:51 -- host/auth.sh@68 -- # digest=sha256 00:30:03.095 18:16:51 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:30:03.095 18:16:51 -- host/auth.sh@68 -- # keyid=0 00:30:03.095 18:16:51 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:30:03.095 18:16:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:03.095 18:16:51 -- common/autotest_common.sh@10 -- # set +x 00:30:03.095 18:16:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:03.095 18:16:51 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:03.095 18:16:51 -- nvmf/common.sh@717 -- # local ip 00:30:03.095 18:16:51 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:03.095 18:16:51 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:03.095 18:16:51 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:03.095 18:16:51 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:03.095 18:16:51 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:03.095 18:16:51 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:03.095 18:16:51 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:03.095 18:16:51 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:03.095 18:16:51 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:03.095 18:16:51 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:30:03.095 18:16:51 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:30:03.095 18:16:51 -- common/autotest_common.sh@10 -- # set +x 00:30:03.353 nvme0n1 00:30:03.353 18:16:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:03.353 18:16:52 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:03.353 18:16:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:03.353 18:16:52 -- common/autotest_common.sh@10 -- # set +x 00:30:03.353 18:16:52 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:03.353 18:16:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:03.353 18:16:52 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:03.354 18:16:52 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:03.354 18:16:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:03.354 18:16:52 -- common/autotest_common.sh@10 -- # set +x 00:30:03.354 18:16:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:03.354 18:16:52 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:03.354 18:16:52 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:30:03.354 18:16:52 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:03.354 18:16:52 -- host/auth.sh@44 -- # digest=sha256 00:30:03.354 18:16:52 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:03.354 18:16:52 -- host/auth.sh@44 -- # keyid=1 00:30:03.354 18:16:52 -- host/auth.sh@45 -- # key=DHHC-1:00:NzhlYzc5MzZkN2YxNDkwYTU0MjgyYjZhMmVhODg4MTA2ODA1YTZlNzkwNTM0OTBl8OWd+Q==: 00:30:03.354 18:16:52 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:30:03.354 18:16:52 -- host/auth.sh@48 -- # echo ffdhe4096 00:30:03.354 18:16:52 -- host/auth.sh@49 -- # echo DHHC-1:00:NzhlYzc5MzZkN2YxNDkwYTU0MjgyYjZhMmVhODg4MTA2ODA1YTZlNzkwNTM0OTBl8OWd+Q==: 00:30:03.354 18:16:52 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 1 00:30:03.354 18:16:52 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:03.354 18:16:52 -- host/auth.sh@68 -- # digest=sha256 00:30:03.354 18:16:52 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:30:03.354 18:16:52 -- host/auth.sh@68 -- # keyid=1 00:30:03.354 18:16:52 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:30:03.354 18:16:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:03.354 18:16:52 -- common/autotest_common.sh@10 -- # set +x 00:30:03.354 18:16:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:03.354 18:16:52 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:03.354 18:16:52 -- nvmf/common.sh@717 -- # local ip 00:30:03.354 18:16:52 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:03.354 18:16:52 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:03.354 18:16:52 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:03.354 18:16:52 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:03.354 18:16:52 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:03.354 18:16:52 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:03.354 18:16:52 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:03.354 18:16:52 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:03.354 18:16:52 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:03.354 18:16:52 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:30:03.354 18:16:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:03.354 18:16:52 -- common/autotest_common.sh@10 -- # set +x 00:30:03.612 nvme0n1 00:30:03.612 
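
Each repetition above is one pass of the test's per-key cycle: nvmet_auth_set_key installs the DH-HMAC-CHAP parameters on the kernel nvmet target (the echoes at host/auth.sh@47-@49 evidently write the hash, DH group, and key), bdev_nvme_set_options pins the SPDK host to a single digest/DH-group pair, bdev_nvme_attach_controller connects to nqn.2024-02.io.spdk:cnode0 with the matching --dhchap-key, bdev_nvme_get_controllers piped through jq verifies the controller came up as nvme0, and bdev_nvme_detach_controller tears it down. A minimal standalone sketch of one such pass, assuming the usual rpc.py location, the nvmet configfs attribute names, and a host-side key already registered under the name "key2" (registration is not shown in this excerpt); the RPC invocations themselves are taken from the trace:

  #!/usr/bin/env bash
  # Sketch of one connect_authenticate pass; configfs writes require root.
  set -e
  rpc=./scripts/rpc.py                 # assumption: SPDK JSON-RPC client path
  digest=sha256 dhgroup=ffdhe4096
  key='DHHC-1:01:ZWE2OTU5OWRmMWEyNGZlMzg1NzE5YTM4MTlhNDVlNTmx4RsQ:'

  # Target side (kernel nvmet): set the per-host authentication parameters.
  host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo "hmac($digest)" > "$host_dir/dhchap_hash"      # assumed attribute name
  echo "$dhgroup"      > "$host_dir/dhchap_dhgroup"   # assumed attribute name
  echo "$key"          > "$host_dir/dhchap_key"

  # Host side: allow exactly one digest/dhgroup, then attach, verify, detach.
  "$rpc" bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
  "$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
  "$rpc" bdev_nvme_get_controllers | jq -r '.[].name' | grep -qx nvme0
  "$rpc" bdev_nvme_detach_controller nvme0

Because every pass ends in a detach, a controller still listed at the next bdev_nvme_get_controllers call would indicate a cleanup failure rather than a successful authentication.
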
18:16:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:03.612 18:16:52 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:03.612 18:16:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:03.612 18:16:52 -- common/autotest_common.sh@10 -- # set +x 00:30:03.612 18:16:52 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:03.612 18:16:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:03.612 18:16:52 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:03.612 18:16:52 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:03.612 18:16:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:03.612 18:16:52 -- common/autotest_common.sh@10 -- # set +x 00:30:03.612 18:16:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:03.612 18:16:52 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:03.612 18:16:52 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:30:03.612 18:16:52 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:03.612 18:16:52 -- host/auth.sh@44 -- # digest=sha256 00:30:03.612 18:16:52 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:03.612 18:16:52 -- host/auth.sh@44 -- # keyid=2 00:30:03.612 18:16:52 -- host/auth.sh@45 -- # key=DHHC-1:01:ZWE2OTU5OWRmMWEyNGZlMzg1NzE5YTM4MTlhNDVlNTmx4RsQ: 00:30:03.612 18:16:52 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:30:03.612 18:16:52 -- host/auth.sh@48 -- # echo ffdhe4096 00:30:03.612 18:16:52 -- host/auth.sh@49 -- # echo DHHC-1:01:ZWE2OTU5OWRmMWEyNGZlMzg1NzE5YTM4MTlhNDVlNTmx4RsQ: 00:30:03.612 18:16:52 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 2 00:30:03.612 18:16:52 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:03.612 18:16:52 -- host/auth.sh@68 -- # digest=sha256 00:30:03.612 18:16:52 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:30:03.612 18:16:52 -- host/auth.sh@68 -- # keyid=2 00:30:03.612 18:16:52 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:30:03.612 18:16:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:03.612 18:16:52 -- common/autotest_common.sh@10 -- # set +x 00:30:03.612 18:16:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:03.612 18:16:52 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:03.612 18:16:52 -- nvmf/common.sh@717 -- # local ip 00:30:03.612 18:16:52 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:03.612 18:16:52 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:03.612 18:16:52 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:03.612 18:16:52 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:03.612 18:16:52 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:03.612 18:16:52 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:03.612 18:16:52 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:03.612 18:16:52 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:03.612 18:16:52 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:03.612 18:16:52 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:30:03.612 18:16:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:03.612 18:16:52 -- common/autotest_common.sh@10 -- # set +x 00:30:03.870 nvme0n1 00:30:03.870 18:16:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:03.870 18:16:52 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:03.870 18:16:52 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:30:03.870 18:16:52 -- common/autotest_common.sh@10 -- # set +x 00:30:03.870 18:16:52 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:03.870 18:16:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:04.128 18:16:52 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:04.128 18:16:52 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:04.128 18:16:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:04.128 18:16:52 -- common/autotest_common.sh@10 -- # set +x 00:30:04.128 18:16:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:04.129 18:16:52 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:04.129 18:16:52 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:30:04.129 18:16:52 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:04.129 18:16:52 -- host/auth.sh@44 -- # digest=sha256 00:30:04.129 18:16:52 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:04.129 18:16:52 -- host/auth.sh@44 -- # keyid=3 00:30:04.129 18:16:52 -- host/auth.sh@45 -- # key=DHHC-1:02:Y2ExOGFhNWJkYjI4ZWEwMzRkMmZhN2MyNDIzZDU1OWQ5M2MwNTE4MTRiZDFkMWRlDS+lew==: 00:30:04.129 18:16:52 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:30:04.129 18:16:52 -- host/auth.sh@48 -- # echo ffdhe4096 00:30:04.129 18:16:52 -- host/auth.sh@49 -- # echo DHHC-1:02:Y2ExOGFhNWJkYjI4ZWEwMzRkMmZhN2MyNDIzZDU1OWQ5M2MwNTE4MTRiZDFkMWRlDS+lew==: 00:30:04.129 18:16:52 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 3 00:30:04.129 18:16:52 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:04.129 18:16:52 -- host/auth.sh@68 -- # digest=sha256 00:30:04.129 18:16:52 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:30:04.129 18:16:52 -- host/auth.sh@68 -- # keyid=3 00:30:04.129 18:16:52 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:30:04.129 18:16:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:04.129 18:16:52 -- common/autotest_common.sh@10 -- # set +x 00:30:04.129 18:16:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:04.129 18:16:52 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:04.129 18:16:52 -- nvmf/common.sh@717 -- # local ip 00:30:04.129 18:16:52 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:04.129 18:16:52 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:04.129 18:16:52 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:04.129 18:16:52 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:04.129 18:16:52 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:04.129 18:16:52 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:04.129 18:16:52 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:04.129 18:16:52 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:04.129 18:16:52 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:04.129 18:16:52 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:30:04.129 18:16:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:04.129 18:16:52 -- common/autotest_common.sh@10 -- # set +x 00:30:04.387 nvme0n1 00:30:04.388 18:16:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:04.388 18:16:53 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:04.388 18:16:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:04.388 18:16:53 -- common/autotest_common.sh@10 -- # set +x 
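
The DHHC-1 strings in the trace follow the NVMe-oF representation of a DH-HMAC-CHAP secret, DHHC-1:<t>:<base64>:, where <t> names the optional transform applied to the configured secret (00 = none, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512) and the base64 payload carries the secret bytes followed by a CRC-32 integrity trailer. A quick length check on one of the keys above, as a sketch (the 4-byte CRC-32 trailer is quoted from the spec from memory, so treat it as an assumption):

  key='DHHC-1:00:OWU0MjAzMWRmYWVjZjYyOTI2Njg4ZTk5Mjg2NDlkMWM3inbY:'
  b64=${key#DHHC-1:*:}; b64=${b64%:}                  # strip wrapper fields
  total=$(printf '%s' "$b64" | base64 -d | wc -c)     # decoded byte count
  echo "secret: $((total - 4)) bytes (+ 4-byte CRC-32 trailer)"
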
00:30:04.388 18:16:53 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:04.388 18:16:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:04.388 18:16:53 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:04.388 18:16:53 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:04.388 18:16:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:04.388 18:16:53 -- common/autotest_common.sh@10 -- # set +x 00:30:04.388 18:16:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:04.388 18:16:53 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:04.388 18:16:53 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:30:04.388 18:16:53 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:04.388 18:16:53 -- host/auth.sh@44 -- # digest=sha256 00:30:04.388 18:16:53 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:04.388 18:16:53 -- host/auth.sh@44 -- # keyid=4 00:30:04.388 18:16:53 -- host/auth.sh@45 -- # key=DHHC-1:03:MTA5NjU1MzE3MGM3MDZhMTRlNmIzODhmNjNhZTQyNTg4NjM5MWRiMDRjYjgyNTcwZGJhZWY4NTc3OGQyYTVjZERQXrk=: 00:30:04.388 18:16:53 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:30:04.388 18:16:53 -- host/auth.sh@48 -- # echo ffdhe4096 00:30:04.388 18:16:53 -- host/auth.sh@49 -- # echo DHHC-1:03:MTA5NjU1MzE3MGM3MDZhMTRlNmIzODhmNjNhZTQyNTg4NjM5MWRiMDRjYjgyNTcwZGJhZWY4NTc3OGQyYTVjZERQXrk=: 00:30:04.388 18:16:53 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 4 00:30:04.388 18:16:53 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:04.388 18:16:53 -- host/auth.sh@68 -- # digest=sha256 00:30:04.388 18:16:53 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:30:04.388 18:16:53 -- host/auth.sh@68 -- # keyid=4 00:30:04.388 18:16:53 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:30:04.388 18:16:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:04.388 18:16:53 -- common/autotest_common.sh@10 -- # set +x 00:30:04.388 18:16:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:04.388 18:16:53 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:04.388 18:16:53 -- nvmf/common.sh@717 -- # local ip 00:30:04.388 18:16:53 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:04.388 18:16:53 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:04.388 18:16:53 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:04.388 18:16:53 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:04.388 18:16:53 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:04.388 18:16:53 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:04.388 18:16:53 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:04.388 18:16:53 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:04.388 18:16:53 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:04.388 18:16:53 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:04.388 18:16:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:04.388 18:16:53 -- common/autotest_common.sh@10 -- # set +x 00:30:04.647 nvme0n1 00:30:04.647 18:16:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:04.647 18:16:53 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:04.647 18:16:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:04.647 18:16:53 -- common/autotest_common.sh@10 -- # set +x 00:30:04.647 18:16:53 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:04.647 
18:16:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:04.906 18:16:53 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:04.906 18:16:53 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:04.906 18:16:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:04.906 18:16:53 -- common/autotest_common.sh@10 -- # set +x 00:30:04.906 18:16:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:04.906 18:16:53 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:30:04.906 18:16:53 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:04.906 18:16:53 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:30:04.906 18:16:53 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:04.906 18:16:53 -- host/auth.sh@44 -- # digest=sha256 00:30:04.906 18:16:53 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:04.906 18:16:53 -- host/auth.sh@44 -- # keyid=0 00:30:04.906 18:16:53 -- host/auth.sh@45 -- # key=DHHC-1:00:OWU0MjAzMWRmYWVjZjYyOTI2Njg4ZTk5Mjg2NDlkMWM3inbY: 00:30:04.906 18:16:53 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:30:04.906 18:16:53 -- host/auth.sh@48 -- # echo ffdhe6144 00:30:04.906 18:16:53 -- host/auth.sh@49 -- # echo DHHC-1:00:OWU0MjAzMWRmYWVjZjYyOTI2Njg4ZTk5Mjg2NDlkMWM3inbY: 00:30:04.906 18:16:53 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 0 00:30:04.906 18:16:53 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:04.906 18:16:53 -- host/auth.sh@68 -- # digest=sha256 00:30:04.906 18:16:53 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:30:04.906 18:16:53 -- host/auth.sh@68 -- # keyid=0 00:30:04.906 18:16:53 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:30:04.906 18:16:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:04.906 18:16:53 -- common/autotest_common.sh@10 -- # set +x 00:30:04.906 18:16:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:04.906 18:16:53 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:04.906 18:16:53 -- nvmf/common.sh@717 -- # local ip 00:30:04.906 18:16:53 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:04.906 18:16:53 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:04.906 18:16:53 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:04.906 18:16:53 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:04.906 18:16:53 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:04.906 18:16:53 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:04.906 18:16:53 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:04.906 18:16:53 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:04.906 18:16:53 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:04.906 18:16:53 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:30:04.906 18:16:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:04.906 18:16:53 -- common/autotest_common.sh@10 -- # set +x 00:30:05.501 nvme0n1 00:30:05.501 18:16:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:05.501 18:16:54 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:05.501 18:16:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:05.501 18:16:54 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:05.501 18:16:54 -- common/autotest_common.sh@10 -- # set +x 00:30:05.501 18:16:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:05.501 18:16:54 -- 
host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:05.501 18:16:54 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:05.501 18:16:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:05.501 18:16:54 -- common/autotest_common.sh@10 -- # set +x 00:30:05.501 18:16:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:05.501 18:16:54 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:05.501 18:16:54 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:30:05.501 18:16:54 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:05.501 18:16:54 -- host/auth.sh@44 -- # digest=sha256 00:30:05.501 18:16:54 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:05.501 18:16:54 -- host/auth.sh@44 -- # keyid=1 00:30:05.501 18:16:54 -- host/auth.sh@45 -- # key=DHHC-1:00:NzhlYzc5MzZkN2YxNDkwYTU0MjgyYjZhMmVhODg4MTA2ODA1YTZlNzkwNTM0OTBl8OWd+Q==: 00:30:05.501 18:16:54 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:30:05.501 18:16:54 -- host/auth.sh@48 -- # echo ffdhe6144 00:30:05.501 18:16:54 -- host/auth.sh@49 -- # echo DHHC-1:00:NzhlYzc5MzZkN2YxNDkwYTU0MjgyYjZhMmVhODg4MTA2ODA1YTZlNzkwNTM0OTBl8OWd+Q==: 00:30:05.501 18:16:54 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 1 00:30:05.501 18:16:54 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:05.501 18:16:54 -- host/auth.sh@68 -- # digest=sha256 00:30:05.501 18:16:54 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:30:05.501 18:16:54 -- host/auth.sh@68 -- # keyid=1 00:30:05.501 18:16:54 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:30:05.501 18:16:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:05.501 18:16:54 -- common/autotest_common.sh@10 -- # set +x 00:30:05.501 18:16:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:05.501 18:16:54 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:05.501 18:16:54 -- nvmf/common.sh@717 -- # local ip 00:30:05.501 18:16:54 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:05.501 18:16:54 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:05.501 18:16:54 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:05.501 18:16:54 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:05.501 18:16:54 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:05.501 18:16:54 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:05.501 18:16:54 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:05.501 18:16:54 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:05.501 18:16:54 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:05.501 18:16:54 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:30:05.501 18:16:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:05.501 18:16:54 -- common/autotest_common.sh@10 -- # set +x 00:30:06.067 nvme0n1 00:30:06.067 18:16:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:06.067 18:16:54 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:06.067 18:16:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:06.067 18:16:54 -- common/autotest_common.sh@10 -- # set +x 00:30:06.067 18:16:54 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:06.067 18:16:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:06.067 18:16:54 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:06.067 18:16:54 -- host/auth.sh@74 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:30:06.067 18:16:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:06.067 18:16:54 -- common/autotest_common.sh@10 -- # set +x 00:30:06.067 18:16:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:06.067 18:16:54 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:06.067 18:16:54 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:30:06.067 18:16:54 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:06.067 18:16:54 -- host/auth.sh@44 -- # digest=sha256 00:30:06.067 18:16:54 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:06.067 18:16:54 -- host/auth.sh@44 -- # keyid=2 00:30:06.067 18:16:54 -- host/auth.sh@45 -- # key=DHHC-1:01:ZWE2OTU5OWRmMWEyNGZlMzg1NzE5YTM4MTlhNDVlNTmx4RsQ: 00:30:06.067 18:16:54 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:30:06.067 18:16:54 -- host/auth.sh@48 -- # echo ffdhe6144 00:30:06.067 18:16:54 -- host/auth.sh@49 -- # echo DHHC-1:01:ZWE2OTU5OWRmMWEyNGZlMzg1NzE5YTM4MTlhNDVlNTmx4RsQ: 00:30:06.067 18:16:54 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 2 00:30:06.067 18:16:54 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:06.067 18:16:54 -- host/auth.sh@68 -- # digest=sha256 00:30:06.067 18:16:54 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:30:06.067 18:16:54 -- host/auth.sh@68 -- # keyid=2 00:30:06.067 18:16:54 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:30:06.067 18:16:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:06.067 18:16:54 -- common/autotest_common.sh@10 -- # set +x 00:30:06.067 18:16:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:06.067 18:16:54 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:06.067 18:16:54 -- nvmf/common.sh@717 -- # local ip 00:30:06.067 18:16:54 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:06.067 18:16:54 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:06.067 18:16:54 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:06.067 18:16:54 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:06.067 18:16:54 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:06.067 18:16:54 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:06.067 18:16:54 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:06.067 18:16:54 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:06.067 18:16:54 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:06.067 18:16:54 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:30:06.067 18:16:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:06.067 18:16:54 -- common/autotest_common.sh@10 -- # set +x 00:30:07.000 nvme0n1 00:30:07.000 18:16:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:07.000 18:16:55 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:07.000 18:16:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:07.000 18:16:55 -- common/autotest_common.sh@10 -- # set +x 00:30:07.000 18:16:55 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:07.000 18:16:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:07.000 18:16:55 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:07.000 18:16:55 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:07.000 18:16:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:07.000 18:16:55 -- common/autotest_common.sh@10 -- # 
set +x 00:30:07.000 18:16:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:07.000 18:16:55 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:07.000 18:16:55 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:30:07.000 18:16:55 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:07.000 18:16:55 -- host/auth.sh@44 -- # digest=sha256 00:30:07.000 18:16:55 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:07.000 18:16:55 -- host/auth.sh@44 -- # keyid=3 00:30:07.000 18:16:55 -- host/auth.sh@45 -- # key=DHHC-1:02:Y2ExOGFhNWJkYjI4ZWEwMzRkMmZhN2MyNDIzZDU1OWQ5M2MwNTE4MTRiZDFkMWRlDS+lew==: 00:30:07.000 18:16:55 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:30:07.000 18:16:55 -- host/auth.sh@48 -- # echo ffdhe6144 00:30:07.000 18:16:55 -- host/auth.sh@49 -- # echo DHHC-1:02:Y2ExOGFhNWJkYjI4ZWEwMzRkMmZhN2MyNDIzZDU1OWQ5M2MwNTE4MTRiZDFkMWRlDS+lew==: 00:30:07.000 18:16:55 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 3 00:30:07.000 18:16:55 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:07.000 18:16:55 -- host/auth.sh@68 -- # digest=sha256 00:30:07.000 18:16:55 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:30:07.000 18:16:55 -- host/auth.sh@68 -- # keyid=3 00:30:07.000 18:16:55 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:30:07.000 18:16:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:07.000 18:16:55 -- common/autotest_common.sh@10 -- # set +x 00:30:07.000 18:16:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:07.000 18:16:55 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:07.000 18:16:55 -- nvmf/common.sh@717 -- # local ip 00:30:07.000 18:16:55 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:07.000 18:16:55 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:07.000 18:16:55 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:07.000 18:16:55 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:07.000 18:16:55 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:07.000 18:16:55 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:07.000 18:16:55 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:07.000 18:16:55 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:07.000 18:16:55 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:07.000 18:16:55 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:30:07.000 18:16:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:07.000 18:16:55 -- common/autotest_common.sh@10 -- # set +x 00:30:07.566 nvme0n1 00:30:07.566 18:16:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:07.566 18:16:56 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:07.566 18:16:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:07.566 18:16:56 -- common/autotest_common.sh@10 -- # set +x 00:30:07.566 18:16:56 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:07.566 18:16:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:07.566 18:16:56 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:07.566 18:16:56 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:07.566 18:16:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:07.566 18:16:56 -- common/autotest_common.sh@10 -- # set +x 00:30:07.566 18:16:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:07.566 18:16:56 -- 
host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:07.566 18:16:56 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:30:07.566 18:16:56 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:07.566 18:16:56 -- host/auth.sh@44 -- # digest=sha256 00:30:07.566 18:16:56 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:07.566 18:16:56 -- host/auth.sh@44 -- # keyid=4 00:30:07.566 18:16:56 -- host/auth.sh@45 -- # key=DHHC-1:03:MTA5NjU1MzE3MGM3MDZhMTRlNmIzODhmNjNhZTQyNTg4NjM5MWRiMDRjYjgyNTcwZGJhZWY4NTc3OGQyYTVjZERQXrk=: 00:30:07.566 18:16:56 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:30:07.566 18:16:56 -- host/auth.sh@48 -- # echo ffdhe6144 00:30:07.566 18:16:56 -- host/auth.sh@49 -- # echo DHHC-1:03:MTA5NjU1MzE3MGM3MDZhMTRlNmIzODhmNjNhZTQyNTg4NjM5MWRiMDRjYjgyNTcwZGJhZWY4NTc3OGQyYTVjZERQXrk=: 00:30:07.566 18:16:56 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 4 00:30:07.566 18:16:56 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:07.566 18:16:56 -- host/auth.sh@68 -- # digest=sha256 00:30:07.566 18:16:56 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:30:07.566 18:16:56 -- host/auth.sh@68 -- # keyid=4 00:30:07.566 18:16:56 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:30:07.566 18:16:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:07.566 18:16:56 -- common/autotest_common.sh@10 -- # set +x 00:30:07.566 18:16:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:07.566 18:16:56 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:07.566 18:16:56 -- nvmf/common.sh@717 -- # local ip 00:30:07.566 18:16:56 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:07.566 18:16:56 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:07.566 18:16:56 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:07.566 18:16:56 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:07.566 18:16:56 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:07.566 18:16:56 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:07.566 18:16:56 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:07.566 18:16:56 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:07.566 18:16:56 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:07.566 18:16:56 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:07.566 18:16:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:07.567 18:16:56 -- common/autotest_common.sh@10 -- # set +x 00:30:08.132 nvme0n1 00:30:08.132 18:16:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:08.132 18:16:56 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:08.132 18:16:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:08.132 18:16:56 -- common/autotest_common.sh@10 -- # set +x 00:30:08.132 18:16:56 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:08.132 18:16:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:08.132 18:16:56 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:08.132 18:16:56 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:08.132 18:16:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:08.132 18:16:56 -- common/autotest_common.sh@10 -- # set +x 00:30:08.132 18:16:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:08.132 18:16:56 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:30:08.132 18:16:56 -- 
host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:08.132 18:16:56 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:30:08.132 18:16:56 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:08.132 18:16:56 -- host/auth.sh@44 -- # digest=sha256 00:30:08.132 18:16:56 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:08.132 18:16:56 -- host/auth.sh@44 -- # keyid=0 00:30:08.132 18:16:56 -- host/auth.sh@45 -- # key=DHHC-1:00:OWU0MjAzMWRmYWVjZjYyOTI2Njg4ZTk5Mjg2NDlkMWM3inbY: 00:30:08.132 18:16:56 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:30:08.132 18:16:56 -- host/auth.sh@48 -- # echo ffdhe8192 00:30:08.132 18:16:56 -- host/auth.sh@49 -- # echo DHHC-1:00:OWU0MjAzMWRmYWVjZjYyOTI2Njg4ZTk5Mjg2NDlkMWM3inbY: 00:30:08.132 18:16:56 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 0 00:30:08.132 18:16:56 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:08.132 18:16:56 -- host/auth.sh@68 -- # digest=sha256 00:30:08.132 18:16:56 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:30:08.132 18:16:56 -- host/auth.sh@68 -- # keyid=0 00:30:08.132 18:16:56 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:30:08.132 18:16:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:08.132 18:16:56 -- common/autotest_common.sh@10 -- # set +x 00:30:08.132 18:16:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:08.132 18:16:56 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:08.132 18:16:56 -- nvmf/common.sh@717 -- # local ip 00:30:08.132 18:16:56 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:08.132 18:16:56 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:08.132 18:16:56 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:08.132 18:16:56 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:08.132 18:16:56 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:08.132 18:16:56 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:08.132 18:16:56 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:08.132 18:16:56 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:08.132 18:16:56 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:08.132 18:16:56 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:30:08.132 18:16:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:08.132 18:16:56 -- common/autotest_common.sh@10 -- # set +x 00:30:09.506 nvme0n1 00:30:09.507 18:16:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:09.507 18:16:58 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:09.507 18:16:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:09.507 18:16:58 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:09.507 18:16:58 -- common/autotest_common.sh@10 -- # set +x 00:30:09.507 18:16:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:09.507 18:16:58 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:09.507 18:16:58 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:09.507 18:16:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:09.507 18:16:58 -- common/autotest_common.sh@10 -- # set +x 00:30:09.507 18:16:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:09.507 18:16:58 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:09.507 18:16:58 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:30:09.507 18:16:58 -- 
host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:09.507 18:16:58 -- host/auth.sh@44 -- # digest=sha256 00:30:09.507 18:16:58 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:09.507 18:16:58 -- host/auth.sh@44 -- # keyid=1 00:30:09.507 18:16:58 -- host/auth.sh@45 -- # key=DHHC-1:00:NzhlYzc5MzZkN2YxNDkwYTU0MjgyYjZhMmVhODg4MTA2ODA1YTZlNzkwNTM0OTBl8OWd+Q==: 00:30:09.507 18:16:58 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:30:09.507 18:16:58 -- host/auth.sh@48 -- # echo ffdhe8192 00:30:09.507 18:16:58 -- host/auth.sh@49 -- # echo DHHC-1:00:NzhlYzc5MzZkN2YxNDkwYTU0MjgyYjZhMmVhODg4MTA2ODA1YTZlNzkwNTM0OTBl8OWd+Q==: 00:30:09.507 18:16:58 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 1 00:30:09.507 18:16:58 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:09.507 18:16:58 -- host/auth.sh@68 -- # digest=sha256 00:30:09.507 18:16:58 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:30:09.507 18:16:58 -- host/auth.sh@68 -- # keyid=1 00:30:09.507 18:16:58 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:30:09.507 18:16:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:09.507 18:16:58 -- common/autotest_common.sh@10 -- # set +x 00:30:09.507 18:16:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:09.507 18:16:58 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:09.507 18:16:58 -- nvmf/common.sh@717 -- # local ip 00:30:09.507 18:16:58 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:09.507 18:16:58 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:09.507 18:16:58 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:09.507 18:16:58 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:09.507 18:16:58 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:09.507 18:16:58 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:09.507 18:16:58 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:09.507 18:16:58 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:09.507 18:16:58 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:09.507 18:16:58 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:30:09.507 18:16:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:09.507 18:16:58 -- common/autotest_common.sh@10 -- # set +x 00:30:10.438 nvme0n1 00:30:10.438 18:16:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:10.438 18:16:59 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:10.438 18:16:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:10.438 18:16:59 -- common/autotest_common.sh@10 -- # set +x 00:30:10.438 18:16:59 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:10.438 18:16:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:10.438 18:16:59 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:10.438 18:16:59 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:10.438 18:16:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:10.438 18:16:59 -- common/autotest_common.sh@10 -- # set +x 00:30:10.438 18:16:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:10.438 18:16:59 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:10.438 18:16:59 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:30:10.438 18:16:59 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:10.438 18:16:59 -- host/auth.sh@44 -- # digest=sha256 
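
The surrounding pattern repeats because host/auth.sh drives three nested loops, visible in the trace as @107 (for digest in "${digests[@]}"), @108 (for dhgroup in "${dhgroups[@]}"), and @109 (for keyid in "${!keys[@]}"): every digest/DH-group/key combination gets one set-key/connect/verify/detach pass, and pinning bdev_nvme_set_options to a single combination per pass means a successful attach proves exactly that combination authenticates. The driver loop, sketched with the key material that appears in this trace (only sha256 and sha384 are visible in this excerpt, so the digest list is an assumption; nvmet_auth_set_key and connect_authenticate stand for the helpers sketched earlier):

  digests=(sha256 sha384)   # assumption: this excerpt shows only these two
  dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
  keys=('DHHC-1:00:OWU0MjAzMWRmYWVjZjYyOTI2Njg4ZTk5Mjg2NDlkMWM3inbY:'
        'DHHC-1:00:NzhlYzc5MzZkN2YxNDkwYTU0MjgyYjZhMmVhODg4MTA2ODA1YTZlNzkwNTM0OTBl8OWd+Q==:'
        'DHHC-1:01:ZWE2OTU5OWRmMWEyNGZlMzg1NzE5YTM4MTlhNDVlNTmx4RsQ:'
        'DHHC-1:02:Y2ExOGFhNWJkYjI4ZWEwMzRkMmZhN2MyNDIzZDU1OWQ5M2MwNTE4MTRiZDFkMWRlDS+lew==:'
        'DHHC-1:03:MTA5NjU1MzE3MGM3MDZhMTRlNmIzODhmNjNhZTQyNTg4NjM5MWRiMDRjYjgyNTcwZGJhZWY4NTc3OGQyYTVjZERQXrk=:')
  for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
      for keyid in "${!keys[@]}"; do     # indices 0..4, matching the trace
        nvmet_auth_set_key  "$digest" "$dhgroup" "$keyid"
        connect_authenticate "$digest" "$dhgroup" "$keyid"
      done
    done
  done
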
00:30:10.438 18:16:59 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:10.438 18:16:59 -- host/auth.sh@44 -- # keyid=2 00:30:10.438 18:16:59 -- host/auth.sh@45 -- # key=DHHC-1:01:ZWE2OTU5OWRmMWEyNGZlMzg1NzE5YTM4MTlhNDVlNTmx4RsQ: 00:30:10.438 18:16:59 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:30:10.439 18:16:59 -- host/auth.sh@48 -- # echo ffdhe8192 00:30:10.439 18:16:59 -- host/auth.sh@49 -- # echo DHHC-1:01:ZWE2OTU5OWRmMWEyNGZlMzg1NzE5YTM4MTlhNDVlNTmx4RsQ: 00:30:10.439 18:16:59 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 2 00:30:10.439 18:16:59 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:10.439 18:16:59 -- host/auth.sh@68 -- # digest=sha256 00:30:10.439 18:16:59 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:30:10.439 18:16:59 -- host/auth.sh@68 -- # keyid=2 00:30:10.439 18:16:59 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:30:10.439 18:16:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:10.439 18:16:59 -- common/autotest_common.sh@10 -- # set +x 00:30:10.439 18:16:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:10.439 18:16:59 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:10.439 18:16:59 -- nvmf/common.sh@717 -- # local ip 00:30:10.439 18:16:59 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:10.439 18:16:59 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:10.439 18:16:59 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:10.439 18:16:59 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:10.439 18:16:59 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:10.439 18:16:59 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:10.439 18:16:59 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:10.439 18:16:59 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:10.439 18:16:59 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:10.439 18:16:59 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:30:10.439 18:16:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:10.439 18:16:59 -- common/autotest_common.sh@10 -- # set +x 00:30:11.812 nvme0n1 00:30:11.812 18:17:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:11.812 18:17:00 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:11.812 18:17:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:11.812 18:17:00 -- common/autotest_common.sh@10 -- # set +x 00:30:11.812 18:17:00 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:11.812 18:17:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:11.812 18:17:00 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:11.812 18:17:00 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:11.812 18:17:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:11.812 18:17:00 -- common/autotest_common.sh@10 -- # set +x 00:30:11.812 18:17:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:11.812 18:17:00 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:11.812 18:17:00 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:30:11.812 18:17:00 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:11.812 18:17:00 -- host/auth.sh@44 -- # digest=sha256 00:30:11.812 18:17:00 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:11.812 18:17:00 -- host/auth.sh@44 -- # keyid=3 00:30:11.812 18:17:00 -- host/auth.sh@45 -- # 
key=DHHC-1:02:Y2ExOGFhNWJkYjI4ZWEwMzRkMmZhN2MyNDIzZDU1OWQ5M2MwNTE4MTRiZDFkMWRlDS+lew==: 00:30:11.812 18:17:00 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:30:11.812 18:17:00 -- host/auth.sh@48 -- # echo ffdhe8192 00:30:11.812 18:17:00 -- host/auth.sh@49 -- # echo DHHC-1:02:Y2ExOGFhNWJkYjI4ZWEwMzRkMmZhN2MyNDIzZDU1OWQ5M2MwNTE4MTRiZDFkMWRlDS+lew==: 00:30:11.812 18:17:00 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 3 00:30:11.812 18:17:00 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:11.812 18:17:00 -- host/auth.sh@68 -- # digest=sha256 00:30:11.812 18:17:00 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:30:11.812 18:17:00 -- host/auth.sh@68 -- # keyid=3 00:30:11.812 18:17:00 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:30:11.812 18:17:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:11.812 18:17:00 -- common/autotest_common.sh@10 -- # set +x 00:30:11.812 18:17:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:11.812 18:17:00 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:11.812 18:17:00 -- nvmf/common.sh@717 -- # local ip 00:30:11.812 18:17:00 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:11.812 18:17:00 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:11.812 18:17:00 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:11.812 18:17:00 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:11.812 18:17:00 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:11.812 18:17:00 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:11.812 18:17:00 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:11.812 18:17:00 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:11.812 18:17:00 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:11.813 18:17:00 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:30:11.813 18:17:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:11.813 18:17:00 -- common/autotest_common.sh@10 -- # set +x 00:30:12.745 nvme0n1 00:30:12.745 18:17:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:12.745 18:17:01 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:12.745 18:17:01 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:12.745 18:17:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:12.745 18:17:01 -- common/autotest_common.sh@10 -- # set +x 00:30:12.745 18:17:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:12.745 18:17:01 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:12.745 18:17:01 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:12.745 18:17:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:12.745 18:17:01 -- common/autotest_common.sh@10 -- # set +x 00:30:12.745 18:17:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:12.745 18:17:01 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:12.745 18:17:01 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:30:12.745 18:17:01 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:12.745 18:17:01 -- host/auth.sh@44 -- # digest=sha256 00:30:12.745 18:17:01 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:12.745 18:17:01 -- host/auth.sh@44 -- # keyid=4 00:30:12.745 18:17:01 -- host/auth.sh@45 -- # key=DHHC-1:03:MTA5NjU1MzE3MGM3MDZhMTRlNmIzODhmNjNhZTQyNTg4NjM5MWRiMDRjYjgyNTcwZGJhZWY4NTc3OGQyYTVjZERQXrk=: 00:30:12.745 
18:17:01 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:30:12.745 18:17:01 -- host/auth.sh@48 -- # echo ffdhe8192 00:30:12.745 18:17:01 -- host/auth.sh@49 -- # echo DHHC-1:03:MTA5NjU1MzE3MGM3MDZhMTRlNmIzODhmNjNhZTQyNTg4NjM5MWRiMDRjYjgyNTcwZGJhZWY4NTc3OGQyYTVjZERQXrk=: 00:30:12.745 18:17:01 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 4 00:30:12.745 18:17:01 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:12.745 18:17:01 -- host/auth.sh@68 -- # digest=sha256 00:30:12.745 18:17:01 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:30:12.745 18:17:01 -- host/auth.sh@68 -- # keyid=4 00:30:12.745 18:17:01 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:30:12.745 18:17:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:12.745 18:17:01 -- common/autotest_common.sh@10 -- # set +x 00:30:12.745 18:17:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:12.745 18:17:01 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:12.745 18:17:01 -- nvmf/common.sh@717 -- # local ip 00:30:12.745 18:17:01 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:12.745 18:17:01 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:12.745 18:17:01 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:12.745 18:17:01 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:12.745 18:17:01 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:12.745 18:17:01 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:12.745 18:17:01 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:12.745 18:17:01 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:12.745 18:17:01 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:12.745 18:17:01 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:12.745 18:17:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:12.745 18:17:01 -- common/autotest_common.sh@10 -- # set +x 00:30:14.119 nvme0n1 00:30:14.119 18:17:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:14.119 18:17:02 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:14.119 18:17:02 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:14.119 18:17:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:14.119 18:17:02 -- common/autotest_common.sh@10 -- # set +x 00:30:14.119 18:17:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:14.119 18:17:02 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:14.119 18:17:02 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:14.119 18:17:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:14.119 18:17:02 -- common/autotest_common.sh@10 -- # set +x 00:30:14.119 18:17:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:14.119 18:17:02 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:30:14.119 18:17:02 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:30:14.119 18:17:02 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:14.119 18:17:02 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:30:14.119 18:17:02 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:14.119 18:17:02 -- host/auth.sh@44 -- # digest=sha384 00:30:14.119 18:17:02 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:14.119 18:17:02 -- host/auth.sh@44 -- # keyid=0 00:30:14.119 18:17:02 -- host/auth.sh@45 -- # 
key=DHHC-1:00:OWU0MjAzMWRmYWVjZjYyOTI2Njg4ZTk5Mjg2NDlkMWM3inbY: 00:30:14.119 18:17:02 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:30:14.119 18:17:02 -- host/auth.sh@48 -- # echo ffdhe2048 00:30:14.119 18:17:02 -- host/auth.sh@49 -- # echo DHHC-1:00:OWU0MjAzMWRmYWVjZjYyOTI2Njg4ZTk5Mjg2NDlkMWM3inbY: 00:30:14.119 18:17:02 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 0 00:30:14.119 18:17:02 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:14.119 18:17:02 -- host/auth.sh@68 -- # digest=sha384 00:30:14.119 18:17:02 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:30:14.119 18:17:02 -- host/auth.sh@68 -- # keyid=0 00:30:14.119 18:17:02 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:30:14.119 18:17:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:14.119 18:17:02 -- common/autotest_common.sh@10 -- # set +x 00:30:14.119 18:17:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:14.119 18:17:02 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:14.119 18:17:02 -- nvmf/common.sh@717 -- # local ip 00:30:14.119 18:17:02 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:14.119 18:17:02 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:14.119 18:17:02 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:14.119 18:17:02 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:14.119 18:17:02 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:14.119 18:17:02 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:14.119 18:17:02 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:14.119 18:17:02 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:14.119 18:17:02 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:14.119 18:17:02 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:30:14.119 18:17:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:14.119 18:17:02 -- common/autotest_common.sh@10 -- # set +x 00:30:14.119 nvme0n1 00:30:14.119 18:17:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:14.119 18:17:02 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:14.119 18:17:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:14.119 18:17:02 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:14.119 18:17:02 -- common/autotest_common.sh@10 -- # set +x 00:30:14.119 18:17:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:14.119 18:17:02 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:14.119 18:17:02 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:14.119 18:17:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:14.119 18:17:02 -- common/autotest_common.sh@10 -- # set +x 00:30:14.119 18:17:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:14.119 18:17:02 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:14.119 18:17:02 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:30:14.119 18:17:02 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:14.119 18:17:02 -- host/auth.sh@44 -- # digest=sha384 00:30:14.119 18:17:02 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:14.119 18:17:02 -- host/auth.sh@44 -- # keyid=1 00:30:14.119 18:17:02 -- host/auth.sh@45 -- # key=DHHC-1:00:NzhlYzc5MzZkN2YxNDkwYTU0MjgyYjZhMmVhODg4MTA2ODA1YTZlNzkwNTM0OTBl8OWd+Q==: 00:30:14.119 18:17:02 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:30:14.119 
18:17:02 -- host/auth.sh@48 -- # echo ffdhe2048 00:30:14.119 18:17:02 -- host/auth.sh@49 -- # echo DHHC-1:00:NzhlYzc5MzZkN2YxNDkwYTU0MjgyYjZhMmVhODg4MTA2ODA1YTZlNzkwNTM0OTBl8OWd+Q==: 00:30:14.119 18:17:02 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 1 00:30:14.119 18:17:02 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:14.119 18:17:02 -- host/auth.sh@68 -- # digest=sha384 00:30:14.119 18:17:02 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:30:14.119 18:17:02 -- host/auth.sh@68 -- # keyid=1 00:30:14.119 18:17:02 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:30:14.119 18:17:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:14.119 18:17:02 -- common/autotest_common.sh@10 -- # set +x 00:30:14.119 18:17:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:14.119 18:17:02 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:14.119 18:17:02 -- nvmf/common.sh@717 -- # local ip 00:30:14.119 18:17:02 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:14.119 18:17:02 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:14.119 18:17:02 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:14.119 18:17:02 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:14.119 18:17:02 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:14.119 18:17:02 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:14.119 18:17:02 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:14.119 18:17:02 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:14.119 18:17:02 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:14.120 18:17:02 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:30:14.120 18:17:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:14.120 18:17:02 -- common/autotest_common.sh@10 -- # set +x 00:30:14.378 nvme0n1 00:30:14.378 18:17:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:14.378 18:17:03 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:14.378 18:17:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:14.378 18:17:03 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:14.378 18:17:03 -- common/autotest_common.sh@10 -- # set +x 00:30:14.378 18:17:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:14.378 18:17:03 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:14.378 18:17:03 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:14.378 18:17:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:14.378 18:17:03 -- common/autotest_common.sh@10 -- # set +x 00:30:14.378 18:17:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:14.378 18:17:03 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:14.378 18:17:03 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:30:14.378 18:17:03 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:14.378 18:17:03 -- host/auth.sh@44 -- # digest=sha384 00:30:14.378 18:17:03 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:14.378 18:17:03 -- host/auth.sh@44 -- # keyid=2 00:30:14.378 18:17:03 -- host/auth.sh@45 -- # key=DHHC-1:01:ZWE2OTU5OWRmMWEyNGZlMzg1NzE5YTM4MTlhNDVlNTmx4RsQ: 00:30:14.378 18:17:03 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:30:14.378 18:17:03 -- host/auth.sh@48 -- # echo ffdhe2048 00:30:14.378 18:17:03 -- host/auth.sh@49 -- # echo 
DHHC-1:01:ZWE2OTU5OWRmMWEyNGZlMzg1NzE5YTM4MTlhNDVlNTmx4RsQ: 00:30:14.378 18:17:03 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 2 00:30:14.378 18:17:03 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:14.378 18:17:03 -- host/auth.sh@68 -- # digest=sha384 00:30:14.378 18:17:03 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:30:14.378 18:17:03 -- host/auth.sh@68 -- # keyid=2 00:30:14.379 18:17:03 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:30:14.379 18:17:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:14.379 18:17:03 -- common/autotest_common.sh@10 -- # set +x 00:30:14.379 18:17:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:14.379 18:17:03 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:14.379 18:17:03 -- nvmf/common.sh@717 -- # local ip 00:30:14.379 18:17:03 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:14.379 18:17:03 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:14.379 18:17:03 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:14.379 18:17:03 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:14.379 18:17:03 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:14.379 18:17:03 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:14.379 18:17:03 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:14.379 18:17:03 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:14.379 18:17:03 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:14.379 18:17:03 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:30:14.379 18:17:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:14.379 18:17:03 -- common/autotest_common.sh@10 -- # set +x 00:30:14.379 nvme0n1 00:30:14.379 18:17:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:14.379 18:17:03 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:14.379 18:17:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:14.379 18:17:03 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:14.379 18:17:03 -- common/autotest_common.sh@10 -- # set +x 00:30:14.379 18:17:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:14.379 18:17:03 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:14.379 18:17:03 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:14.379 18:17:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:14.379 18:17:03 -- common/autotest_common.sh@10 -- # set +x 00:30:14.638 18:17:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:14.638 18:17:03 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:14.638 18:17:03 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:30:14.638 18:17:03 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:14.638 18:17:03 -- host/auth.sh@44 -- # digest=sha384 00:30:14.638 18:17:03 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:14.638 18:17:03 -- host/auth.sh@44 -- # keyid=3 00:30:14.638 18:17:03 -- host/auth.sh@45 -- # key=DHHC-1:02:Y2ExOGFhNWJkYjI4ZWEwMzRkMmZhN2MyNDIzZDU1OWQ5M2MwNTE4MTRiZDFkMWRlDS+lew==: 00:30:14.638 18:17:03 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:30:14.638 18:17:03 -- host/auth.sh@48 -- # echo ffdhe2048 00:30:14.638 18:17:03 -- host/auth.sh@49 -- # echo DHHC-1:02:Y2ExOGFhNWJkYjI4ZWEwMzRkMmZhN2MyNDIzZDU1OWQ5M2MwNTE4MTRiZDFkMWRlDS+lew==: 00:30:14.638 18:17:03 -- host/auth.sh@111 -- # 
connect_authenticate sha384 ffdhe2048 3 00:30:14.638 18:17:03 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:14.638 18:17:03 -- host/auth.sh@68 -- # digest=sha384 00:30:14.638 18:17:03 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:30:14.638 18:17:03 -- host/auth.sh@68 -- # keyid=3 00:30:14.638 18:17:03 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:30:14.638 18:17:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:14.638 18:17:03 -- common/autotest_common.sh@10 -- # set +x 00:30:14.638 18:17:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:14.638 18:17:03 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:14.638 18:17:03 -- nvmf/common.sh@717 -- # local ip 00:30:14.638 18:17:03 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:14.638 18:17:03 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:14.638 18:17:03 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:14.638 18:17:03 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:14.638 18:17:03 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:14.638 18:17:03 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:14.638 18:17:03 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:14.638 18:17:03 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:14.638 18:17:03 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:14.638 18:17:03 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:30:14.638 18:17:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:14.638 18:17:03 -- common/autotest_common.sh@10 -- # set +x 00:30:14.638 nvme0n1 00:30:14.638 18:17:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:14.638 18:17:03 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:14.638 18:17:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:14.638 18:17:03 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:14.638 18:17:03 -- common/autotest_common.sh@10 -- # set +x 00:30:14.639 18:17:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:14.639 18:17:03 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:14.639 18:17:03 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:14.639 18:17:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:14.639 18:17:03 -- common/autotest_common.sh@10 -- # set +x 00:30:14.639 18:17:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:14.639 18:17:03 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:14.639 18:17:03 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:30:14.639 18:17:03 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:14.639 18:17:03 -- host/auth.sh@44 -- # digest=sha384 00:30:14.639 18:17:03 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:14.639 18:17:03 -- host/auth.sh@44 -- # keyid=4 00:30:14.639 18:17:03 -- host/auth.sh@45 -- # key=DHHC-1:03:MTA5NjU1MzE3MGM3MDZhMTRlNmIzODhmNjNhZTQyNTg4NjM5MWRiMDRjYjgyNTcwZGJhZWY4NTc3OGQyYTVjZERQXrk=: 00:30:14.639 18:17:03 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:30:14.639 18:17:03 -- host/auth.sh@48 -- # echo ffdhe2048 00:30:14.639 18:17:03 -- host/auth.sh@49 -- # echo DHHC-1:03:MTA5NjU1MzE3MGM3MDZhMTRlNmIzODhmNjNhZTQyNTg4NjM5MWRiMDRjYjgyNTcwZGJhZWY4NTc3OGQyYTVjZERQXrk=: 00:30:14.639 18:17:03 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 4 00:30:14.639 18:17:03 -- host/auth.sh@66 
-- # local digest dhgroup keyid 00:30:14.639 18:17:03 -- host/auth.sh@68 -- # digest=sha384 00:30:14.639 18:17:03 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:30:14.639 18:17:03 -- host/auth.sh@68 -- # keyid=4 00:30:14.639 18:17:03 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:30:14.639 18:17:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:14.639 18:17:03 -- common/autotest_common.sh@10 -- # set +x 00:30:14.639 18:17:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:14.639 18:17:03 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:14.639 18:17:03 -- nvmf/common.sh@717 -- # local ip 00:30:14.639 18:17:03 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:14.639 18:17:03 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:14.639 18:17:03 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:14.639 18:17:03 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:14.639 18:17:03 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:14.639 18:17:03 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:14.639 18:17:03 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:14.639 18:17:03 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:14.639 18:17:03 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:14.639 18:17:03 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:14.639 18:17:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:14.639 18:17:03 -- common/autotest_common.sh@10 -- # set +x 00:30:14.901 nvme0n1 00:30:14.901 18:17:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:14.901 18:17:03 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:14.901 18:17:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:14.901 18:17:03 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:14.901 18:17:03 -- common/autotest_common.sh@10 -- # set +x 00:30:14.901 18:17:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:14.901 18:17:03 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:14.901 18:17:03 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:14.901 18:17:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:14.901 18:17:03 -- common/autotest_common.sh@10 -- # set +x 00:30:14.901 18:17:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:14.901 18:17:03 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:30:14.901 18:17:03 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:14.901 18:17:03 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:30:14.901 18:17:03 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:14.901 18:17:03 -- host/auth.sh@44 -- # digest=sha384 00:30:14.901 18:17:03 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:14.901 18:17:03 -- host/auth.sh@44 -- # keyid=0 00:30:14.901 18:17:03 -- host/auth.sh@45 -- # key=DHHC-1:00:OWU0MjAzMWRmYWVjZjYyOTI2Njg4ZTk5Mjg2NDlkMWM3inbY: 00:30:14.901 18:17:03 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:30:14.901 18:17:03 -- host/auth.sh@48 -- # echo ffdhe3072 00:30:14.901 18:17:03 -- host/auth.sh@49 -- # echo DHHC-1:00:OWU0MjAzMWRmYWVjZjYyOTI2Njg4ZTk5Mjg2NDlkMWM3inbY: 00:30:14.901 18:17:03 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 0 00:30:14.901 18:17:03 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:14.901 18:17:03 -- host/auth.sh@68 -- # 
digest=sha384 00:30:14.901 18:17:03 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:30:14.901 18:17:03 -- host/auth.sh@68 -- # keyid=0 00:30:14.901 18:17:03 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:30:14.901 18:17:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:14.901 18:17:03 -- common/autotest_common.sh@10 -- # set +x 00:30:14.901 18:17:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:14.901 18:17:03 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:14.901 18:17:03 -- nvmf/common.sh@717 -- # local ip 00:30:14.901 18:17:03 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:14.901 18:17:03 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:14.901 18:17:03 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:14.901 18:17:03 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:14.901 18:17:03 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:14.901 18:17:03 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:14.901 18:17:03 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:14.901 18:17:03 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:14.901 18:17:03 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:14.901 18:17:03 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:30:14.901 18:17:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:14.901 18:17:03 -- common/autotest_common.sh@10 -- # set +x 00:30:15.160 nvme0n1 00:30:15.160 18:17:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:15.160 18:17:03 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:15.160 18:17:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:15.160 18:17:03 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:15.160 18:17:03 -- common/autotest_common.sh@10 -- # set +x 00:30:15.160 18:17:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:15.160 18:17:03 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:15.160 18:17:03 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:15.160 18:17:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:15.160 18:17:03 -- common/autotest_common.sh@10 -- # set +x 00:30:15.160 18:17:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:15.160 18:17:04 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:15.160 18:17:04 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:30:15.160 18:17:04 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:15.160 18:17:04 -- host/auth.sh@44 -- # digest=sha384 00:30:15.160 18:17:04 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:15.160 18:17:04 -- host/auth.sh@44 -- # keyid=1 00:30:15.160 18:17:04 -- host/auth.sh@45 -- # key=DHHC-1:00:NzhlYzc5MzZkN2YxNDkwYTU0MjgyYjZhMmVhODg4MTA2ODA1YTZlNzkwNTM0OTBl8OWd+Q==: 00:30:15.160 18:17:04 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:30:15.160 18:17:04 -- host/auth.sh@48 -- # echo ffdhe3072 00:30:15.160 18:17:04 -- host/auth.sh@49 -- # echo DHHC-1:00:NzhlYzc5MzZkN2YxNDkwYTU0MjgyYjZhMmVhODg4MTA2ODA1YTZlNzkwNTM0OTBl8OWd+Q==: 00:30:15.160 18:17:04 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 1 00:30:15.160 18:17:04 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:15.160 18:17:04 -- host/auth.sh@68 -- # digest=sha384 00:30:15.160 18:17:04 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:30:15.160 18:17:04 -- host/auth.sh@68 
-- # keyid=1 00:30:15.160 18:17:04 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:30:15.160 18:17:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:15.160 18:17:04 -- common/autotest_common.sh@10 -- # set +x 00:30:15.160 18:17:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:15.160 18:17:04 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:15.160 18:17:04 -- nvmf/common.sh@717 -- # local ip 00:30:15.160 18:17:04 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:15.160 18:17:04 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:15.160 18:17:04 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:15.160 18:17:04 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:15.160 18:17:04 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:15.160 18:17:04 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:15.160 18:17:04 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:15.160 18:17:04 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:15.160 18:17:04 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:15.160 18:17:04 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:30:15.160 18:17:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:15.160 18:17:04 -- common/autotest_common.sh@10 -- # set +x 00:30:15.418 nvme0n1 00:30:15.418 18:17:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:15.418 18:17:04 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:15.418 18:17:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:15.418 18:17:04 -- common/autotest_common.sh@10 -- # set +x 00:30:15.418 18:17:04 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:15.418 18:17:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:15.418 18:17:04 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:15.418 18:17:04 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:15.418 18:17:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:15.418 18:17:04 -- common/autotest_common.sh@10 -- # set +x 00:30:15.418 18:17:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:15.418 18:17:04 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:15.418 18:17:04 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:30:15.418 18:17:04 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:15.418 18:17:04 -- host/auth.sh@44 -- # digest=sha384 00:30:15.418 18:17:04 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:15.418 18:17:04 -- host/auth.sh@44 -- # keyid=2 00:30:15.418 18:17:04 -- host/auth.sh@45 -- # key=DHHC-1:01:ZWE2OTU5OWRmMWEyNGZlMzg1NzE5YTM4MTlhNDVlNTmx4RsQ: 00:30:15.418 18:17:04 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:30:15.418 18:17:04 -- host/auth.sh@48 -- # echo ffdhe3072 00:30:15.418 18:17:04 -- host/auth.sh@49 -- # echo DHHC-1:01:ZWE2OTU5OWRmMWEyNGZlMzg1NzE5YTM4MTlhNDVlNTmx4RsQ: 00:30:15.418 18:17:04 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 2 00:30:15.418 18:17:04 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:15.418 18:17:04 -- host/auth.sh@68 -- # digest=sha384 00:30:15.418 18:17:04 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:30:15.418 18:17:04 -- host/auth.sh@68 -- # keyid=2 00:30:15.418 18:17:04 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:30:15.418 18:17:04 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:30:15.418 18:17:04 -- common/autotest_common.sh@10 -- # set +x 00:30:15.418 18:17:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:15.418 18:17:04 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:15.418 18:17:04 -- nvmf/common.sh@717 -- # local ip 00:30:15.418 18:17:04 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:15.418 18:17:04 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:15.418 18:17:04 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:15.418 18:17:04 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:15.418 18:17:04 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:15.418 18:17:04 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:15.418 18:17:04 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:15.418 18:17:04 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:15.418 18:17:04 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:15.418 18:17:04 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:30:15.418 18:17:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:15.418 18:17:04 -- common/autotest_common.sh@10 -- # set +x 00:30:15.676 nvme0n1 00:30:15.676 18:17:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:15.676 18:17:04 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:15.676 18:17:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:15.676 18:17:04 -- common/autotest_common.sh@10 -- # set +x 00:30:15.677 18:17:04 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:15.677 18:17:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:15.677 18:17:04 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:15.677 18:17:04 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:15.677 18:17:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:15.677 18:17:04 -- common/autotest_common.sh@10 -- # set +x 00:30:15.677 18:17:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:15.677 18:17:04 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:15.677 18:17:04 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:30:15.677 18:17:04 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:15.677 18:17:04 -- host/auth.sh@44 -- # digest=sha384 00:30:15.677 18:17:04 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:15.677 18:17:04 -- host/auth.sh@44 -- # keyid=3 00:30:15.677 18:17:04 -- host/auth.sh@45 -- # key=DHHC-1:02:Y2ExOGFhNWJkYjI4ZWEwMzRkMmZhN2MyNDIzZDU1OWQ5M2MwNTE4MTRiZDFkMWRlDS+lew==: 00:30:15.677 18:17:04 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:30:15.677 18:17:04 -- host/auth.sh@48 -- # echo ffdhe3072 00:30:15.677 18:17:04 -- host/auth.sh@49 -- # echo DHHC-1:02:Y2ExOGFhNWJkYjI4ZWEwMzRkMmZhN2MyNDIzZDU1OWQ5M2MwNTE4MTRiZDFkMWRlDS+lew==: 00:30:15.677 18:17:04 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 3 00:30:15.677 18:17:04 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:15.677 18:17:04 -- host/auth.sh@68 -- # digest=sha384 00:30:15.677 18:17:04 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:30:15.677 18:17:04 -- host/auth.sh@68 -- # keyid=3 00:30:15.677 18:17:04 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:30:15.677 18:17:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:15.677 18:17:04 -- common/autotest_common.sh@10 -- # set +x 
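A note on the key strings cycling through this sweep: each secret follows the NVMe DH-HMAC-CHAP representation DHHC-1:<t>:<base64 payload>:, where the decoded payload is the secret followed by a 4-byte CRC32, and <t> conventionally indicates how the secret was transformed (00 = cleartext; 01/02/03 = SHA-256/384/512, which here decode to 32/48/64-byte secrets). Note that keyid 0 and keyid 1 both carry 00-class keys above, so the keyid is not the same thing as <t>. A quick sanity check on the keyid=2 secret seen just above (a sketch; only the key string is taken from the trace):

```bash
# Inspect a DHHC-1 secret from the trace. The base64 payload decodes to
# secret || CRC32, so the decoded byte count minus 4 is the secret length.
key='DHHC-1:01:ZWE2OTU5OWRmMWEyNGZlMzg1NzE5YTM4MTlhNDVlNTmx4RsQ:'
payload=${key#DHHC-1:??:}   # strip the "DHHC-1:<t>:" prefix
payload=${payload%:}        # and the trailing colon
echo -n "$payload" | base64 -d | wc -c   # prints 36: 32-byte secret + 4-byte CRC32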
00:30:15.677 18:17:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:15.677 18:17:04 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:15.677 18:17:04 -- nvmf/common.sh@717 -- # local ip 00:30:15.677 18:17:04 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:15.677 18:17:04 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:15.677 18:17:04 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:15.677 18:17:04 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:15.677 18:17:04 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:15.677 18:17:04 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:15.677 18:17:04 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:15.677 18:17:04 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:15.677 18:17:04 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:15.677 18:17:04 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:30:15.677 18:17:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:15.677 18:17:04 -- common/autotest_common.sh@10 -- # set +x 00:30:15.935 nvme0n1 00:30:15.935 18:17:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:15.935 18:17:04 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:15.935 18:17:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:15.935 18:17:04 -- common/autotest_common.sh@10 -- # set +x 00:30:15.935 18:17:04 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:15.935 18:17:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:15.935 18:17:04 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:15.935 18:17:04 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:15.935 18:17:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:15.935 18:17:04 -- common/autotest_common.sh@10 -- # set +x 00:30:15.935 18:17:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:15.935 18:17:04 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:15.935 18:17:04 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:30:15.935 18:17:04 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:15.935 18:17:04 -- host/auth.sh@44 -- # digest=sha384 00:30:15.935 18:17:04 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:15.935 18:17:04 -- host/auth.sh@44 -- # keyid=4 00:30:15.935 18:17:04 -- host/auth.sh@45 -- # key=DHHC-1:03:MTA5NjU1MzE3MGM3MDZhMTRlNmIzODhmNjNhZTQyNTg4NjM5MWRiMDRjYjgyNTcwZGJhZWY4NTc3OGQyYTVjZERQXrk=: 00:30:15.935 18:17:04 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:30:15.935 18:17:04 -- host/auth.sh@48 -- # echo ffdhe3072 00:30:15.935 18:17:04 -- host/auth.sh@49 -- # echo DHHC-1:03:MTA5NjU1MzE3MGM3MDZhMTRlNmIzODhmNjNhZTQyNTg4NjM5MWRiMDRjYjgyNTcwZGJhZWY4NTc3OGQyYTVjZERQXrk=: 00:30:15.935 18:17:04 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 4 00:30:15.935 18:17:04 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:15.935 18:17:04 -- host/auth.sh@68 -- # digest=sha384 00:30:15.935 18:17:04 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:30:15.935 18:17:04 -- host/auth.sh@68 -- # keyid=4 00:30:15.935 18:17:04 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:30:15.935 18:17:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:15.935 18:17:04 -- common/autotest_common.sh@10 -- # set +x 00:30:15.935 18:17:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
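Every connect_authenticate expansion above reduces to the same short RPC sequence. A minimal standalone sketch of that host-side flow, assuming $SPDK_DIR (hypothetical name) points at an SPDK checkout whose scripts/rpc.py reaches the initiator's RPC socket, and that key0..key4 were registered earlier in the run:

```bash
#!/usr/bin/env bash
# Sketch of one connect_authenticate iteration as traced above.
rpc_cmd() { "$SPDK_DIR/scripts/rpc.py" "$@"; }

digest=sha384 dhgroup=ffdhe3072 keyid=3

# Restrict the initiator to a single digest/DH-group pair, so the attach
# below can only succeed by negotiating exactly that combination.
rpc_cmd bdev_nvme_set_options \
        --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Attach with DH-HMAC-CHAP; get_main_ns_ip resolved to 10.0.0.1 for tcp.
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key$keyid"

# Confirm a controller appeared, then detach before the next combination.
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
rpc_cmd bdev_nvme_detach_controller nvme0
```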
00:30:15.935 18:17:04 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:15.935 18:17:04 -- nvmf/common.sh@717 -- # local ip 00:30:15.935 18:17:04 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:15.935 18:17:04 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:15.935 18:17:04 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:15.935 18:17:04 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:15.935 18:17:04 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:15.935 18:17:04 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:15.935 18:17:04 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:15.935 18:17:04 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:15.935 18:17:04 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:15.935 18:17:04 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:15.935 18:17:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:15.935 18:17:04 -- common/autotest_common.sh@10 -- # set +x 00:30:16.192 nvme0n1 00:30:16.192 18:17:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:16.192 18:17:04 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:16.192 18:17:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:16.192 18:17:04 -- common/autotest_common.sh@10 -- # set +x 00:30:16.192 18:17:04 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:16.192 18:17:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:16.192 18:17:04 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:16.192 18:17:04 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:16.192 18:17:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:16.192 18:17:04 -- common/autotest_common.sh@10 -- # set +x 00:30:16.192 18:17:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:16.192 18:17:04 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:30:16.192 18:17:04 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:16.192 18:17:04 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:30:16.192 18:17:04 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:16.192 18:17:04 -- host/auth.sh@44 -- # digest=sha384 00:30:16.192 18:17:04 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:16.192 18:17:04 -- host/auth.sh@44 -- # keyid=0 00:30:16.192 18:17:04 -- host/auth.sh@45 -- # key=DHHC-1:00:OWU0MjAzMWRmYWVjZjYyOTI2Njg4ZTk5Mjg2NDlkMWM3inbY: 00:30:16.192 18:17:04 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:30:16.192 18:17:04 -- host/auth.sh@48 -- # echo ffdhe4096 00:30:16.192 18:17:04 -- host/auth.sh@49 -- # echo DHHC-1:00:OWU0MjAzMWRmYWVjZjYyOTI2Njg4ZTk5Mjg2NDlkMWM3inbY: 00:30:16.192 18:17:04 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 0 00:30:16.192 18:17:04 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:16.192 18:17:04 -- host/auth.sh@68 -- # digest=sha384 00:30:16.192 18:17:04 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:30:16.192 18:17:04 -- host/auth.sh@68 -- # keyid=0 00:30:16.192 18:17:04 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:30:16.192 18:17:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:16.192 18:17:04 -- common/autotest_common.sh@10 -- # set +x 00:30:16.192 18:17:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:16.192 18:17:04 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:16.192 18:17:04 -- 
nvmf/common.sh@717 -- # local ip 00:30:16.192 18:17:04 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:16.192 18:17:04 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:16.192 18:17:04 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:16.192 18:17:04 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:16.192 18:17:04 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:16.192 18:17:04 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:16.192 18:17:04 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:16.192 18:17:04 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:16.192 18:17:04 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:16.192 18:17:04 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:30:16.192 18:17:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:16.192 18:17:04 -- common/autotest_common.sh@10 -- # set +x 00:30:16.448 nvme0n1 00:30:16.448 18:17:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:16.448 18:17:05 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:16.449 18:17:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:16.449 18:17:05 -- common/autotest_common.sh@10 -- # set +x 00:30:16.449 18:17:05 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:16.449 18:17:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:16.449 18:17:05 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:16.449 18:17:05 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:16.449 18:17:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:16.449 18:17:05 -- common/autotest_common.sh@10 -- # set +x 00:30:16.449 18:17:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:16.449 18:17:05 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:16.449 18:17:05 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:30:16.449 18:17:05 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:16.449 18:17:05 -- host/auth.sh@44 -- # digest=sha384 00:30:16.449 18:17:05 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:16.449 18:17:05 -- host/auth.sh@44 -- # keyid=1 00:30:16.449 18:17:05 -- host/auth.sh@45 -- # key=DHHC-1:00:NzhlYzc5MzZkN2YxNDkwYTU0MjgyYjZhMmVhODg4MTA2ODA1YTZlNzkwNTM0OTBl8OWd+Q==: 00:30:16.449 18:17:05 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:30:16.449 18:17:05 -- host/auth.sh@48 -- # echo ffdhe4096 00:30:16.449 18:17:05 -- host/auth.sh@49 -- # echo DHHC-1:00:NzhlYzc5MzZkN2YxNDkwYTU0MjgyYjZhMmVhODg4MTA2ODA1YTZlNzkwNTM0OTBl8OWd+Q==: 00:30:16.449 18:17:05 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 1 00:30:16.449 18:17:05 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:16.449 18:17:05 -- host/auth.sh@68 -- # digest=sha384 00:30:16.449 18:17:05 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:30:16.449 18:17:05 -- host/auth.sh@68 -- # keyid=1 00:30:16.449 18:17:05 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:30:16.449 18:17:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:16.449 18:17:05 -- common/autotest_common.sh@10 -- # set +x 00:30:16.449 18:17:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:16.449 18:17:05 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:16.449 18:17:05 -- nvmf/common.sh@717 -- # local ip 00:30:16.449 18:17:05 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:16.449 18:17:05 
-- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:16.449 18:17:05 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:16.449 18:17:05 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:16.449 18:17:05 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:16.449 18:17:05 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:16.449 18:17:05 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:16.449 18:17:05 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:16.449 18:17:05 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:16.449 18:17:05 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:30:16.449 18:17:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:16.449 18:17:05 -- common/autotest_common.sh@10 -- # set +x 00:30:16.706 nvme0n1 00:30:16.706 18:17:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:16.706 18:17:05 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:16.706 18:17:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:16.706 18:17:05 -- common/autotest_common.sh@10 -- # set +x 00:30:16.706 18:17:05 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:16.706 18:17:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:16.964 18:17:05 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:16.964 18:17:05 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:16.964 18:17:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:16.964 18:17:05 -- common/autotest_common.sh@10 -- # set +x 00:30:16.964 18:17:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:16.964 18:17:05 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:16.964 18:17:05 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:30:16.964 18:17:05 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:16.964 18:17:05 -- host/auth.sh@44 -- # digest=sha384 00:30:16.964 18:17:05 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:16.964 18:17:05 -- host/auth.sh@44 -- # keyid=2 00:30:16.964 18:17:05 -- host/auth.sh@45 -- # key=DHHC-1:01:ZWE2OTU5OWRmMWEyNGZlMzg1NzE5YTM4MTlhNDVlNTmx4RsQ: 00:30:16.964 18:17:05 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:30:16.964 18:17:05 -- host/auth.sh@48 -- # echo ffdhe4096 00:30:16.964 18:17:05 -- host/auth.sh@49 -- # echo DHHC-1:01:ZWE2OTU5OWRmMWEyNGZlMzg1NzE5YTM4MTlhNDVlNTmx4RsQ: 00:30:16.964 18:17:05 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 2 00:30:16.964 18:17:05 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:16.964 18:17:05 -- host/auth.sh@68 -- # digest=sha384 00:30:16.964 18:17:05 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:30:16.964 18:17:05 -- host/auth.sh@68 -- # keyid=2 00:30:16.964 18:17:05 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:30:16.964 18:17:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:16.964 18:17:05 -- common/autotest_common.sh@10 -- # set +x 00:30:16.964 18:17:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:16.964 18:17:05 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:16.964 18:17:05 -- nvmf/common.sh@717 -- # local ip 00:30:16.965 18:17:05 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:16.965 18:17:05 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:16.965 18:17:05 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:16.965 18:17:05 -- 
nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:16.965 18:17:05 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:16.965 18:17:05 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:16.965 18:17:05 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:16.965 18:17:05 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:16.965 18:17:05 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:16.965 18:17:05 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:30:16.965 18:17:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:16.965 18:17:05 -- common/autotest_common.sh@10 -- # set +x 00:30:17.223 nvme0n1 00:30:17.223 18:17:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:17.223 18:17:06 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:17.223 18:17:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:17.223 18:17:06 -- common/autotest_common.sh@10 -- # set +x 00:30:17.223 18:17:06 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:17.223 18:17:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:17.223 18:17:06 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:17.223 18:17:06 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:17.223 18:17:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:17.223 18:17:06 -- common/autotest_common.sh@10 -- # set +x 00:30:17.223 18:17:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:17.223 18:17:06 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:17.223 18:17:06 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:30:17.223 18:17:06 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:17.223 18:17:06 -- host/auth.sh@44 -- # digest=sha384 00:30:17.223 18:17:06 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:17.223 18:17:06 -- host/auth.sh@44 -- # keyid=3 00:30:17.223 18:17:06 -- host/auth.sh@45 -- # key=DHHC-1:02:Y2ExOGFhNWJkYjI4ZWEwMzRkMmZhN2MyNDIzZDU1OWQ5M2MwNTE4MTRiZDFkMWRlDS+lew==: 00:30:17.223 18:17:06 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:30:17.223 18:17:06 -- host/auth.sh@48 -- # echo ffdhe4096 00:30:17.223 18:17:06 -- host/auth.sh@49 -- # echo DHHC-1:02:Y2ExOGFhNWJkYjI4ZWEwMzRkMmZhN2MyNDIzZDU1OWQ5M2MwNTE4MTRiZDFkMWRlDS+lew==: 00:30:17.223 18:17:06 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 3 00:30:17.223 18:17:06 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:17.223 18:17:06 -- host/auth.sh@68 -- # digest=sha384 00:30:17.223 18:17:06 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:30:17.223 18:17:06 -- host/auth.sh@68 -- # keyid=3 00:30:17.223 18:17:06 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:30:17.223 18:17:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:17.223 18:17:06 -- common/autotest_common.sh@10 -- # set +x 00:30:17.223 18:17:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:17.223 18:17:06 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:17.223 18:17:06 -- nvmf/common.sh@717 -- # local ip 00:30:17.223 18:17:06 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:17.223 18:17:06 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:17.223 18:17:06 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:17.223 18:17:06 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:17.223 18:17:06 -- nvmf/common.sh@723 -- # [[ -z 
tcp ]] 00:30:17.223 18:17:06 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:17.223 18:17:06 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:17.223 18:17:06 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:17.223 18:17:06 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:17.223 18:17:06 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:30:17.223 18:17:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:17.223 18:17:06 -- common/autotest_common.sh@10 -- # set +x 00:30:17.481 nvme0n1 00:30:17.481 18:17:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:17.481 18:17:06 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:17.481 18:17:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:17.481 18:17:06 -- common/autotest_common.sh@10 -- # set +x 00:30:17.481 18:17:06 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:17.481 18:17:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:17.739 18:17:06 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:17.739 18:17:06 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:17.739 18:17:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:17.739 18:17:06 -- common/autotest_common.sh@10 -- # set +x 00:30:17.739 18:17:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:17.739 18:17:06 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:17.739 18:17:06 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:30:17.739 18:17:06 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:17.739 18:17:06 -- host/auth.sh@44 -- # digest=sha384 00:30:17.739 18:17:06 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:17.739 18:17:06 -- host/auth.sh@44 -- # keyid=4 00:30:17.739 18:17:06 -- host/auth.sh@45 -- # key=DHHC-1:03:MTA5NjU1MzE3MGM3MDZhMTRlNmIzODhmNjNhZTQyNTg4NjM5MWRiMDRjYjgyNTcwZGJhZWY4NTc3OGQyYTVjZERQXrk=: 00:30:17.739 18:17:06 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:30:17.739 18:17:06 -- host/auth.sh@48 -- # echo ffdhe4096 00:30:17.739 18:17:06 -- host/auth.sh@49 -- # echo DHHC-1:03:MTA5NjU1MzE3MGM3MDZhMTRlNmIzODhmNjNhZTQyNTg4NjM5MWRiMDRjYjgyNTcwZGJhZWY4NTc3OGQyYTVjZERQXrk=: 00:30:17.739 18:17:06 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 4 00:30:17.739 18:17:06 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:17.739 18:17:06 -- host/auth.sh@68 -- # digest=sha384 00:30:17.739 18:17:06 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:30:17.739 18:17:06 -- host/auth.sh@68 -- # keyid=4 00:30:17.739 18:17:06 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:30:17.739 18:17:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:17.739 18:17:06 -- common/autotest_common.sh@10 -- # set +x 00:30:17.739 18:17:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:17.739 18:17:06 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:17.739 18:17:06 -- nvmf/common.sh@717 -- # local ip 00:30:17.739 18:17:06 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:17.739 18:17:06 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:17.739 18:17:06 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:17.739 18:17:06 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:17.739 18:17:06 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:17.739 18:17:06 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP 
]] 00:30:17.739 18:17:06 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:17.739 18:17:06 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:17.739 18:17:06 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:17.739 18:17:06 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:17.739 18:17:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:17.739 18:17:06 -- common/autotest_common.sh@10 -- # set +x 00:30:17.997 nvme0n1 00:30:17.997 18:17:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:17.997 18:17:06 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:17.997 18:17:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:17.997 18:17:06 -- common/autotest_common.sh@10 -- # set +x 00:30:17.997 18:17:06 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:17.997 18:17:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:17.997 18:17:06 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:17.997 18:17:06 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:17.997 18:17:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:17.997 18:17:06 -- common/autotest_common.sh@10 -- # set +x 00:30:17.997 18:17:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:17.997 18:17:06 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:30:17.997 18:17:06 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:17.997 18:17:06 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:30:17.997 18:17:06 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:17.997 18:17:06 -- host/auth.sh@44 -- # digest=sha384 00:30:17.997 18:17:06 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:17.997 18:17:06 -- host/auth.sh@44 -- # keyid=0 00:30:17.997 18:17:06 -- host/auth.sh@45 -- # key=DHHC-1:00:OWU0MjAzMWRmYWVjZjYyOTI2Njg4ZTk5Mjg2NDlkMWM3inbY: 00:30:17.997 18:17:06 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:30:17.997 18:17:06 -- host/auth.sh@48 -- # echo ffdhe6144 00:30:17.997 18:17:06 -- host/auth.sh@49 -- # echo DHHC-1:00:OWU0MjAzMWRmYWVjZjYyOTI2Njg4ZTk5Mjg2NDlkMWM3inbY: 00:30:17.997 18:17:06 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 0 00:30:17.997 18:17:06 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:17.997 18:17:06 -- host/auth.sh@68 -- # digest=sha384 00:30:17.997 18:17:06 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:30:17.997 18:17:06 -- host/auth.sh@68 -- # keyid=0 00:30:17.997 18:17:06 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:30:17.997 18:17:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:17.997 18:17:06 -- common/autotest_common.sh@10 -- # set +x 00:30:17.997 18:17:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:17.997 18:17:06 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:17.997 18:17:06 -- nvmf/common.sh@717 -- # local ip 00:30:17.997 18:17:06 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:17.997 18:17:06 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:17.997 18:17:06 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:17.997 18:17:06 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:17.997 18:17:06 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:17.997 18:17:06 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:17.997 18:17:06 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:17.997 
18:17:06 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:17.997 18:17:06 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:17.997 18:17:06 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:30:17.997 18:17:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:18.255 18:17:06 -- common/autotest_common.sh@10 -- # set +x 00:30:18.821 nvme0n1 00:30:18.821 18:17:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:18.821 18:17:07 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:18.821 18:17:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:18.821 18:17:07 -- common/autotest_common.sh@10 -- # set +x 00:30:18.821 18:17:07 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:18.821 18:17:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:18.821 18:17:07 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:18.821 18:17:07 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:18.821 18:17:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:18.821 18:17:07 -- common/autotest_common.sh@10 -- # set +x 00:30:18.821 18:17:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:18.821 18:17:07 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:18.821 18:17:07 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:30:18.821 18:17:07 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:18.821 18:17:07 -- host/auth.sh@44 -- # digest=sha384 00:30:18.821 18:17:07 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:18.821 18:17:07 -- host/auth.sh@44 -- # keyid=1 00:30:18.821 18:17:07 -- host/auth.sh@45 -- # key=DHHC-1:00:NzhlYzc5MzZkN2YxNDkwYTU0MjgyYjZhMmVhODg4MTA2ODA1YTZlNzkwNTM0OTBl8OWd+Q==: 00:30:18.821 18:17:07 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:30:18.821 18:17:07 -- host/auth.sh@48 -- # echo ffdhe6144 00:30:18.821 18:17:07 -- host/auth.sh@49 -- # echo DHHC-1:00:NzhlYzc5MzZkN2YxNDkwYTU0MjgyYjZhMmVhODg4MTA2ODA1YTZlNzkwNTM0OTBl8OWd+Q==: 00:30:18.821 18:17:07 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 1 00:30:18.821 18:17:07 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:18.821 18:17:07 -- host/auth.sh@68 -- # digest=sha384 00:30:18.821 18:17:07 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:30:18.821 18:17:07 -- host/auth.sh@68 -- # keyid=1 00:30:18.821 18:17:07 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:30:18.821 18:17:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:18.821 18:17:07 -- common/autotest_common.sh@10 -- # set +x 00:30:18.821 18:17:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:18.821 18:17:07 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:18.821 18:17:07 -- nvmf/common.sh@717 -- # local ip 00:30:18.821 18:17:07 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:18.821 18:17:07 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:18.821 18:17:07 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:18.821 18:17:07 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:18.821 18:17:07 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:18.821 18:17:07 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:18.821 18:17:07 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:18.821 18:17:07 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:18.821 18:17:07 -- nvmf/common.sh@731 -- # echo 10.0.0.1 
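The get_main_ns_ip expansions that keep recurring (nvmf/common.sh@717-731) resolve the initiator address indirectly: the transport selects the *name* of an environment variable, which is then dereferenced. A condensed reconstruction from the traced statements, assuming TEST_TRANSPORT=tcp and NVMF_INITIATOR_IP=10.0.0.1 are set by the surrounding nvmf test environment:

```bash
# Condensed reconstruction of get_main_ns_ip from the expanded trace.
get_main_ns_ip() {
    local ip
    local -A ip_candidates=(
        [rdma]=NVMF_FIRST_TARGET_IP
        [tcp]=NVMF_INITIATOR_IP
    )

    # Both -z tests appear on the same trace line (@723): transport set,
    # and a candidate variable name mapped for it.
    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}
    # Indirect expansion: the value of $NVMF_INITIATOR_IP, i.e. 10.0.0.1 here.
    [[ -z ${!ip} ]] && return 1
    echo "${!ip}"
}
```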
00:30:18.821 18:17:07 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:30:18.821 18:17:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:18.821 18:17:07 -- common/autotest_common.sh@10 -- # set +x 00:30:19.387 nvme0n1 00:30:19.387 18:17:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:19.387 18:17:08 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:19.387 18:17:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:19.387 18:17:08 -- common/autotest_common.sh@10 -- # set +x 00:30:19.387 18:17:08 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:19.387 18:17:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:19.387 18:17:08 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:19.387 18:17:08 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:19.387 18:17:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:19.387 18:17:08 -- common/autotest_common.sh@10 -- # set +x 00:30:19.387 18:17:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:19.387 18:17:08 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:19.387 18:17:08 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:30:19.387 18:17:08 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:19.387 18:17:08 -- host/auth.sh@44 -- # digest=sha384 00:30:19.387 18:17:08 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:19.387 18:17:08 -- host/auth.sh@44 -- # keyid=2 00:30:19.387 18:17:08 -- host/auth.sh@45 -- # key=DHHC-1:01:ZWE2OTU5OWRmMWEyNGZlMzg1NzE5YTM4MTlhNDVlNTmx4RsQ: 00:30:19.387 18:17:08 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:30:19.387 18:17:08 -- host/auth.sh@48 -- # echo ffdhe6144 00:30:19.387 18:17:08 -- host/auth.sh@49 -- # echo DHHC-1:01:ZWE2OTU5OWRmMWEyNGZlMzg1NzE5YTM4MTlhNDVlNTmx4RsQ: 00:30:19.387 18:17:08 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 2 00:30:19.387 18:17:08 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:19.387 18:17:08 -- host/auth.sh@68 -- # digest=sha384 00:30:19.387 18:17:08 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:30:19.387 18:17:08 -- host/auth.sh@68 -- # keyid=2 00:30:19.387 18:17:08 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:30:19.387 18:17:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:19.387 18:17:08 -- common/autotest_common.sh@10 -- # set +x 00:30:19.387 18:17:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:19.387 18:17:08 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:19.387 18:17:08 -- nvmf/common.sh@717 -- # local ip 00:30:19.387 18:17:08 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:19.387 18:17:08 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:19.387 18:17:08 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:19.387 18:17:08 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:19.387 18:17:08 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:19.387 18:17:08 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:19.387 18:17:08 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:19.387 18:17:08 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:19.387 18:17:08 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:19.387 18:17:08 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:30:19.387 18:17:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:19.387 18:17:08 -- common/autotest_common.sh@10 -- # set +x 00:30:19.953 nvme0n1 00:30:19.953 18:17:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:19.953 18:17:08 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:19.953 18:17:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:19.953 18:17:08 -- common/autotest_common.sh@10 -- # set +x 00:30:19.953 18:17:08 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:19.953 18:17:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:19.953 18:17:08 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:19.953 18:17:08 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:19.953 18:17:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:19.953 18:17:08 -- common/autotest_common.sh@10 -- # set +x 00:30:19.953 18:17:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:19.953 18:17:08 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:19.953 18:17:08 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:30:19.954 18:17:08 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:19.954 18:17:08 -- host/auth.sh@44 -- # digest=sha384 00:30:19.954 18:17:08 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:19.954 18:17:08 -- host/auth.sh@44 -- # keyid=3 00:30:19.954 18:17:08 -- host/auth.sh@45 -- # key=DHHC-1:02:Y2ExOGFhNWJkYjI4ZWEwMzRkMmZhN2MyNDIzZDU1OWQ5M2MwNTE4MTRiZDFkMWRlDS+lew==: 00:30:19.954 18:17:08 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:30:19.954 18:17:08 -- host/auth.sh@48 -- # echo ffdhe6144 00:30:19.954 18:17:08 -- host/auth.sh@49 -- # echo DHHC-1:02:Y2ExOGFhNWJkYjI4ZWEwMzRkMmZhN2MyNDIzZDU1OWQ5M2MwNTE4MTRiZDFkMWRlDS+lew==: 00:30:19.954 18:17:08 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 3 00:30:19.954 18:17:08 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:19.954 18:17:08 -- host/auth.sh@68 -- # digest=sha384 00:30:19.954 18:17:08 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:30:19.954 18:17:08 -- host/auth.sh@68 -- # keyid=3 00:30:19.954 18:17:08 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:30:19.954 18:17:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:19.954 18:17:08 -- common/autotest_common.sh@10 -- # set +x 00:30:19.954 18:17:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:19.954 18:17:08 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:19.954 18:17:08 -- nvmf/common.sh@717 -- # local ip 00:30:19.954 18:17:08 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:19.954 18:17:08 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:19.954 18:17:08 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:19.954 18:17:08 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:19.954 18:17:08 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:19.954 18:17:08 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:19.954 18:17:08 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:19.954 18:17:08 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:19.954 18:17:08 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:19.954 18:17:08 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:30:19.954 18:17:08 -- common/autotest_common.sh@549 -- # xtrace_disable 
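On the target side, each nvmet_auth_set_key call echoes three values: 'hmac(<digest>)', the DH group name, and the DHHC-1 secret. The redirection targets are not visible in the xtrace output; the sketch below assumes they are the kernel nvmet configfs attributes for the allowed-host entry, a guess consistent with the 'hmac(sha384)' string format the kernel expects:

```bash
# Hedged reconstruction: the trace shows only the echo payloads, not where
# they are written. The /sys/kernel/config/nvmet/hosts/<hostnqn>/dhchap_*
# attributes are assumed destinations.
nvmet_auth_set_key() {
    local digest=$1 dhgroup=$2 keyid=$3
    local key=${keys[$keyid]}   # keys[] is populated earlier in auth.sh
    local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

    echo "hmac($digest)" > "$host/dhchap_hash"
    echo "$dhgroup" > "$host/dhchap_dhgroup"
    echo "$key" > "$host/dhchap_key"
}
```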
00:30:19.954 18:17:08 -- common/autotest_common.sh@10 -- # set +x 00:30:20.520 nvme0n1 00:30:20.520 18:17:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:20.520 18:17:09 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:20.520 18:17:09 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:20.520 18:17:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:20.520 18:17:09 -- common/autotest_common.sh@10 -- # set +x 00:30:20.520 18:17:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:20.520 18:17:09 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:20.520 18:17:09 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:20.520 18:17:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:20.520 18:17:09 -- common/autotest_common.sh@10 -- # set +x 00:30:20.520 18:17:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:20.520 18:17:09 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:20.520 18:17:09 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:30:20.520 18:17:09 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:20.520 18:17:09 -- host/auth.sh@44 -- # digest=sha384 00:30:20.520 18:17:09 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:20.520 18:17:09 -- host/auth.sh@44 -- # keyid=4 00:30:20.520 18:17:09 -- host/auth.sh@45 -- # key=DHHC-1:03:MTA5NjU1MzE3MGM3MDZhMTRlNmIzODhmNjNhZTQyNTg4NjM5MWRiMDRjYjgyNTcwZGJhZWY4NTc3OGQyYTVjZERQXrk=: 00:30:20.520 18:17:09 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:30:20.520 18:17:09 -- host/auth.sh@48 -- # echo ffdhe6144 00:30:20.520 18:17:09 -- host/auth.sh@49 -- # echo DHHC-1:03:MTA5NjU1MzE3MGM3MDZhMTRlNmIzODhmNjNhZTQyNTg4NjM5MWRiMDRjYjgyNTcwZGJhZWY4NTc3OGQyYTVjZERQXrk=: 00:30:20.520 18:17:09 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 4 00:30:20.520 18:17:09 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:20.520 18:17:09 -- host/auth.sh@68 -- # digest=sha384 00:30:20.520 18:17:09 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:30:20.520 18:17:09 -- host/auth.sh@68 -- # keyid=4 00:30:20.520 18:17:09 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:30:20.520 18:17:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:20.520 18:17:09 -- common/autotest_common.sh@10 -- # set +x 00:30:20.520 18:17:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:20.520 18:17:09 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:20.520 18:17:09 -- nvmf/common.sh@717 -- # local ip 00:30:20.520 18:17:09 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:20.520 18:17:09 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:20.520 18:17:09 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:20.520 18:17:09 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:20.520 18:17:09 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:20.520 18:17:09 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:20.520 18:17:09 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:20.520 18:17:09 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:20.520 18:17:09 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:20.520 18:17:09 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:20.520 18:17:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:20.520 18:17:09 -- common/autotest_common.sh@10 -- # set +x 00:30:21.085 
nvme0n1 00:30:21.086 18:17:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:21.086 18:17:09 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:21.086 18:17:09 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:21.086 18:17:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:21.086 18:17:09 -- common/autotest_common.sh@10 -- # set +x 00:30:21.086 18:17:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:21.344 18:17:10 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:21.344 18:17:10 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:21.344 18:17:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:21.344 18:17:10 -- common/autotest_common.sh@10 -- # set +x 00:30:21.344 18:17:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:21.344 18:17:10 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:30:21.344 18:17:10 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:21.344 18:17:10 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:30:21.344 18:17:10 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:21.344 18:17:10 -- host/auth.sh@44 -- # digest=sha384 00:30:21.344 18:17:10 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:21.344 18:17:10 -- host/auth.sh@44 -- # keyid=0 00:30:21.344 18:17:10 -- host/auth.sh@45 -- # key=DHHC-1:00:OWU0MjAzMWRmYWVjZjYyOTI2Njg4ZTk5Mjg2NDlkMWM3inbY: 00:30:21.344 18:17:10 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:30:21.344 18:17:10 -- host/auth.sh@48 -- # echo ffdhe8192 00:30:21.344 18:17:10 -- host/auth.sh@49 -- # echo DHHC-1:00:OWU0MjAzMWRmYWVjZjYyOTI2Njg4ZTk5Mjg2NDlkMWM3inbY: 00:30:21.344 18:17:10 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 0 00:30:21.344 18:17:10 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:21.344 18:17:10 -- host/auth.sh@68 -- # digest=sha384 00:30:21.344 18:17:10 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:30:21.344 18:17:10 -- host/auth.sh@68 -- # keyid=0 00:30:21.344 18:17:10 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:30:21.344 18:17:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:21.344 18:17:10 -- common/autotest_common.sh@10 -- # set +x 00:30:21.344 18:17:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:21.344 18:17:10 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:21.344 18:17:10 -- nvmf/common.sh@717 -- # local ip 00:30:21.344 18:17:10 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:21.344 18:17:10 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:21.344 18:17:10 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:21.344 18:17:10 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:21.344 18:17:10 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:21.344 18:17:10 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:21.344 18:17:10 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:21.344 18:17:10 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:21.344 18:17:10 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:21.344 18:17:10 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:30:21.344 18:17:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:21.344 18:17:10 -- common/autotest_common.sh@10 -- # set +x 00:30:22.277 nvme0n1 00:30:22.277 18:17:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
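Note: every connect_authenticate iteration in this stretch follows the same four-step pattern, and all of its commands appear verbatim in the trace: restrict the initiator to a single digest/dhgroup pair, attach with the matching key, confirm the controller came up, then tear it down. A condensed reconstruction from the xtrace output (rpc_cmd is SPDK's test wrapper that effectively runs scripts/rpc.py against the target's RPC socket; get_main_ns_ip resolves to 10.0.0.1 here by picking the NVMF_INITIATOR_IP candidate for tcp via indirect expansion, as the nvmf/common.sh@717-731 lines show):

  connect_authenticate() {  # reconstruction, not the verbatim host/auth.sh body
      local digest=$1 dhgroup=$2 keyid=$3
      rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
      rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
          -a "$(get_main_ns_ip)" -s 4420 \
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
          --dhchap-key "key$keyid"
      # The [[ nvme0 == \n\v\m\e\0 ]] checks in the trace compare the name reported
      # by bdev_nvme_get_controllers against the literal string nvme0 (the
      # backslash-escaping defeats glob interpretation on the right-hand side).
      [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
      rpc_cmd bdev_nvme_detach_controller nvme0
  }

The real function interleaves the xtrace_disable/restore helpers from autotest_common.sh, whose [[ 0 == 0 ]] status checks dominate the log between the substantive RPC calls.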
00:30:22.277 18:17:11 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:22.277 18:17:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:22.277 18:17:11 -- common/autotest_common.sh@10 -- # set +x 00:30:22.277 18:17:11 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:22.277 18:17:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:22.535 18:17:11 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:22.535 18:17:11 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:22.535 18:17:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:22.535 18:17:11 -- common/autotest_common.sh@10 -- # set +x 00:30:22.535 18:17:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:22.535 18:17:11 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:22.535 18:17:11 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:30:22.535 18:17:11 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:22.535 18:17:11 -- host/auth.sh@44 -- # digest=sha384 00:30:22.535 18:17:11 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:22.535 18:17:11 -- host/auth.sh@44 -- # keyid=1 00:30:22.535 18:17:11 -- host/auth.sh@45 -- # key=DHHC-1:00:NzhlYzc5MzZkN2YxNDkwYTU0MjgyYjZhMmVhODg4MTA2ODA1YTZlNzkwNTM0OTBl8OWd+Q==: 00:30:22.535 18:17:11 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:30:22.535 18:17:11 -- host/auth.sh@48 -- # echo ffdhe8192 00:30:22.535 18:17:11 -- host/auth.sh@49 -- # echo DHHC-1:00:NzhlYzc5MzZkN2YxNDkwYTU0MjgyYjZhMmVhODg4MTA2ODA1YTZlNzkwNTM0OTBl8OWd+Q==: 00:30:22.535 18:17:11 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 1 00:30:22.535 18:17:11 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:22.535 18:17:11 -- host/auth.sh@68 -- # digest=sha384 00:30:22.535 18:17:11 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:30:22.535 18:17:11 -- host/auth.sh@68 -- # keyid=1 00:30:22.535 18:17:11 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:30:22.535 18:17:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:22.535 18:17:11 -- common/autotest_common.sh@10 -- # set +x 00:30:22.535 18:17:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:22.535 18:17:11 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:22.535 18:17:11 -- nvmf/common.sh@717 -- # local ip 00:30:22.535 18:17:11 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:22.535 18:17:11 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:22.535 18:17:11 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:22.535 18:17:11 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:22.535 18:17:11 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:22.535 18:17:11 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:22.535 18:17:11 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:22.535 18:17:11 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:22.535 18:17:11 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:22.535 18:17:11 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:30:22.535 18:17:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:22.535 18:17:11 -- common/autotest_common.sh@10 -- # set +x 00:30:23.468 nvme0n1 00:30:23.468 18:17:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:23.468 18:17:12 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:23.468 18:17:12 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:30:23.468 18:17:12 -- common/autotest_common.sh@10 -- # set +x 00:30:23.468 18:17:12 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:23.468 18:17:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:23.468 18:17:12 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:23.468 18:17:12 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:23.468 18:17:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:23.468 18:17:12 -- common/autotest_common.sh@10 -- # set +x 00:30:23.468 18:17:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:23.468 18:17:12 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:23.468 18:17:12 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:30:23.468 18:17:12 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:23.468 18:17:12 -- host/auth.sh@44 -- # digest=sha384 00:30:23.468 18:17:12 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:23.468 18:17:12 -- host/auth.sh@44 -- # keyid=2 00:30:23.468 18:17:12 -- host/auth.sh@45 -- # key=DHHC-1:01:ZWE2OTU5OWRmMWEyNGZlMzg1NzE5YTM4MTlhNDVlNTmx4RsQ: 00:30:23.468 18:17:12 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:30:23.468 18:17:12 -- host/auth.sh@48 -- # echo ffdhe8192 00:30:23.468 18:17:12 -- host/auth.sh@49 -- # echo DHHC-1:01:ZWE2OTU5OWRmMWEyNGZlMzg1NzE5YTM4MTlhNDVlNTmx4RsQ: 00:30:23.468 18:17:12 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 2 00:30:23.468 18:17:12 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:23.468 18:17:12 -- host/auth.sh@68 -- # digest=sha384 00:30:23.468 18:17:12 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:30:23.468 18:17:12 -- host/auth.sh@68 -- # keyid=2 00:30:23.468 18:17:12 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:30:23.468 18:17:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:23.468 18:17:12 -- common/autotest_common.sh@10 -- # set +x 00:30:23.468 18:17:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:23.468 18:17:12 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:23.468 18:17:12 -- nvmf/common.sh@717 -- # local ip 00:30:23.468 18:17:12 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:23.468 18:17:12 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:23.468 18:17:12 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:23.468 18:17:12 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:23.468 18:17:12 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:23.468 18:17:12 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:23.468 18:17:12 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:23.468 18:17:12 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:23.468 18:17:12 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:23.468 18:17:12 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:30:23.468 18:17:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:23.468 18:17:12 -- common/autotest_common.sh@10 -- # set +x 00:30:24.846 nvme0n1 00:30:24.846 18:17:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:24.846 18:17:13 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:24.846 18:17:13 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:24.846 18:17:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:24.846 18:17:13 -- common/autotest_common.sh@10 
-- # set +x 00:30:24.846 18:17:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:24.846 18:17:13 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:24.846 18:17:13 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:24.846 18:17:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:24.846 18:17:13 -- common/autotest_common.sh@10 -- # set +x 00:30:24.846 18:17:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:24.846 18:17:13 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:24.846 18:17:13 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:30:24.846 18:17:13 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:24.846 18:17:13 -- host/auth.sh@44 -- # digest=sha384 00:30:24.846 18:17:13 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:24.846 18:17:13 -- host/auth.sh@44 -- # keyid=3 00:30:24.846 18:17:13 -- host/auth.sh@45 -- # key=DHHC-1:02:Y2ExOGFhNWJkYjI4ZWEwMzRkMmZhN2MyNDIzZDU1OWQ5M2MwNTE4MTRiZDFkMWRlDS+lew==: 00:30:24.846 18:17:13 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:30:24.846 18:17:13 -- host/auth.sh@48 -- # echo ffdhe8192 00:30:24.846 18:17:13 -- host/auth.sh@49 -- # echo DHHC-1:02:Y2ExOGFhNWJkYjI4ZWEwMzRkMmZhN2MyNDIzZDU1OWQ5M2MwNTE4MTRiZDFkMWRlDS+lew==: 00:30:24.846 18:17:13 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 3 00:30:24.846 18:17:13 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:24.846 18:17:13 -- host/auth.sh@68 -- # digest=sha384 00:30:24.846 18:17:13 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:30:24.846 18:17:13 -- host/auth.sh@68 -- # keyid=3 00:30:24.846 18:17:13 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:30:24.846 18:17:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:24.846 18:17:13 -- common/autotest_common.sh@10 -- # set +x 00:30:24.846 18:17:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:24.846 18:17:13 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:24.846 18:17:13 -- nvmf/common.sh@717 -- # local ip 00:30:24.846 18:17:13 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:24.846 18:17:13 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:24.846 18:17:13 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:24.846 18:17:13 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:24.846 18:17:13 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:24.846 18:17:13 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:24.846 18:17:13 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:24.846 18:17:13 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:24.846 18:17:13 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:24.846 18:17:13 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:30:24.846 18:17:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:24.846 18:17:13 -- common/autotest_common.sh@10 -- # set +x 00:30:25.781 nvme0n1 00:30:25.781 18:17:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:25.781 18:17:14 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:25.781 18:17:14 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:25.781 18:17:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:25.781 18:17:14 -- common/autotest_common.sh@10 -- # set +x 00:30:26.041 18:17:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:26.041 18:17:14 -- 
host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:26.041 18:17:14 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:26.041 18:17:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:26.041 18:17:14 -- common/autotest_common.sh@10 -- # set +x 00:30:26.041 18:17:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:26.041 18:17:14 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:26.041 18:17:14 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:30:26.041 18:17:14 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:26.041 18:17:14 -- host/auth.sh@44 -- # digest=sha384 00:30:26.041 18:17:14 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:26.041 18:17:14 -- host/auth.sh@44 -- # keyid=4 00:30:26.041 18:17:14 -- host/auth.sh@45 -- # key=DHHC-1:03:MTA5NjU1MzE3MGM3MDZhMTRlNmIzODhmNjNhZTQyNTg4NjM5MWRiMDRjYjgyNTcwZGJhZWY4NTc3OGQyYTVjZERQXrk=: 00:30:26.041 18:17:14 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:30:26.041 18:17:14 -- host/auth.sh@48 -- # echo ffdhe8192 00:30:26.041 18:17:14 -- host/auth.sh@49 -- # echo DHHC-1:03:MTA5NjU1MzE3MGM3MDZhMTRlNmIzODhmNjNhZTQyNTg4NjM5MWRiMDRjYjgyNTcwZGJhZWY4NTc3OGQyYTVjZERQXrk=: 00:30:26.041 18:17:14 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 4 00:30:26.041 18:17:14 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:26.041 18:17:14 -- host/auth.sh@68 -- # digest=sha384 00:30:26.041 18:17:14 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:30:26.041 18:17:14 -- host/auth.sh@68 -- # keyid=4 00:30:26.041 18:17:14 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:30:26.041 18:17:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:26.041 18:17:14 -- common/autotest_common.sh@10 -- # set +x 00:30:26.041 18:17:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:26.041 18:17:14 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:26.041 18:17:14 -- nvmf/common.sh@717 -- # local ip 00:30:26.041 18:17:14 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:26.041 18:17:14 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:26.041 18:17:14 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:26.041 18:17:14 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:26.041 18:17:14 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:26.041 18:17:14 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:26.041 18:17:14 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:26.041 18:17:14 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:26.041 18:17:14 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:26.041 18:17:14 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:26.041 18:17:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:26.041 18:17:14 -- common/autotest_common.sh@10 -- # set +x 00:30:26.975 nvme0n1 00:30:26.975 18:17:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:26.975 18:17:15 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:26.975 18:17:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:26.975 18:17:15 -- common/autotest_common.sh@10 -- # set +x 00:30:26.975 18:17:15 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:26.975 18:17:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:26.975 18:17:15 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:26.975 18:17:15 -- 
host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:26.975 18:17:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:26.975 18:17:15 -- common/autotest_common.sh@10 -- # set +x 00:30:26.975 18:17:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:26.975 18:17:15 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:30:26.975 18:17:15 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:30:26.975 18:17:15 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:26.975 18:17:15 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:30:26.975 18:17:15 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:26.975 18:17:15 -- host/auth.sh@44 -- # digest=sha512 00:30:26.975 18:17:15 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:26.975 18:17:15 -- host/auth.sh@44 -- # keyid=0 00:30:26.975 18:17:15 -- host/auth.sh@45 -- # key=DHHC-1:00:OWU0MjAzMWRmYWVjZjYyOTI2Njg4ZTk5Mjg2NDlkMWM3inbY: 00:30:26.975 18:17:15 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:30:26.975 18:17:15 -- host/auth.sh@48 -- # echo ffdhe2048 00:30:26.975 18:17:15 -- host/auth.sh@49 -- # echo DHHC-1:00:OWU0MjAzMWRmYWVjZjYyOTI2Njg4ZTk5Mjg2NDlkMWM3inbY: 00:30:26.975 18:17:15 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 0 00:30:26.975 18:17:15 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:26.975 18:17:15 -- host/auth.sh@68 -- # digest=sha512 00:30:26.975 18:17:15 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:30:26.975 18:17:15 -- host/auth.sh@68 -- # keyid=0 00:30:26.975 18:17:15 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:30:26.975 18:17:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:26.975 18:17:15 -- common/autotest_common.sh@10 -- # set +x 00:30:26.975 18:17:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:26.975 18:17:15 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:26.975 18:17:15 -- nvmf/common.sh@717 -- # local ip 00:30:26.975 18:17:15 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:26.975 18:17:15 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:26.975 18:17:15 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:26.975 18:17:15 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:26.975 18:17:15 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:26.975 18:17:15 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:26.975 18:17:15 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:26.975 18:17:15 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:26.975 18:17:15 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:26.975 18:17:15 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:30:26.975 18:17:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:26.975 18:17:15 -- common/autotest_common.sh@10 -- # set +x 00:30:27.233 nvme0n1 00:30:27.233 18:17:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:27.233 18:17:15 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:27.233 18:17:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:27.233 18:17:15 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:27.233 18:17:15 -- common/autotest_common.sh@10 -- # set +x 00:30:27.233 18:17:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:27.233 18:17:15 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:27.233 18:17:15 -- 
host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:27.233 18:17:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:27.233 18:17:15 -- common/autotest_common.sh@10 -- # set +x 00:30:27.233 18:17:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:27.233 18:17:16 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:27.233 18:17:16 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:30:27.233 18:17:16 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:27.233 18:17:16 -- host/auth.sh@44 -- # digest=sha512 00:30:27.233 18:17:16 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:27.233 18:17:16 -- host/auth.sh@44 -- # keyid=1 00:30:27.233 18:17:16 -- host/auth.sh@45 -- # key=DHHC-1:00:NzhlYzc5MzZkN2YxNDkwYTU0MjgyYjZhMmVhODg4MTA2ODA1YTZlNzkwNTM0OTBl8OWd+Q==: 00:30:27.233 18:17:16 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:30:27.233 18:17:16 -- host/auth.sh@48 -- # echo ffdhe2048 00:30:27.233 18:17:16 -- host/auth.sh@49 -- # echo DHHC-1:00:NzhlYzc5MzZkN2YxNDkwYTU0MjgyYjZhMmVhODg4MTA2ODA1YTZlNzkwNTM0OTBl8OWd+Q==: 00:30:27.233 18:17:16 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 1 00:30:27.233 18:17:16 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:27.233 18:17:16 -- host/auth.sh@68 -- # digest=sha512 00:30:27.233 18:17:16 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:30:27.233 18:17:16 -- host/auth.sh@68 -- # keyid=1 00:30:27.233 18:17:16 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:30:27.233 18:17:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:27.233 18:17:16 -- common/autotest_common.sh@10 -- # set +x 00:30:27.233 18:17:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:27.233 18:17:16 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:27.233 18:17:16 -- nvmf/common.sh@717 -- # local ip 00:30:27.233 18:17:16 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:27.233 18:17:16 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:27.233 18:17:16 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:27.233 18:17:16 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:27.233 18:17:16 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:27.233 18:17:16 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:27.233 18:17:16 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:27.233 18:17:16 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:27.233 18:17:16 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:27.233 18:17:16 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:30:27.233 18:17:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:27.233 18:17:16 -- common/autotest_common.sh@10 -- # set +x 00:30:27.233 nvme0n1 00:30:27.233 18:17:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:27.233 18:17:16 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:27.233 18:17:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:27.233 18:17:16 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:27.233 18:17:16 -- common/autotest_common.sh@10 -- # set +x 00:30:27.233 18:17:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:27.233 18:17:16 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:27.233 18:17:16 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:27.233 18:17:16 -- common/autotest_common.sh@549 -- 
# xtrace_disable 00:30:27.233 18:17:16 -- common/autotest_common.sh@10 -- # set +x 00:30:27.492 18:17:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:27.492 18:17:16 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:27.492 18:17:16 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:30:27.492 18:17:16 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:27.492 18:17:16 -- host/auth.sh@44 -- # digest=sha512 00:30:27.492 18:17:16 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:27.492 18:17:16 -- host/auth.sh@44 -- # keyid=2 00:30:27.492 18:17:16 -- host/auth.sh@45 -- # key=DHHC-1:01:ZWE2OTU5OWRmMWEyNGZlMzg1NzE5YTM4MTlhNDVlNTmx4RsQ: 00:30:27.492 18:17:16 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:30:27.492 18:17:16 -- host/auth.sh@48 -- # echo ffdhe2048 00:30:27.492 18:17:16 -- host/auth.sh@49 -- # echo DHHC-1:01:ZWE2OTU5OWRmMWEyNGZlMzg1NzE5YTM4MTlhNDVlNTmx4RsQ: 00:30:27.492 18:17:16 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 2 00:30:27.492 18:17:16 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:27.492 18:17:16 -- host/auth.sh@68 -- # digest=sha512 00:30:27.492 18:17:16 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:30:27.492 18:17:16 -- host/auth.sh@68 -- # keyid=2 00:30:27.492 18:17:16 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:30:27.492 18:17:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:27.492 18:17:16 -- common/autotest_common.sh@10 -- # set +x 00:30:27.492 18:17:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:27.492 18:17:16 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:27.492 18:17:16 -- nvmf/common.sh@717 -- # local ip 00:30:27.492 18:17:16 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:27.492 18:17:16 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:27.492 18:17:16 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:27.492 18:17:16 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:27.492 18:17:16 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:27.492 18:17:16 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:27.492 18:17:16 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:27.492 18:17:16 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:27.492 18:17:16 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:27.492 18:17:16 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:30:27.492 18:17:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:27.492 18:17:16 -- common/autotest_common.sh@10 -- # set +x 00:30:27.492 nvme0n1 00:30:27.492 18:17:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:27.492 18:17:16 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:27.492 18:17:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:27.492 18:17:16 -- common/autotest_common.sh@10 -- # set +x 00:30:27.492 18:17:16 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:27.492 18:17:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:27.492 18:17:16 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:27.492 18:17:16 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:27.492 18:17:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:27.492 18:17:16 -- common/autotest_common.sh@10 -- # set +x 00:30:27.492 18:17:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:27.492 
18:17:16 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:27.492 18:17:16 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:30:27.492 18:17:16 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:27.492 18:17:16 -- host/auth.sh@44 -- # digest=sha512 00:30:27.492 18:17:16 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:27.492 18:17:16 -- host/auth.sh@44 -- # keyid=3 00:30:27.492 18:17:16 -- host/auth.sh@45 -- # key=DHHC-1:02:Y2ExOGFhNWJkYjI4ZWEwMzRkMmZhN2MyNDIzZDU1OWQ5M2MwNTE4MTRiZDFkMWRlDS+lew==: 00:30:27.492 18:17:16 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:30:27.492 18:17:16 -- host/auth.sh@48 -- # echo ffdhe2048 00:30:27.492 18:17:16 -- host/auth.sh@49 -- # echo DHHC-1:02:Y2ExOGFhNWJkYjI4ZWEwMzRkMmZhN2MyNDIzZDU1OWQ5M2MwNTE4MTRiZDFkMWRlDS+lew==: 00:30:27.493 18:17:16 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 3 00:30:27.493 18:17:16 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:27.493 18:17:16 -- host/auth.sh@68 -- # digest=sha512 00:30:27.493 18:17:16 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:30:27.493 18:17:16 -- host/auth.sh@68 -- # keyid=3 00:30:27.493 18:17:16 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:30:27.493 18:17:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:27.493 18:17:16 -- common/autotest_common.sh@10 -- # set +x 00:30:27.493 18:17:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:27.493 18:17:16 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:27.493 18:17:16 -- nvmf/common.sh@717 -- # local ip 00:30:27.493 18:17:16 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:27.493 18:17:16 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:27.493 18:17:16 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:27.493 18:17:16 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:27.493 18:17:16 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:27.493 18:17:16 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:27.493 18:17:16 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:27.493 18:17:16 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:27.493 18:17:16 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:27.493 18:17:16 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:30:27.493 18:17:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:27.493 18:17:16 -- common/autotest_common.sh@10 -- # set +x 00:30:27.751 nvme0n1 00:30:27.751 18:17:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:27.751 18:17:16 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:27.751 18:17:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:27.751 18:17:16 -- common/autotest_common.sh@10 -- # set +x 00:30:27.751 18:17:16 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:27.751 18:17:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:27.751 18:17:16 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:27.751 18:17:16 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:27.751 18:17:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:27.751 18:17:16 -- common/autotest_common.sh@10 -- # set +x 00:30:27.751 18:17:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:27.751 18:17:16 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:27.751 18:17:16 -- host/auth.sh@110 -- # 
nvmet_auth_set_key sha512 ffdhe2048 4 00:30:27.752 18:17:16 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:27.752 18:17:16 -- host/auth.sh@44 -- # digest=sha512 00:30:27.752 18:17:16 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:27.752 18:17:16 -- host/auth.sh@44 -- # keyid=4 00:30:27.752 18:17:16 -- host/auth.sh@45 -- # key=DHHC-1:03:MTA5NjU1MzE3MGM3MDZhMTRlNmIzODhmNjNhZTQyNTg4NjM5MWRiMDRjYjgyNTcwZGJhZWY4NTc3OGQyYTVjZERQXrk=: 00:30:27.752 18:17:16 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:30:27.752 18:17:16 -- host/auth.sh@48 -- # echo ffdhe2048 00:30:27.752 18:17:16 -- host/auth.sh@49 -- # echo DHHC-1:03:MTA5NjU1MzE3MGM3MDZhMTRlNmIzODhmNjNhZTQyNTg4NjM5MWRiMDRjYjgyNTcwZGJhZWY4NTc3OGQyYTVjZERQXrk=: 00:30:27.752 18:17:16 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 4 00:30:27.752 18:17:16 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:27.752 18:17:16 -- host/auth.sh@68 -- # digest=sha512 00:30:27.752 18:17:16 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:30:27.752 18:17:16 -- host/auth.sh@68 -- # keyid=4 00:30:27.752 18:17:16 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:30:27.752 18:17:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:27.752 18:17:16 -- common/autotest_common.sh@10 -- # set +x 00:30:27.752 18:17:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:27.752 18:17:16 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:27.752 18:17:16 -- nvmf/common.sh@717 -- # local ip 00:30:27.752 18:17:16 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:27.752 18:17:16 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:27.752 18:17:16 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:27.752 18:17:16 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:27.752 18:17:16 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:27.752 18:17:16 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:27.752 18:17:16 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:27.752 18:17:16 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:27.752 18:17:16 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:27.752 18:17:16 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:27.752 18:17:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:27.752 18:17:16 -- common/autotest_common.sh@10 -- # set +x 00:30:28.014 nvme0n1 00:30:28.014 18:17:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:28.014 18:17:16 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:28.014 18:17:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:28.014 18:17:16 -- common/autotest_common.sh@10 -- # set +x 00:30:28.014 18:17:16 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:28.014 18:17:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:28.014 18:17:16 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:28.014 18:17:16 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:28.014 18:17:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:28.014 18:17:16 -- common/autotest_common.sh@10 -- # set +x 00:30:28.014 18:17:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:28.014 18:17:16 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:30:28.014 18:17:16 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:28.014 18:17:16 -- host/auth.sh@110 -- # 
nvmet_auth_set_key sha512 ffdhe3072 0 00:30:28.014 18:17:16 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:28.014 18:17:16 -- host/auth.sh@44 -- # digest=sha512 00:30:28.014 18:17:16 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:28.014 18:17:16 -- host/auth.sh@44 -- # keyid=0 00:30:28.014 18:17:16 -- host/auth.sh@45 -- # key=DHHC-1:00:OWU0MjAzMWRmYWVjZjYyOTI2Njg4ZTk5Mjg2NDlkMWM3inbY: 00:30:28.014 18:17:16 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:30:28.014 18:17:16 -- host/auth.sh@48 -- # echo ffdhe3072 00:30:28.014 18:17:16 -- host/auth.sh@49 -- # echo DHHC-1:00:OWU0MjAzMWRmYWVjZjYyOTI2Njg4ZTk5Mjg2NDlkMWM3inbY: 00:30:28.014 18:17:16 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 0 00:30:28.014 18:17:16 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:28.014 18:17:16 -- host/auth.sh@68 -- # digest=sha512 00:30:28.014 18:17:16 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:30:28.014 18:17:16 -- host/auth.sh@68 -- # keyid=0 00:30:28.014 18:17:16 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:30:28.014 18:17:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:28.014 18:17:16 -- common/autotest_common.sh@10 -- # set +x 00:30:28.014 18:17:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:28.014 18:17:16 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:28.014 18:17:16 -- nvmf/common.sh@717 -- # local ip 00:30:28.014 18:17:16 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:28.014 18:17:16 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:28.014 18:17:16 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:28.014 18:17:16 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:28.014 18:17:16 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:28.014 18:17:16 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:28.014 18:17:16 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:28.014 18:17:16 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:28.014 18:17:16 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:28.014 18:17:16 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:30:28.014 18:17:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:28.014 18:17:16 -- common/autotest_common.sh@10 -- # set +x 00:30:28.313 nvme0n1 00:30:28.313 18:17:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:28.313 18:17:16 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:28.313 18:17:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:28.313 18:17:16 -- common/autotest_common.sh@10 -- # set +x 00:30:28.313 18:17:16 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:28.313 18:17:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:28.313 18:17:17 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:28.313 18:17:17 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:28.313 18:17:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:28.313 18:17:17 -- common/autotest_common.sh@10 -- # set +x 00:30:28.313 18:17:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:28.313 18:17:17 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:28.313 18:17:17 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:30:28.313 18:17:17 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:28.313 18:17:17 -- host/auth.sh@44 -- # 
digest=sha512 00:30:28.313 18:17:17 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:28.313 18:17:17 -- host/auth.sh@44 -- # keyid=1 00:30:28.313 18:17:17 -- host/auth.sh@45 -- # key=DHHC-1:00:NzhlYzc5MzZkN2YxNDkwYTU0MjgyYjZhMmVhODg4MTA2ODA1YTZlNzkwNTM0OTBl8OWd+Q==: 00:30:28.313 18:17:17 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:30:28.313 18:17:17 -- host/auth.sh@48 -- # echo ffdhe3072 00:30:28.313 18:17:17 -- host/auth.sh@49 -- # echo DHHC-1:00:NzhlYzc5MzZkN2YxNDkwYTU0MjgyYjZhMmVhODg4MTA2ODA1YTZlNzkwNTM0OTBl8OWd+Q==: 00:30:28.313 18:17:17 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 1 00:30:28.313 18:17:17 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:28.313 18:17:17 -- host/auth.sh@68 -- # digest=sha512 00:30:28.313 18:17:17 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:30:28.313 18:17:17 -- host/auth.sh@68 -- # keyid=1 00:30:28.313 18:17:17 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:30:28.313 18:17:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:28.313 18:17:17 -- common/autotest_common.sh@10 -- # set +x 00:30:28.313 18:17:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:28.313 18:17:17 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:28.313 18:17:17 -- nvmf/common.sh@717 -- # local ip 00:30:28.313 18:17:17 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:28.313 18:17:17 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:28.313 18:17:17 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:28.313 18:17:17 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:28.313 18:17:17 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:28.313 18:17:17 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:28.313 18:17:17 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:28.313 18:17:17 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:28.313 18:17:17 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:28.313 18:17:17 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:30:28.313 18:17:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:28.313 18:17:17 -- common/autotest_common.sh@10 -- # set +x 00:30:28.573 nvme0n1 00:30:28.573 18:17:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:28.573 18:17:17 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:28.573 18:17:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:28.573 18:17:17 -- common/autotest_common.sh@10 -- # set +x 00:30:28.573 18:17:17 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:28.573 18:17:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:28.573 18:17:17 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:28.573 18:17:17 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:28.573 18:17:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:28.573 18:17:17 -- common/autotest_common.sh@10 -- # set +x 00:30:28.573 18:17:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:28.573 18:17:17 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:28.573 18:17:17 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:30:28.573 18:17:17 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:28.573 18:17:17 -- host/auth.sh@44 -- # digest=sha512 00:30:28.573 18:17:17 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:28.573 18:17:17 -- host/auth.sh@44 
-- # keyid=2 00:30:28.573 18:17:17 -- host/auth.sh@45 -- # key=DHHC-1:01:ZWE2OTU5OWRmMWEyNGZlMzg1NzE5YTM4MTlhNDVlNTmx4RsQ: 00:30:28.573 18:17:17 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:30:28.573 18:17:17 -- host/auth.sh@48 -- # echo ffdhe3072 00:30:28.573 18:17:17 -- host/auth.sh@49 -- # echo DHHC-1:01:ZWE2OTU5OWRmMWEyNGZlMzg1NzE5YTM4MTlhNDVlNTmx4RsQ: 00:30:28.573 18:17:17 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 2 00:30:28.573 18:17:17 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:28.573 18:17:17 -- host/auth.sh@68 -- # digest=sha512 00:30:28.573 18:17:17 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:30:28.573 18:17:17 -- host/auth.sh@68 -- # keyid=2 00:30:28.573 18:17:17 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:30:28.573 18:17:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:28.573 18:17:17 -- common/autotest_common.sh@10 -- # set +x 00:30:28.573 18:17:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:28.573 18:17:17 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:28.573 18:17:17 -- nvmf/common.sh@717 -- # local ip 00:30:28.573 18:17:17 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:28.573 18:17:17 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:28.573 18:17:17 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:28.573 18:17:17 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:28.573 18:17:17 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:28.573 18:17:17 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:28.573 18:17:17 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:28.573 18:17:17 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:28.573 18:17:17 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:28.573 18:17:17 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:30:28.573 18:17:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:28.573 18:17:17 -- common/autotest_common.sh@10 -- # set +x 00:30:28.573 nvme0n1 00:30:28.573 18:17:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:28.573 18:17:17 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:28.573 18:17:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:28.573 18:17:17 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:28.573 18:17:17 -- common/autotest_common.sh@10 -- # set +x 00:30:28.832 18:17:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:28.832 18:17:17 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:28.832 18:17:17 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:28.832 18:17:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:28.832 18:17:17 -- common/autotest_common.sh@10 -- # set +x 00:30:28.832 18:17:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:28.832 18:17:17 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:28.832 18:17:17 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:30:28.832 18:17:17 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:28.832 18:17:17 -- host/auth.sh@44 -- # digest=sha512 00:30:28.832 18:17:17 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:28.832 18:17:17 -- host/auth.sh@44 -- # keyid=3 00:30:28.832 18:17:17 -- host/auth.sh@45 -- # key=DHHC-1:02:Y2ExOGFhNWJkYjI4ZWEwMzRkMmZhN2MyNDIzZDU1OWQ5M2MwNTE4MTRiZDFkMWRlDS+lew==: 00:30:28.832 18:17:17 
-- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:30:28.832 18:17:17 -- host/auth.sh@48 -- # echo ffdhe3072 00:30:28.832 18:17:17 -- host/auth.sh@49 -- # echo DHHC-1:02:Y2ExOGFhNWJkYjI4ZWEwMzRkMmZhN2MyNDIzZDU1OWQ5M2MwNTE4MTRiZDFkMWRlDS+lew==: 00:30:28.832 18:17:17 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 3 00:30:28.832 18:17:17 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:28.832 18:17:17 -- host/auth.sh@68 -- # digest=sha512 00:30:28.832 18:17:17 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:30:28.832 18:17:17 -- host/auth.sh@68 -- # keyid=3 00:30:28.832 18:17:17 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:30:28.832 18:17:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:28.832 18:17:17 -- common/autotest_common.sh@10 -- # set +x 00:30:28.832 18:17:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:28.832 18:17:17 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:28.832 18:17:17 -- nvmf/common.sh@717 -- # local ip 00:30:28.832 18:17:17 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:28.832 18:17:17 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:28.832 18:17:17 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:28.832 18:17:17 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:28.832 18:17:17 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:28.832 18:17:17 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:28.832 18:17:17 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:28.832 18:17:17 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:28.832 18:17:17 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:28.832 18:17:17 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:30:28.832 18:17:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:28.832 18:17:17 -- common/autotest_common.sh@10 -- # set +x 00:30:29.092 nvme0n1 00:30:29.092 18:17:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:29.092 18:17:17 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:29.092 18:17:17 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:29.092 18:17:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:29.092 18:17:17 -- common/autotest_common.sh@10 -- # set +x 00:30:29.092 18:17:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:29.092 18:17:17 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:29.092 18:17:17 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:29.092 18:17:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:29.092 18:17:17 -- common/autotest_common.sh@10 -- # set +x 00:30:29.092 18:17:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:29.092 18:17:17 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:29.092 18:17:17 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:30:29.092 18:17:17 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:29.092 18:17:17 -- host/auth.sh@44 -- # digest=sha512 00:30:29.092 18:17:17 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:29.092 18:17:17 -- host/auth.sh@44 -- # keyid=4 00:30:29.092 18:17:17 -- host/auth.sh@45 -- # key=DHHC-1:03:MTA5NjU1MzE3MGM3MDZhMTRlNmIzODhmNjNhZTQyNTg4NjM5MWRiMDRjYjgyNTcwZGJhZWY4NTc3OGQyYTVjZERQXrk=: 00:30:29.092 18:17:17 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:30:29.092 18:17:17 -- host/auth.sh@48 -- # echo 
ffdhe3072 00:30:29.092 18:17:17 -- host/auth.sh@49 -- # echo DHHC-1:03:MTA5NjU1MzE3MGM3MDZhMTRlNmIzODhmNjNhZTQyNTg4NjM5MWRiMDRjYjgyNTcwZGJhZWY4NTc3OGQyYTVjZERQXrk=: 00:30:29.092 18:17:17 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 4 00:30:29.092 18:17:17 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:29.092 18:17:17 -- host/auth.sh@68 -- # digest=sha512 00:30:29.092 18:17:17 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:30:29.092 18:17:17 -- host/auth.sh@68 -- # keyid=4 00:30:29.092 18:17:17 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:30:29.092 18:17:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:29.092 18:17:17 -- common/autotest_common.sh@10 -- # set +x 00:30:29.092 18:17:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:29.092 18:17:17 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:29.092 18:17:17 -- nvmf/common.sh@717 -- # local ip 00:30:29.092 18:17:17 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:29.092 18:17:17 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:29.092 18:17:17 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:29.092 18:17:17 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:29.092 18:17:17 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:29.092 18:17:17 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:29.092 18:17:17 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:29.092 18:17:17 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:29.092 18:17:17 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:29.092 18:17:17 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:29.092 18:17:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:29.092 18:17:17 -- common/autotest_common.sh@10 -- # set +x 00:30:29.092 nvme0n1 00:30:29.092 18:17:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:29.092 18:17:18 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:29.092 18:17:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:29.092 18:17:18 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:29.092 18:17:18 -- common/autotest_common.sh@10 -- # set +x 00:30:29.092 18:17:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:29.351 18:17:18 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:29.351 18:17:18 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:29.351 18:17:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:29.351 18:17:18 -- common/autotest_common.sh@10 -- # set +x 00:30:29.351 18:17:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:29.351 18:17:18 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:30:29.351 18:17:18 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:29.351 18:17:18 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:30:29.351 18:17:18 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:29.351 18:17:18 -- host/auth.sh@44 -- # digest=sha512 00:30:29.351 18:17:18 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:29.351 18:17:18 -- host/auth.sh@44 -- # keyid=0 00:30:29.351 18:17:18 -- host/auth.sh@45 -- # key=DHHC-1:00:OWU0MjAzMWRmYWVjZjYyOTI2Njg4ZTk5Mjg2NDlkMWM3inbY: 00:30:29.351 18:17:18 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:30:29.351 18:17:18 -- host/auth.sh@48 -- # echo ffdhe4096 00:30:29.351 18:17:18 -- 
host/auth.sh@49 -- # echo DHHC-1:00:OWU0MjAzMWRmYWVjZjYyOTI2Njg4ZTk5Mjg2NDlkMWM3inbY: 00:30:29.351 18:17:18 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 0 00:30:29.351 18:17:18 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:29.351 18:17:18 -- host/auth.sh@68 -- # digest=sha512 00:30:29.351 18:17:18 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:30:29.351 18:17:18 -- host/auth.sh@68 -- # keyid=0 00:30:29.351 18:17:18 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:30:29.351 18:17:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:29.351 18:17:18 -- common/autotest_common.sh@10 -- # set +x 00:30:29.351 18:17:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:29.351 18:17:18 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:29.351 18:17:18 -- nvmf/common.sh@717 -- # local ip 00:30:29.351 18:17:18 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:29.351 18:17:18 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:29.351 18:17:18 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:29.351 18:17:18 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:29.351 18:17:18 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:29.351 18:17:18 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:29.351 18:17:18 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:29.351 18:17:18 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:29.351 18:17:18 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:29.351 18:17:18 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:30:29.351 18:17:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:29.351 18:17:18 -- common/autotest_common.sh@10 -- # set +x 00:30:29.609 nvme0n1 00:30:29.609 18:17:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:29.609 18:17:18 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:29.609 18:17:18 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:29.609 18:17:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:29.609 18:17:18 -- common/autotest_common.sh@10 -- # set +x 00:30:29.609 18:17:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:29.609 18:17:18 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:29.609 18:17:18 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:29.609 18:17:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:29.609 18:17:18 -- common/autotest_common.sh@10 -- # set +x 00:30:29.609 18:17:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:29.609 18:17:18 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:29.609 18:17:18 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:30:29.609 18:17:18 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:29.609 18:17:18 -- host/auth.sh@44 -- # digest=sha512 00:30:29.609 18:17:18 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:29.609 18:17:18 -- host/auth.sh@44 -- # keyid=1 00:30:29.609 18:17:18 -- host/auth.sh@45 -- # key=DHHC-1:00:NzhlYzc5MzZkN2YxNDkwYTU0MjgyYjZhMmVhODg4MTA2ODA1YTZlNzkwNTM0OTBl8OWd+Q==: 00:30:29.609 18:17:18 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:30:29.609 18:17:18 -- host/auth.sh@48 -- # echo ffdhe4096 00:30:29.609 18:17:18 -- host/auth.sh@49 -- # echo DHHC-1:00:NzhlYzc5MzZkN2YxNDkwYTU0MjgyYjZhMmVhODg4MTA2ODA1YTZlNzkwNTM0OTBl8OWd+Q==: 00:30:29.609 18:17:18 -- 
host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 1 00:30:29.609 18:17:18 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:29.609 18:17:18 -- host/auth.sh@68 -- # digest=sha512 00:30:29.609 18:17:18 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:30:29.609 18:17:18 -- host/auth.sh@68 -- # keyid=1 00:30:29.609 18:17:18 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:30:29.609 18:17:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:29.609 18:17:18 -- common/autotest_common.sh@10 -- # set +x 00:30:29.609 18:17:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:29.609 18:17:18 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:29.609 18:17:18 -- nvmf/common.sh@717 -- # local ip 00:30:29.609 18:17:18 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:29.609 18:17:18 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:29.609 18:17:18 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:29.609 18:17:18 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:29.609 18:17:18 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:29.609 18:17:18 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:29.609 18:17:18 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:29.609 18:17:18 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:29.609 18:17:18 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:29.609 18:17:18 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:30:29.609 18:17:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:29.609 18:17:18 -- common/autotest_common.sh@10 -- # set +x 00:30:29.868 nvme0n1 00:30:29.868 18:17:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:29.868 18:17:18 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:29.868 18:17:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:29.868 18:17:18 -- common/autotest_common.sh@10 -- # set +x 00:30:29.868 18:17:18 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:29.868 18:17:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:29.868 18:17:18 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:29.868 18:17:18 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:29.868 18:17:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:29.868 18:17:18 -- common/autotest_common.sh@10 -- # set +x 00:30:29.868 18:17:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:29.868 18:17:18 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:29.868 18:17:18 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:30:30.127 18:17:18 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:30.127 18:17:18 -- host/auth.sh@44 -- # digest=sha512 00:30:30.127 18:17:18 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:30.127 18:17:18 -- host/auth.sh@44 -- # keyid=2 00:30:30.127 18:17:18 -- host/auth.sh@45 -- # key=DHHC-1:01:ZWE2OTU5OWRmMWEyNGZlMzg1NzE5YTM4MTlhNDVlNTmx4RsQ: 00:30:30.127 18:17:18 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:30:30.127 18:17:18 -- host/auth.sh@48 -- # echo ffdhe4096 00:30:30.127 18:17:18 -- host/auth.sh@49 -- # echo DHHC-1:01:ZWE2OTU5OWRmMWEyNGZlMzg1NzE5YTM4MTlhNDVlNTmx4RsQ: 00:30:30.127 18:17:18 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 2 00:30:30.127 18:17:18 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:30.127 18:17:18 -- 
host/auth.sh@68 -- # digest=sha512 00:30:30.127 18:17:18 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:30:30.127 18:17:18 -- host/auth.sh@68 -- # keyid=2 00:30:30.127 18:17:18 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:30:30.127 18:17:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:30.127 18:17:18 -- common/autotest_common.sh@10 -- # set +x 00:30:30.127 18:17:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:30.127 18:17:18 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:30.127 18:17:18 -- nvmf/common.sh@717 -- # local ip 00:30:30.127 18:17:18 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:30.127 18:17:18 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:30.127 18:17:18 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:30.127 18:17:18 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:30.127 18:17:18 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:30.127 18:17:18 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:30.127 18:17:18 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:30.127 18:17:18 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:30.127 18:17:18 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:30.127 18:17:18 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:30:30.127 18:17:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:30.127 18:17:18 -- common/autotest_common.sh@10 -- # set +x 00:30:30.386 nvme0n1 00:30:30.386 18:17:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:30.386 18:17:19 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:30.386 18:17:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:30.386 18:17:19 -- common/autotest_common.sh@10 -- # set +x 00:30:30.386 18:17:19 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:30.386 18:17:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:30.386 18:17:19 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:30.386 18:17:19 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:30.386 18:17:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:30.386 18:17:19 -- common/autotest_common.sh@10 -- # set +x 00:30:30.386 18:17:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:30.386 18:17:19 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:30.386 18:17:19 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:30:30.386 18:17:19 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:30.386 18:17:19 -- host/auth.sh@44 -- # digest=sha512 00:30:30.386 18:17:19 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:30.386 18:17:19 -- host/auth.sh@44 -- # keyid=3 00:30:30.386 18:17:19 -- host/auth.sh@45 -- # key=DHHC-1:02:Y2ExOGFhNWJkYjI4ZWEwMzRkMmZhN2MyNDIzZDU1OWQ5M2MwNTE4MTRiZDFkMWRlDS+lew==: 00:30:30.386 18:17:19 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:30:30.386 18:17:19 -- host/auth.sh@48 -- # echo ffdhe4096 00:30:30.386 18:17:19 -- host/auth.sh@49 -- # echo DHHC-1:02:Y2ExOGFhNWJkYjI4ZWEwMzRkMmZhN2MyNDIzZDU1OWQ5M2MwNTE4MTRiZDFkMWRlDS+lew==: 00:30:30.386 18:17:19 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 3 00:30:30.386 18:17:19 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:30.386 18:17:19 -- host/auth.sh@68 -- # digest=sha512 00:30:30.386 18:17:19 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:30:30.386 18:17:19 
-- host/auth.sh@68 -- # keyid=3 00:30:30.386 18:17:19 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:30:30.386 18:17:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:30.386 18:17:19 -- common/autotest_common.sh@10 -- # set +x 00:30:30.386 18:17:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:30.386 18:17:19 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:30.386 18:17:19 -- nvmf/common.sh@717 -- # local ip 00:30:30.386 18:17:19 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:30.386 18:17:19 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:30.386 18:17:19 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:30.386 18:17:19 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:30.386 18:17:19 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:30.386 18:17:19 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:30.386 18:17:19 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:30.386 18:17:19 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:30.386 18:17:19 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:30.386 18:17:19 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:30:30.386 18:17:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:30.386 18:17:19 -- common/autotest_common.sh@10 -- # set +x 00:30:30.645 nvme0n1 00:30:30.645 18:17:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:30.645 18:17:19 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:30.645 18:17:19 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:30.645 18:17:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:30.645 18:17:19 -- common/autotest_common.sh@10 -- # set +x 00:30:30.645 18:17:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:30.904 18:17:19 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:30.904 18:17:19 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:30.904 18:17:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:30.904 18:17:19 -- common/autotest_common.sh@10 -- # set +x 00:30:30.904 18:17:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:30.904 18:17:19 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:30.904 18:17:19 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:30:30.904 18:17:19 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:30.904 18:17:19 -- host/auth.sh@44 -- # digest=sha512 00:30:30.904 18:17:19 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:30.904 18:17:19 -- host/auth.sh@44 -- # keyid=4 00:30:30.904 18:17:19 -- host/auth.sh@45 -- # key=DHHC-1:03:MTA5NjU1MzE3MGM3MDZhMTRlNmIzODhmNjNhZTQyNTg4NjM5MWRiMDRjYjgyNTcwZGJhZWY4NTc3OGQyYTVjZERQXrk=: 00:30:30.904 18:17:19 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:30:30.904 18:17:19 -- host/auth.sh@48 -- # echo ffdhe4096 00:30:30.904 18:17:19 -- host/auth.sh@49 -- # echo DHHC-1:03:MTA5NjU1MzE3MGM3MDZhMTRlNmIzODhmNjNhZTQyNTg4NjM5MWRiMDRjYjgyNTcwZGJhZWY4NTc3OGQyYTVjZERQXrk=: 00:30:30.904 18:17:19 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 4 00:30:30.904 18:17:19 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:30.904 18:17:19 -- host/auth.sh@68 -- # digest=sha512 00:30:30.904 18:17:19 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:30:30.904 18:17:19 -- host/auth.sh@68 -- # keyid=4 00:30:30.904 18:17:19 -- host/auth.sh@69 -- # 
rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:30:30.904 18:17:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:30.904 18:17:19 -- common/autotest_common.sh@10 -- # set +x 00:30:30.904 18:17:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:30.904 18:17:19 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:30.904 18:17:19 -- nvmf/common.sh@717 -- # local ip 00:30:30.904 18:17:19 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:30.904 18:17:19 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:30.904 18:17:19 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:30.904 18:17:19 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:30.904 18:17:19 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:30.904 18:17:19 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:30.904 18:17:19 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:30.904 18:17:19 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:30.904 18:17:19 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:30.904 18:17:19 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:30.904 18:17:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:30.904 18:17:19 -- common/autotest_common.sh@10 -- # set +x 00:30:31.162 nvme0n1 00:30:31.163 18:17:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:31.163 18:17:19 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:31.163 18:17:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:31.163 18:17:19 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:31.163 18:17:19 -- common/autotest_common.sh@10 -- # set +x 00:30:31.163 18:17:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:31.163 18:17:19 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:31.163 18:17:19 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:31.163 18:17:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:31.163 18:17:19 -- common/autotest_common.sh@10 -- # set +x 00:30:31.163 18:17:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:31.163 18:17:19 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:30:31.163 18:17:19 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:31.163 18:17:19 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:30:31.163 18:17:19 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:31.163 18:17:19 -- host/auth.sh@44 -- # digest=sha512 00:30:31.163 18:17:19 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:31.163 18:17:19 -- host/auth.sh@44 -- # keyid=0 00:30:31.163 18:17:19 -- host/auth.sh@45 -- # key=DHHC-1:00:OWU0MjAzMWRmYWVjZjYyOTI2Njg4ZTk5Mjg2NDlkMWM3inbY: 00:30:31.163 18:17:19 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:30:31.163 18:17:19 -- host/auth.sh@48 -- # echo ffdhe6144 00:30:31.163 18:17:19 -- host/auth.sh@49 -- # echo DHHC-1:00:OWU0MjAzMWRmYWVjZjYyOTI2Njg4ZTk5Mjg2NDlkMWM3inbY: 00:30:31.163 18:17:19 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 0 00:30:31.163 18:17:19 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:31.163 18:17:19 -- host/auth.sh@68 -- # digest=sha512 00:30:31.163 18:17:19 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:30:31.163 18:17:19 -- host/auth.sh@68 -- # keyid=0 00:30:31.163 18:17:19 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 
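The trace above closes the sha512/ffdhe4096 pass and opens the ffdhe6144 pass: for every DH group, host/auth.sh@108-110 walks all five key IDs, re-keys the in-kernel nvmet target (nvmet_auth_set_key echoes the 'hmac(sha512)' transform, the DH group, and the DHHC-1 secret into the target's auth attributes), and connect_authenticate (@111) then performs a DH-HMAC-CHAP attach, checks the controller name, and detaches. The recurring ip_candidates blocks are get_main_ns_ip resolving the initiator address for the transport in use: NVMF_INITIATOR_IP (10.0.0.1) for tcp, NVMF_FIRST_TARGET_IP for rdma. Condensed into a sketch built from the commands visible in this trace (helper internals are simplified, not the literal script):

    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            # re-key the kernel target: digest transform, DH group, DHHC-1 secret
            nvmet_auth_set_key sha512 "$dhgroup" "$keyid"
            # configure the SPDK host to negotiate the same digest and DH group
            rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
            # authenticated connect, controller-name sanity check, teardown
            rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
                -a "$(get_main_ns_ip)" -s 4420 \
                -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
                --dhchap-key "key$keyid"
            [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
            rpc_cmd bdev_nvme_detach_controller nvme0
        done
    done

The secrets themselves follow the NVMe in-band authentication key format, DHHC-1:NN:<base64 secret>:, where the two-digit NN field records the hash the secret was generated against (00 for an unhashed secret, 01/02/03 for SHA-256/384/512), which is why the five keys here carry prefixes 00 through 03.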
00:30:31.163 18:17:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:31.163 18:17:19 -- common/autotest_common.sh@10 -- # set +x 00:30:31.163 18:17:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:31.163 18:17:20 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:31.163 18:17:20 -- nvmf/common.sh@717 -- # local ip 00:30:31.163 18:17:20 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:31.163 18:17:20 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:31.163 18:17:20 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:31.163 18:17:20 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:31.163 18:17:20 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:31.163 18:17:20 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:31.163 18:17:20 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:31.163 18:17:20 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:31.163 18:17:20 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:31.163 18:17:20 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:30:31.163 18:17:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:31.163 18:17:20 -- common/autotest_common.sh@10 -- # set +x 00:30:31.730 nvme0n1 00:30:31.730 18:17:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:31.730 18:17:20 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:31.730 18:17:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:31.730 18:17:20 -- common/autotest_common.sh@10 -- # set +x 00:30:31.730 18:17:20 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:31.731 18:17:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:31.731 18:17:20 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:31.731 18:17:20 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:31.731 18:17:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:31.731 18:17:20 -- common/autotest_common.sh@10 -- # set +x 00:30:31.731 18:17:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:31.731 18:17:20 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:31.731 18:17:20 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:30:31.731 18:17:20 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:31.990 18:17:20 -- host/auth.sh@44 -- # digest=sha512 00:30:31.991 18:17:20 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:31.991 18:17:20 -- host/auth.sh@44 -- # keyid=1 00:30:31.991 18:17:20 -- host/auth.sh@45 -- # key=DHHC-1:00:NzhlYzc5MzZkN2YxNDkwYTU0MjgyYjZhMmVhODg4MTA2ODA1YTZlNzkwNTM0OTBl8OWd+Q==: 00:30:31.991 18:17:20 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:30:31.991 18:17:20 -- host/auth.sh@48 -- # echo ffdhe6144 00:30:31.991 18:17:20 -- host/auth.sh@49 -- # echo DHHC-1:00:NzhlYzc5MzZkN2YxNDkwYTU0MjgyYjZhMmVhODg4MTA2ODA1YTZlNzkwNTM0OTBl8OWd+Q==: 00:30:31.991 18:17:20 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 1 00:30:31.991 18:17:20 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:31.991 18:17:20 -- host/auth.sh@68 -- # digest=sha512 00:30:31.991 18:17:20 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:30:31.991 18:17:20 -- host/auth.sh@68 -- # keyid=1 00:30:31.991 18:17:20 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:30:31.991 18:17:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:31.991 18:17:20 -- 
common/autotest_common.sh@10 -- # set +x 00:30:31.991 18:17:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:31.991 18:17:20 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:31.991 18:17:20 -- nvmf/common.sh@717 -- # local ip 00:30:31.991 18:17:20 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:31.991 18:17:20 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:31.991 18:17:20 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:31.991 18:17:20 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:31.991 18:17:20 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:31.991 18:17:20 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:31.991 18:17:20 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:31.991 18:17:20 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:31.991 18:17:20 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:31.991 18:17:20 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:30:31.991 18:17:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:31.991 18:17:20 -- common/autotest_common.sh@10 -- # set +x 00:30:32.561 nvme0n1 00:30:32.561 18:17:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:32.561 18:17:21 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:32.561 18:17:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:32.561 18:17:21 -- common/autotest_common.sh@10 -- # set +x 00:30:32.561 18:17:21 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:32.561 18:17:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:32.561 18:17:21 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:32.561 18:17:21 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:32.561 18:17:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:32.561 18:17:21 -- common/autotest_common.sh@10 -- # set +x 00:30:32.561 18:17:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:32.561 18:17:21 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:32.561 18:17:21 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:30:32.561 18:17:21 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:32.561 18:17:21 -- host/auth.sh@44 -- # digest=sha512 00:30:32.561 18:17:21 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:32.561 18:17:21 -- host/auth.sh@44 -- # keyid=2 00:30:32.561 18:17:21 -- host/auth.sh@45 -- # key=DHHC-1:01:ZWE2OTU5OWRmMWEyNGZlMzg1NzE5YTM4MTlhNDVlNTmx4RsQ: 00:30:32.561 18:17:21 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:30:32.561 18:17:21 -- host/auth.sh@48 -- # echo ffdhe6144 00:30:32.561 18:17:21 -- host/auth.sh@49 -- # echo DHHC-1:01:ZWE2OTU5OWRmMWEyNGZlMzg1NzE5YTM4MTlhNDVlNTmx4RsQ: 00:30:32.561 18:17:21 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 2 00:30:32.561 18:17:21 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:32.561 18:17:21 -- host/auth.sh@68 -- # digest=sha512 00:30:32.561 18:17:21 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:30:32.561 18:17:21 -- host/auth.sh@68 -- # keyid=2 00:30:32.561 18:17:21 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:30:32.561 18:17:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:32.561 18:17:21 -- common/autotest_common.sh@10 -- # set +x 00:30:32.561 18:17:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:32.561 18:17:21 -- host/auth.sh@70 -- # 
get_main_ns_ip 00:30:32.561 18:17:21 -- nvmf/common.sh@717 -- # local ip 00:30:32.561 18:17:21 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:32.561 18:17:21 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:32.561 18:17:21 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:32.561 18:17:21 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:32.561 18:17:21 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:32.561 18:17:21 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:32.561 18:17:21 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:32.561 18:17:21 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:32.561 18:17:21 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:32.561 18:17:21 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:30:32.561 18:17:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:32.561 18:17:21 -- common/autotest_common.sh@10 -- # set +x 00:30:33.128 nvme0n1 00:30:33.128 18:17:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:33.128 18:17:21 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:33.128 18:17:21 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:33.128 18:17:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:33.128 18:17:21 -- common/autotest_common.sh@10 -- # set +x 00:30:33.128 18:17:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:33.128 18:17:21 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:33.128 18:17:21 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:33.128 18:17:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:33.128 18:17:21 -- common/autotest_common.sh@10 -- # set +x 00:30:33.128 18:17:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:33.128 18:17:21 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:33.128 18:17:21 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:30:33.128 18:17:21 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:33.128 18:17:21 -- host/auth.sh@44 -- # digest=sha512 00:30:33.128 18:17:21 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:33.128 18:17:21 -- host/auth.sh@44 -- # keyid=3 00:30:33.128 18:17:21 -- host/auth.sh@45 -- # key=DHHC-1:02:Y2ExOGFhNWJkYjI4ZWEwMzRkMmZhN2MyNDIzZDU1OWQ5M2MwNTE4MTRiZDFkMWRlDS+lew==: 00:30:33.128 18:17:21 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:30:33.128 18:17:21 -- host/auth.sh@48 -- # echo ffdhe6144 00:30:33.128 18:17:21 -- host/auth.sh@49 -- # echo DHHC-1:02:Y2ExOGFhNWJkYjI4ZWEwMzRkMmZhN2MyNDIzZDU1OWQ5M2MwNTE4MTRiZDFkMWRlDS+lew==: 00:30:33.128 18:17:21 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 3 00:30:33.128 18:17:21 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:33.128 18:17:21 -- host/auth.sh@68 -- # digest=sha512 00:30:33.128 18:17:21 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:30:33.128 18:17:21 -- host/auth.sh@68 -- # keyid=3 00:30:33.128 18:17:21 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:30:33.128 18:17:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:33.128 18:17:21 -- common/autotest_common.sh@10 -- # set +x 00:30:33.128 18:17:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:33.128 18:17:21 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:33.128 18:17:21 -- nvmf/common.sh@717 -- # local ip 00:30:33.128 18:17:21 -- nvmf/common.sh@718 -- 
# ip_candidates=() 00:30:33.128 18:17:21 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:33.128 18:17:21 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:33.128 18:17:21 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:33.128 18:17:21 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:33.128 18:17:21 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:33.128 18:17:21 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:33.128 18:17:21 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:33.128 18:17:21 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:33.128 18:17:21 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:30:33.128 18:17:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:33.128 18:17:21 -- common/autotest_common.sh@10 -- # set +x 00:30:33.710 nvme0n1 00:30:33.710 18:17:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:33.710 18:17:22 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:33.710 18:17:22 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:33.710 18:17:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:33.710 18:17:22 -- common/autotest_common.sh@10 -- # set +x 00:30:33.710 18:17:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:33.710 18:17:22 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:33.710 18:17:22 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:33.710 18:17:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:33.710 18:17:22 -- common/autotest_common.sh@10 -- # set +x 00:30:33.710 18:17:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:33.710 18:17:22 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:33.710 18:17:22 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:30:33.710 18:17:22 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:33.710 18:17:22 -- host/auth.sh@44 -- # digest=sha512 00:30:33.710 18:17:22 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:33.710 18:17:22 -- host/auth.sh@44 -- # keyid=4 00:30:33.710 18:17:22 -- host/auth.sh@45 -- # key=DHHC-1:03:MTA5NjU1MzE3MGM3MDZhMTRlNmIzODhmNjNhZTQyNTg4NjM5MWRiMDRjYjgyNTcwZGJhZWY4NTc3OGQyYTVjZERQXrk=: 00:30:33.710 18:17:22 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:30:33.710 18:17:22 -- host/auth.sh@48 -- # echo ffdhe6144 00:30:33.711 18:17:22 -- host/auth.sh@49 -- # echo DHHC-1:03:MTA5NjU1MzE3MGM3MDZhMTRlNmIzODhmNjNhZTQyNTg4NjM5MWRiMDRjYjgyNTcwZGJhZWY4NTc3OGQyYTVjZERQXrk=: 00:30:33.711 18:17:22 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 4 00:30:33.711 18:17:22 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:33.711 18:17:22 -- host/auth.sh@68 -- # digest=sha512 00:30:33.711 18:17:22 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:30:33.711 18:17:22 -- host/auth.sh@68 -- # keyid=4 00:30:33.711 18:17:22 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:30:33.711 18:17:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:33.711 18:17:22 -- common/autotest_common.sh@10 -- # set +x 00:30:33.711 18:17:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:33.711 18:17:22 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:33.711 18:17:22 -- nvmf/common.sh@717 -- # local ip 00:30:33.711 18:17:22 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:33.711 18:17:22 -- nvmf/common.sh@718 -- # local -A 
ip_candidates 00:30:33.711 18:17:22 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:33.711 18:17:22 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:33.711 18:17:22 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:33.711 18:17:22 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:33.711 18:17:22 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:33.711 18:17:22 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:33.711 18:17:22 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:33.711 18:17:22 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:33.711 18:17:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:33.711 18:17:22 -- common/autotest_common.sh@10 -- # set +x 00:30:34.279 nvme0n1 00:30:34.279 18:17:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:34.279 18:17:23 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:34.279 18:17:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:34.279 18:17:23 -- common/autotest_common.sh@10 -- # set +x 00:30:34.279 18:17:23 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:34.279 18:17:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:34.279 18:17:23 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:34.279 18:17:23 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:34.279 18:17:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:34.279 18:17:23 -- common/autotest_common.sh@10 -- # set +x 00:30:34.538 18:17:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:34.538 18:17:23 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:30:34.538 18:17:23 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:34.538 18:17:23 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:30:34.538 18:17:23 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:34.538 18:17:23 -- host/auth.sh@44 -- # digest=sha512 00:30:34.538 18:17:23 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:34.538 18:17:23 -- host/auth.sh@44 -- # keyid=0 00:30:34.538 18:17:23 -- host/auth.sh@45 -- # key=DHHC-1:00:OWU0MjAzMWRmYWVjZjYyOTI2Njg4ZTk5Mjg2NDlkMWM3inbY: 00:30:34.538 18:17:23 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:30:34.538 18:17:23 -- host/auth.sh@48 -- # echo ffdhe8192 00:30:34.538 18:17:23 -- host/auth.sh@49 -- # echo DHHC-1:00:OWU0MjAzMWRmYWVjZjYyOTI2Njg4ZTk5Mjg2NDlkMWM3inbY: 00:30:34.538 18:17:23 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 0 00:30:34.538 18:17:23 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:34.538 18:17:23 -- host/auth.sh@68 -- # digest=sha512 00:30:34.538 18:17:23 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:30:34.538 18:17:23 -- host/auth.sh@68 -- # keyid=0 00:30:34.538 18:17:23 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:30:34.538 18:17:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:34.538 18:17:23 -- common/autotest_common.sh@10 -- # set +x 00:30:34.538 18:17:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:34.538 18:17:23 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:34.538 18:17:23 -- nvmf/common.sh@717 -- # local ip 00:30:34.538 18:17:23 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:34.539 18:17:23 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:34.539 18:17:23 -- nvmf/common.sh@720 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:34.539 18:17:23 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:34.539 18:17:23 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:34.539 18:17:23 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:34.539 18:17:23 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:34.539 18:17:23 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:34.539 18:17:23 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:34.539 18:17:23 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:30:34.539 18:17:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:34.539 18:17:23 -- common/autotest_common.sh@10 -- # set +x 00:30:35.474 nvme0n1 00:30:35.474 18:17:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:35.474 18:17:24 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:35.474 18:17:24 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:35.474 18:17:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:35.474 18:17:24 -- common/autotest_common.sh@10 -- # set +x 00:30:35.474 18:17:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:35.474 18:17:24 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:35.474 18:17:24 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:35.474 18:17:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:35.474 18:17:24 -- common/autotest_common.sh@10 -- # set +x 00:30:35.474 18:17:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:35.474 18:17:24 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:35.474 18:17:24 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:30:35.474 18:17:24 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:35.474 18:17:24 -- host/auth.sh@44 -- # digest=sha512 00:30:35.474 18:17:24 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:35.474 18:17:24 -- host/auth.sh@44 -- # keyid=1 00:30:35.474 18:17:24 -- host/auth.sh@45 -- # key=DHHC-1:00:NzhlYzc5MzZkN2YxNDkwYTU0MjgyYjZhMmVhODg4MTA2ODA1YTZlNzkwNTM0OTBl8OWd+Q==: 00:30:35.474 18:17:24 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:30:35.474 18:17:24 -- host/auth.sh@48 -- # echo ffdhe8192 00:30:35.474 18:17:24 -- host/auth.sh@49 -- # echo DHHC-1:00:NzhlYzc5MzZkN2YxNDkwYTU0MjgyYjZhMmVhODg4MTA2ODA1YTZlNzkwNTM0OTBl8OWd+Q==: 00:30:35.474 18:17:24 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 1 00:30:35.474 18:17:24 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:35.474 18:17:24 -- host/auth.sh@68 -- # digest=sha512 00:30:35.474 18:17:24 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:30:35.474 18:17:24 -- host/auth.sh@68 -- # keyid=1 00:30:35.474 18:17:24 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:30:35.474 18:17:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:35.474 18:17:24 -- common/autotest_common.sh@10 -- # set +x 00:30:35.474 18:17:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:35.474 18:17:24 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:35.474 18:17:24 -- nvmf/common.sh@717 -- # local ip 00:30:35.474 18:17:24 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:35.474 18:17:24 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:35.474 18:17:24 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:35.474 18:17:24 -- nvmf/common.sh@721 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:35.474 18:17:24 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:35.474 18:17:24 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:35.474 18:17:24 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:35.474 18:17:24 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:35.474 18:17:24 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:35.474 18:17:24 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:30:35.474 18:17:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:35.474 18:17:24 -- common/autotest_common.sh@10 -- # set +x 00:30:36.855 nvme0n1 00:30:36.855 18:17:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:36.855 18:17:25 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:36.855 18:17:25 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:36.855 18:17:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:36.855 18:17:25 -- common/autotest_common.sh@10 -- # set +x 00:30:36.855 18:17:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:36.855 18:17:25 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:36.855 18:17:25 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:36.855 18:17:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:36.855 18:17:25 -- common/autotest_common.sh@10 -- # set +x 00:30:36.855 18:17:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:36.855 18:17:25 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:36.855 18:17:25 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:30:36.855 18:17:25 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:36.855 18:17:25 -- host/auth.sh@44 -- # digest=sha512 00:30:36.855 18:17:25 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:36.855 18:17:25 -- host/auth.sh@44 -- # keyid=2 00:30:36.855 18:17:25 -- host/auth.sh@45 -- # key=DHHC-1:01:ZWE2OTU5OWRmMWEyNGZlMzg1NzE5YTM4MTlhNDVlNTmx4RsQ: 00:30:36.855 18:17:25 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:30:36.855 18:17:25 -- host/auth.sh@48 -- # echo ffdhe8192 00:30:36.855 18:17:25 -- host/auth.sh@49 -- # echo DHHC-1:01:ZWE2OTU5OWRmMWEyNGZlMzg1NzE5YTM4MTlhNDVlNTmx4RsQ: 00:30:36.855 18:17:25 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 2 00:30:36.855 18:17:25 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:36.855 18:17:25 -- host/auth.sh@68 -- # digest=sha512 00:30:36.855 18:17:25 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:30:36.855 18:17:25 -- host/auth.sh@68 -- # keyid=2 00:30:36.855 18:17:25 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:30:36.855 18:17:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:36.855 18:17:25 -- common/autotest_common.sh@10 -- # set +x 00:30:36.855 18:17:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:36.855 18:17:25 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:36.855 18:17:25 -- nvmf/common.sh@717 -- # local ip 00:30:36.855 18:17:25 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:36.855 18:17:25 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:36.855 18:17:25 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:36.855 18:17:25 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:36.855 18:17:25 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:36.855 18:17:25 -- nvmf/common.sh@723 -- # [[ -z 
NVMF_INITIATOR_IP ]] 00:30:36.855 18:17:25 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:36.855 18:17:25 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:36.855 18:17:25 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:36.855 18:17:25 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:30:36.855 18:17:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:36.855 18:17:25 -- common/autotest_common.sh@10 -- # set +x 00:30:37.793 nvme0n1 00:30:37.793 18:17:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:37.793 18:17:26 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:37.793 18:17:26 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:37.793 18:17:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:37.793 18:17:26 -- common/autotest_common.sh@10 -- # set +x 00:30:37.793 18:17:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:37.793 18:17:26 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:37.793 18:17:26 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:37.793 18:17:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:37.793 18:17:26 -- common/autotest_common.sh@10 -- # set +x 00:30:37.793 18:17:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:37.793 18:17:26 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:37.793 18:17:26 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:30:37.793 18:17:26 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:37.793 18:17:26 -- host/auth.sh@44 -- # digest=sha512 00:30:37.793 18:17:26 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:37.793 18:17:26 -- host/auth.sh@44 -- # keyid=3 00:30:37.793 18:17:26 -- host/auth.sh@45 -- # key=DHHC-1:02:Y2ExOGFhNWJkYjI4ZWEwMzRkMmZhN2MyNDIzZDU1OWQ5M2MwNTE4MTRiZDFkMWRlDS+lew==: 00:30:37.793 18:17:26 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:30:37.793 18:17:26 -- host/auth.sh@48 -- # echo ffdhe8192 00:30:37.793 18:17:26 -- host/auth.sh@49 -- # echo DHHC-1:02:Y2ExOGFhNWJkYjI4ZWEwMzRkMmZhN2MyNDIzZDU1OWQ5M2MwNTE4MTRiZDFkMWRlDS+lew==: 00:30:37.793 18:17:26 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 3 00:30:37.793 18:17:26 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:37.793 18:17:26 -- host/auth.sh@68 -- # digest=sha512 00:30:37.793 18:17:26 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:30:37.793 18:17:26 -- host/auth.sh@68 -- # keyid=3 00:30:37.793 18:17:26 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:30:37.793 18:17:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:37.793 18:17:26 -- common/autotest_common.sh@10 -- # set +x 00:30:37.793 18:17:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:37.793 18:17:26 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:37.793 18:17:26 -- nvmf/common.sh@717 -- # local ip 00:30:37.793 18:17:26 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:37.793 18:17:26 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:37.793 18:17:26 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:37.793 18:17:26 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:37.793 18:17:26 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:37.793 18:17:26 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:37.793 18:17:26 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:37.793 18:17:26 -- 
nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:37.793 18:17:26 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:37.793 18:17:26 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:30:37.793 18:17:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:37.793 18:17:26 -- common/autotest_common.sh@10 -- # set +x 00:30:38.732 nvme0n1 00:30:38.732 18:17:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:38.732 18:17:27 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:38.732 18:17:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:38.732 18:17:27 -- common/autotest_common.sh@10 -- # set +x 00:30:38.732 18:17:27 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:38.732 18:17:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:38.732 18:17:27 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:38.732 18:17:27 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:38.732 18:17:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:38.732 18:17:27 -- common/autotest_common.sh@10 -- # set +x 00:30:38.732 18:17:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:38.732 18:17:27 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:38.732 18:17:27 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:30:38.732 18:17:27 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:38.732 18:17:27 -- host/auth.sh@44 -- # digest=sha512 00:30:38.732 18:17:27 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:38.732 18:17:27 -- host/auth.sh@44 -- # keyid=4 00:30:38.732 18:17:27 -- host/auth.sh@45 -- # key=DHHC-1:03:MTA5NjU1MzE3MGM3MDZhMTRlNmIzODhmNjNhZTQyNTg4NjM5MWRiMDRjYjgyNTcwZGJhZWY4NTc3OGQyYTVjZERQXrk=: 00:30:38.732 18:17:27 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:30:38.733 18:17:27 -- host/auth.sh@48 -- # echo ffdhe8192 00:30:38.733 18:17:27 -- host/auth.sh@49 -- # echo DHHC-1:03:MTA5NjU1MzE3MGM3MDZhMTRlNmIzODhmNjNhZTQyNTg4NjM5MWRiMDRjYjgyNTcwZGJhZWY4NTc3OGQyYTVjZERQXrk=: 00:30:38.733 18:17:27 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 4 00:30:38.733 18:17:27 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:38.733 18:17:27 -- host/auth.sh@68 -- # digest=sha512 00:30:38.733 18:17:27 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:30:38.733 18:17:27 -- host/auth.sh@68 -- # keyid=4 00:30:38.733 18:17:27 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:30:38.733 18:17:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:38.733 18:17:27 -- common/autotest_common.sh@10 -- # set +x 00:30:38.992 18:17:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:38.992 18:17:27 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:38.992 18:17:27 -- nvmf/common.sh@717 -- # local ip 00:30:38.992 18:17:27 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:38.992 18:17:27 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:38.992 18:17:27 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:38.992 18:17:27 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:38.992 18:17:27 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:38.992 18:17:27 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:38.992 18:17:27 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:38.992 18:17:27 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:38.992 18:17:27 -- 
nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:38.993 18:17:27 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:38.993 18:17:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:38.993 18:17:27 -- common/autotest_common.sh@10 -- # set +x 00:30:39.927 nvme0n1 00:30:39.927 18:17:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:39.927 18:17:28 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:39.927 18:17:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:39.927 18:17:28 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:39.927 18:17:28 -- common/autotest_common.sh@10 -- # set +x 00:30:39.927 18:17:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:39.927 18:17:28 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:39.927 18:17:28 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:39.927 18:17:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:39.927 18:17:28 -- common/autotest_common.sh@10 -- # set +x 00:30:39.928 18:17:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:39.928 18:17:28 -- host/auth.sh@117 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:30:39.928 18:17:28 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:39.928 18:17:28 -- host/auth.sh@44 -- # digest=sha256 00:30:39.928 18:17:28 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:39.928 18:17:28 -- host/auth.sh@44 -- # keyid=1 00:30:39.928 18:17:28 -- host/auth.sh@45 -- # key=DHHC-1:00:NzhlYzc5MzZkN2YxNDkwYTU0MjgyYjZhMmVhODg4MTA2ODA1YTZlNzkwNTM0OTBl8OWd+Q==: 00:30:39.928 18:17:28 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:30:39.928 18:17:28 -- host/auth.sh@48 -- # echo ffdhe2048 00:30:39.928 18:17:28 -- host/auth.sh@49 -- # echo DHHC-1:00:NzhlYzc5MzZkN2YxNDkwYTU0MjgyYjZhMmVhODg4MTA2ODA1YTZlNzkwNTM0OTBl8OWd+Q==: 00:30:39.928 18:17:28 -- host/auth.sh@118 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:30:39.928 18:17:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:39.928 18:17:28 -- common/autotest_common.sh@10 -- # set +x 00:30:39.928 18:17:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:39.928 18:17:28 -- host/auth.sh@119 -- # get_main_ns_ip 00:30:39.928 18:17:28 -- nvmf/common.sh@717 -- # local ip 00:30:39.928 18:17:28 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:39.928 18:17:28 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:39.928 18:17:28 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:39.928 18:17:28 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:39.928 18:17:28 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:39.928 18:17:28 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:39.928 18:17:28 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:39.928 18:17:28 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:39.928 18:17:28 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:39.928 18:17:28 -- host/auth.sh@119 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:30:39.928 18:17:28 -- common/autotest_common.sh@638 -- # local es=0 00:30:39.928 18:17:28 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
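The attach attempts now being traced are the negative half of the suite. The target has just been re-keyed to sha256/ffdhe2048 with key 1 (host/auth.sh@117-118), and the host connects first with no --dhchap-key at all, then (host/auth.sh@124) with the mismatched key2; both must be rejected with the JSON-RPC -32602 "Invalid parameters" responses recorded below, and the jq length checks confirm that no controller object survives either attempt. NOT, a helper from autotest_common.sh, inverts the exit status of the command it wraps, so a refused attach counts as a passing step. The pattern, roughly:

    # NOT succeeds only when the wrapped command fails
    NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 \
        -n nqn.2024-02.io.spdk:cnode0    # no --dhchap-key: expect -32602
    (( $(rpc_cmd bdev_nvme_get_controllers | jq length) == 0 ))  # nothing attached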
00:30:39.928 18:17:28 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:30:39.928 18:17:28 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:30:39.928 18:17:28 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:30:39.928 18:17:28 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:30:39.928 18:17:28 -- common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:30:39.928 18:17:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:39.928 18:17:28 -- common/autotest_common.sh@10 -- # set +x 00:30:39.928 request: 00:30:39.928 { 00:30:39.928 "name": "nvme0", 00:30:39.928 "trtype": "tcp", 00:30:39.928 "traddr": "10.0.0.1", 00:30:39.928 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:30:39.928 "adrfam": "ipv4", 00:30:39.928 "trsvcid": "4420", 00:30:39.928 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:30:39.928 "method": "bdev_nvme_attach_controller", 00:30:39.928 "req_id": 1 00:30:39.928 } 00:30:39.928 Got JSON-RPC error response 00:30:39.928 response: 00:30:39.928 { 00:30:39.928 "code": -32602, 00:30:39.928 "message": "Invalid parameters" 00:30:39.928 } 00:30:39.928 18:17:28 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:30:39.928 18:17:28 -- common/autotest_common.sh@641 -- # es=1 00:30:39.928 18:17:28 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:30:39.928 18:17:28 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:30:39.928 18:17:28 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:30:39.928 18:17:28 -- host/auth.sh@121 -- # rpc_cmd bdev_nvme_get_controllers 00:30:39.928 18:17:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:39.928 18:17:28 -- common/autotest_common.sh@10 -- # set +x 00:30:39.928 18:17:28 -- host/auth.sh@121 -- # jq length 00:30:39.928 18:17:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:39.928 18:17:28 -- host/auth.sh@121 -- # (( 0 == 0 )) 00:30:39.928 18:17:28 -- host/auth.sh@124 -- # get_main_ns_ip 00:30:39.928 18:17:28 -- nvmf/common.sh@717 -- # local ip 00:30:39.928 18:17:28 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:39.928 18:17:28 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:39.928 18:17:28 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:39.928 18:17:28 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:39.928 18:17:28 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:39.928 18:17:28 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:39.928 18:17:28 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:39.928 18:17:28 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:39.928 18:17:28 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:39.928 18:17:28 -- host/auth.sh@124 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:30:39.928 18:17:28 -- common/autotest_common.sh@638 -- # local es=0 00:30:39.928 18:17:28 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:30:39.928 18:17:28 --
common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:30:39.928 18:17:28 -- common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:30:39.928 18:17:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:39.928 18:17:28 -- common/autotest_common.sh@10 -- # set +x 00:30:39.928 request: 00:30:39.928 { 00:30:39.928 "name": "nvme0", 00:30:39.928 "trtype": "tcp", 00:30:39.928 "traddr": "10.0.0.1", 00:30:39.928 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:30:39.928 "adrfam": "ipv4", 00:30:39.928 "trsvcid": "4420", 00:30:39.928 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:30:39.928 "dhchap_key": "key2", 00:30:39.928 "method": "bdev_nvme_attach_controller", 00:30:39.928 "req_id": 1 00:30:39.928 } 00:30:39.928 Got JSON-RPC error response 00:30:39.928 response: 00:30:39.928 { 00:30:39.928 "code": -32602, 00:30:39.928 "message": "Invalid parameters" 00:30:39.928 } 00:30:39.928 18:17:28 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:30:39.928 18:17:28 -- common/autotest_common.sh@641 -- # es=1 00:30:39.928 18:17:28 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:30:39.928 18:17:28 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:30:39.928 18:17:28 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:30:39.928 18:17:28 -- host/auth.sh@127 -- # rpc_cmd bdev_nvme_get_controllers 00:30:39.928 18:17:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:39.928 18:17:28 -- common/autotest_common.sh@10 -- # set +x 00:30:39.928 18:17:28 -- host/auth.sh@127 -- # jq length 00:30:39.928 18:17:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:39.928 18:17:28 -- host/auth.sh@127 -- # (( 0 == 0 )) 00:30:39.928 18:17:28 -- host/auth.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:30:39.928 18:17:28 -- host/auth.sh@130 -- # cleanup 00:30:39.928 18:17:28 -- host/auth.sh@24 -- # nvmftestfini 00:30:39.928 18:17:28 -- nvmf/common.sh@477 -- # nvmfcleanup 00:30:39.928 18:17:28 -- nvmf/common.sh@117 -- # sync 00:30:39.928 18:17:28 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:39.928 18:17:28 -- nvmf/common.sh@120 -- # set +e 00:30:39.928 18:17:28 -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:39.928 18:17:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:39.928 rmmod nvme_tcp 00:30:40.188 rmmod nvme_fabrics 00:30:40.188 18:17:28 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:40.188 18:17:28 -- nvmf/common.sh@124 -- # set -e 00:30:40.188 18:17:28 -- nvmf/common.sh@125 -- # return 0 00:30:40.188 18:17:28 -- nvmf/common.sh@478 -- # '[' -n 3441607 ']' 00:30:40.188 18:17:28 -- nvmf/common.sh@479 -- # killprocess 3441607 00:30:40.188 18:17:28 -- common/autotest_common.sh@936 -- # '[' -z 3441607 ']' 00:30:40.188 18:17:28 -- common/autotest_common.sh@940 -- # kill -0 3441607 00:30:40.188 18:17:28 -- common/autotest_common.sh@941 -- # uname 00:30:40.188 18:17:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:30:40.188 18:17:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3441607 00:30:40.188 18:17:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:30:40.188 18:17:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:30:40.188 18:17:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3441607' 00:30:40.188 killing process with pid 3441607 00:30:40.188 18:17:28 -- common/autotest_common.sh@955 -- # kill 3441607 00:30:40.188 18:17:28 -- 
common/autotest_common.sh@960 -- # wait 3441607 00:30:40.188 18:17:29 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:30:40.188 18:17:29 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:30:40.188 18:17:29 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:30:40.188 18:17:29 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:40.188 18:17:29 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:40.188 18:17:29 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:40.188 18:17:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:40.188 18:17:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:42.733 18:17:31 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:42.733 18:17:31 -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:30:42.733 18:17:31 -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:30:42.733 18:17:31 -- host/auth.sh@27 -- # clean_kernel_target 00:30:42.733 18:17:31 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:30:42.733 18:17:31 -- nvmf/common.sh@675 -- # echo 0 00:30:42.733 18:17:31 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:30:42.733 18:17:31 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:30:42.733 18:17:31 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:30:42.733 18:17:31 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:30:42.733 18:17:31 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:30:42.733 18:17:31 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:30:42.733 18:17:31 -- nvmf/common.sh@687 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:30:43.702 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:30:43.702 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:30:43.702 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:30:43.702 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:30:43.702 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:30:43.702 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:30:43.702 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:30:43.702 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:30:43.702 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:30:43.702 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:30:43.702 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:30:43.702 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:30:43.702 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:30:43.702 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:30:43.702 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:30:43.702 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:30:44.641 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:30:44.900 18:17:33 -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.kJW /tmp/spdk.key-null.A2y /tmp/spdk.key-sha256.VbO /tmp/spdk.key-sha384.vMY /tmp/spdk.key-sha512.v6v /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:30:44.900 18:17:33 -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:30:46.276 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:30:46.276 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 00:30:46.276 0000:00:04.6 (8086 0e26): Already using the 
vfio-pci driver 00:30:46.276 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:30:46.276 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:30:46.276 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:30:46.276 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:30:46.276 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:30:46.276 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:30:46.276 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:30:46.276 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:30:46.276 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:30:46.276 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:30:46.276 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:30:46.276 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:30:46.276 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:30:46.276 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:30:46.276 00:30:46.276 real 0m52.499s 00:30:46.276 user 0m50.553s 00:30:46.276 sys 0m6.674s 00:30:46.276 18:17:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:46.276 18:17:35 -- common/autotest_common.sh@10 -- # set +x 00:30:46.276 ************************************ 00:30:46.276 END TEST nvmf_auth 00:30:46.276 ************************************ 00:30:46.276 18:17:35 -- nvmf/nvmf.sh@104 -- # [[ tcp == \t\c\p ]] 00:30:46.276 18:17:35 -- nvmf/nvmf.sh@105 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:30:46.276 18:17:35 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:30:46.276 18:17:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:46.276 18:17:35 -- common/autotest_common.sh@10 -- # set +x 00:30:46.535 ************************************ 00:30:46.535 START TEST nvmf_digest 00:30:46.535 ************************************ 00:30:46.535 18:17:35 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:30:46.535 * Looking for test storage... 
00:30:46.535 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:46.535 18:17:35 -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:46.535 18:17:35 -- nvmf/common.sh@7 -- # uname -s 00:30:46.535 18:17:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:46.535 18:17:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:46.535 18:17:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:46.535 18:17:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:46.535 18:17:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:46.535 18:17:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:46.535 18:17:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:46.535 18:17:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:46.535 18:17:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:46.535 18:17:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:46.535 18:17:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:30:46.535 18:17:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:30:46.535 18:17:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:46.535 18:17:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:46.535 18:17:35 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:46.535 18:17:35 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:46.535 18:17:35 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:46.535 18:17:35 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:46.535 18:17:35 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:46.535 18:17:35 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:46.535 18:17:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:46.535 18:17:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:46.536 18:17:35 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:46.536 18:17:35 -- paths/export.sh@5 -- # export PATH 00:30:46.536 18:17:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:46.536 18:17:35 -- nvmf/common.sh@47 -- # : 0 00:30:46.536 18:17:35 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:46.536 18:17:35 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:46.536 18:17:35 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:46.536 18:17:35 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:46.536 18:17:35 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:46.536 18:17:35 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:46.536 18:17:35 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:46.536 18:17:35 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:46.536 18:17:35 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:30:46.536 18:17:35 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:30:46.536 18:17:35 -- host/digest.sh@16 -- # runtime=2 00:30:46.536 18:17:35 -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:30:46.536 18:17:35 -- host/digest.sh@138 -- # nvmftestinit 00:30:46.536 18:17:35 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:30:46.536 18:17:35 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:46.536 18:17:35 -- nvmf/common.sh@437 -- # prepare_net_devs 00:30:46.536 18:17:35 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:30:46.536 18:17:35 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:30:46.536 18:17:35 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:46.536 18:17:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:46.536 18:17:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:46.536 18:17:35 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:30:46.536 18:17:35 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:30:46.536 18:17:35 -- nvmf/common.sh@285 -- # xtrace_disable 00:30:46.536 18:17:35 -- common/autotest_common.sh@10 -- # set +x 00:30:49.072 18:17:37 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:30:49.072 18:17:37 -- nvmf/common.sh@291 -- # pci_devs=() 00:30:49.072 18:17:37 -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:49.072 18:17:37 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:49.072 18:17:37 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:49.072 18:17:37 -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:49.072 18:17:37 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:49.072 18:17:37 -- 
nvmf/common.sh@295 -- # net_devs=() 00:30:49.072 18:17:37 -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:49.072 18:17:37 -- nvmf/common.sh@296 -- # e810=() 00:30:49.072 18:17:37 -- nvmf/common.sh@296 -- # local -ga e810 00:30:49.072 18:17:37 -- nvmf/common.sh@297 -- # x722=() 00:30:49.072 18:17:37 -- nvmf/common.sh@297 -- # local -ga x722 00:30:49.072 18:17:37 -- nvmf/common.sh@298 -- # mlx=() 00:30:49.072 18:17:37 -- nvmf/common.sh@298 -- # local -ga mlx 00:30:49.072 18:17:37 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:49.072 18:17:37 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:49.072 18:17:37 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:49.072 18:17:37 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:49.072 18:17:37 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:49.072 18:17:37 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:49.072 18:17:37 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:49.072 18:17:37 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:49.072 18:17:37 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:49.072 18:17:37 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:49.072 18:17:37 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:49.072 18:17:37 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:49.072 18:17:37 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:49.072 18:17:37 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:49.072 18:17:37 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:49.072 18:17:37 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:49.072 18:17:37 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:49.072 18:17:37 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:49.072 18:17:37 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:30:49.072 Found 0000:84:00.0 (0x8086 - 0x159b) 00:30:49.072 18:17:37 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:49.072 18:17:37 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:49.072 18:17:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:49.072 18:17:37 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:49.072 18:17:37 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:49.072 18:17:37 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:49.072 18:17:37 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:30:49.072 Found 0000:84:00.1 (0x8086 - 0x159b) 00:30:49.072 18:17:37 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:49.072 18:17:37 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:49.072 18:17:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:49.072 18:17:37 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:49.072 18:17:37 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:49.072 18:17:37 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:49.072 18:17:37 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:49.072 18:17:37 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:49.072 18:17:37 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:49.072 18:17:37 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:49.072 18:17:37 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:30:49.072 18:17:37 -- nvmf/common.sh@388 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:49.072 18:17:37 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:30:49.072 Found net devices under 0000:84:00.0: cvl_0_0 00:30:49.072 18:17:37 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:30:49.072 18:17:37 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:49.072 18:17:37 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:49.072 18:17:37 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:30:49.072 18:17:37 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:49.072 18:17:37 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:30:49.072 Found net devices under 0000:84:00.1: cvl_0_1 00:30:49.072 18:17:37 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:30:49.072 18:17:37 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:30:49.072 18:17:37 -- nvmf/common.sh@403 -- # is_hw=yes 00:30:49.072 18:17:37 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:30:49.072 18:17:37 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:30:49.072 18:17:37 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:30:49.072 18:17:37 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:49.072 18:17:37 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:49.072 18:17:37 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:49.072 18:17:37 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:49.072 18:17:37 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:49.072 18:17:37 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:49.072 18:17:37 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:49.072 18:17:37 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:49.072 18:17:37 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:49.072 18:17:37 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:49.072 18:17:37 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:49.072 18:17:37 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:49.072 18:17:37 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:49.072 18:17:37 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:49.072 18:17:37 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:49.072 18:17:37 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:49.072 18:17:37 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:49.072 18:17:37 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:49.072 18:17:37 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:49.072 18:17:37 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:49.072 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:49.072 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:30:49.072 00:30:49.072 --- 10.0.0.2 ping statistics --- 00:30:49.072 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:49.072 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:30:49.072 18:17:37 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:49.072 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:49.072 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms 00:30:49.072 00:30:49.072 --- 10.0.0.1 ping statistics --- 00:30:49.072 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:49.072 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:30:49.072 18:17:37 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:49.072 18:17:37 -- nvmf/common.sh@411 -- # return 0 00:30:49.072 18:17:37 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:30:49.072 18:17:37 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:49.072 18:17:37 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:30:49.072 18:17:37 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:30:49.072 18:17:37 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:49.072 18:17:37 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:30:49.072 18:17:37 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:30:49.072 18:17:37 -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:30:49.072 18:17:37 -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:30:49.072 18:17:37 -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:30:49.072 18:17:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:49.072 18:17:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:49.072 18:17:37 -- common/autotest_common.sh@10 -- # set +x 00:30:49.072 ************************************ 00:30:49.072 START TEST nvmf_digest_clean 00:30:49.072 ************************************ 00:30:49.332 18:17:38 -- common/autotest_common.sh@1111 -- # run_digest 00:30:49.333 18:17:38 -- host/digest.sh@120 -- # local dsa_initiator 00:30:49.333 18:17:38 -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:30:49.333 18:17:38 -- host/digest.sh@121 -- # dsa_initiator=false 00:30:49.333 18:17:38 -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:30:49.333 18:17:38 -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:30:49.333 18:17:38 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:30:49.333 18:17:38 -- common/autotest_common.sh@710 -- # xtrace_disable 00:30:49.333 18:17:38 -- common/autotest_common.sh@10 -- # set +x 00:30:49.333 18:17:38 -- nvmf/common.sh@470 -- # nvmfpid=3451328 00:30:49.333 18:17:38 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:30:49.333 18:17:38 -- nvmf/common.sh@471 -- # waitforlisten 3451328 00:30:49.333 18:17:38 -- common/autotest_common.sh@817 -- # '[' -z 3451328 ']' 00:30:49.333 18:17:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:49.333 18:17:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:30:49.333 18:17:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:49.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:49.333 18:17:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:30:49.333 18:17:38 -- common/autotest_common.sh@10 -- # set +x 00:30:49.333 [2024-04-15 18:17:38.078923] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
00:30:49.333 [2024-04-15 18:17:38.079018] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:49.333 EAL: No free 2048 kB hugepages reported on node 1 00:30:49.333 [2024-04-15 18:17:38.157401] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:49.333 [2024-04-15 18:17:38.252014] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:49.333 [2024-04-15 18:17:38.252087] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:49.333 [2024-04-15 18:17:38.252112] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:49.333 [2024-04-15 18:17:38.252127] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:49.333 [2024-04-15 18:17:38.252140] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:49.333 [2024-04-15 18:17:38.252171] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:49.593 18:17:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:30:49.593 18:17:38 -- common/autotest_common.sh@850 -- # return 0 00:30:49.593 18:17:38 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:30:49.593 18:17:38 -- common/autotest_common.sh@716 -- # xtrace_disable 00:30:49.593 18:17:38 -- common/autotest_common.sh@10 -- # set +x 00:30:49.593 18:17:38 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:49.593 18:17:38 -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:30:49.593 18:17:38 -- host/digest.sh@126 -- # common_target_config 00:30:49.593 18:17:38 -- host/digest.sh@43 -- # rpc_cmd 00:30:49.593 18:17:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:49.593 18:17:38 -- common/autotest_common.sh@10 -- # set +x 00:30:49.593 null0 00:30:49.593 [2024-04-15 18:17:38.457360] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:49.593 [2024-04-15 18:17:38.481590] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:49.593 18:17:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:49.593 18:17:38 -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:30:49.593 18:17:38 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:30:49.593 18:17:38 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:30:49.593 18:17:38 -- host/digest.sh@80 -- # rw=randread 00:30:49.593 18:17:38 -- host/digest.sh@80 -- # bs=4096 00:30:49.593 18:17:38 -- host/digest.sh@80 -- # qd=128 00:30:49.593 18:17:38 -- host/digest.sh@80 -- # scan_dsa=false 00:30:49.593 18:17:38 -- host/digest.sh@83 -- # bperfpid=3451469 00:30:49.593 18:17:38 -- host/digest.sh@84 -- # waitforlisten 3451469 /var/tmp/bperf.sock 00:30:49.593 18:17:38 -- common/autotest_common.sh@817 -- # '[' -z 3451469 ']' 00:30:49.593 18:17:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:49.593 18:17:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:30:49.593 18:17:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:49.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
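A note on the harness driving the runs below: host/digest.sh talks to two processes over separate JSON-RPC sockets, the nvmf target on /var/tmp/spdk.sock (rpc_cmd) and bdevperf on /var/tmp/bperf.sock (bperf_rpc/bperf_py). The two bperf helpers that appear throughout the traces are thin wrappers; a minimal sketch, assuming the workspace layout used in this job:

# bperf_rpc forwards an RPC to the bdevperf instance; bperf_py drives its
# perform_tests helper. Both mirror the digest.sh@18/@19 trace lines in this log.
bperf_rpc() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }
bperf_py() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock "$@"; }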
00:30:49.593 18:17:38 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:30:49.593 18:17:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:30:49.593 18:17:38 -- common/autotest_common.sh@10 -- # set +x 00:30:49.853 [2024-04-15 18:17:38.575985] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:30:49.853 [2024-04-15 18:17:38.576192] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3451469 ] 00:30:49.853 EAL: No free 2048 kB hugepages reported on node 1 00:30:49.853 [2024-04-15 18:17:38.692762] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:49.853 [2024-04-15 18:17:38.789835] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:51.232 18:17:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:30:51.232 18:17:39 -- common/autotest_common.sh@850 -- # return 0 00:30:51.232 18:17:39 -- host/digest.sh@86 -- # false 00:30:51.232 18:17:39 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:30:51.232 18:17:39 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:51.232 18:17:40 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:51.232 18:17:40 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:51.802 nvme0n1 00:30:51.802 18:17:40 -- host/digest.sh@92 -- # bperf_py perform_tests 00:30:51.802 18:17:40 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:52.061 Running I/O for 2 seconds... 
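Before the two-second run above produces any I/O, the trace shows a fixed RPC sequence against bdevperf: finish the framework init that --wait-for-rpc deferred, attach a TCP controller with data digests enabled, then start the workload. A hedged recap of the logged commands:

bperf_rpc framework_start_init   # complete init deferred by --wait-for-rpc
bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0   # exposes bdev nvme0n1
bperf_py perform_tests           # run the configured 2-second workload

With --ddgst set, every NVMe/TCP data PDU carries a CRC32C data digest; those digest calculations are what the accel_get_stats check after each run counts.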
00:30:53.973
00:30:53.973 Latency(us)
00:30:53.973 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:53.973 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:30:53.973 nvme0n1 : 2.00 17229.12 67.30 0.00 0.00 7420.33 3155.44 21456.97
00:30:53.973 ===================================================================================================================
00:30:53.973 Total : 17229.12 67.30 0.00 0.00 7420.33 3155.44 21456.97
00:30:53.973 0
00:30:53.973 18:17:42 -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:30:53.973 18:17:42 -- host/digest.sh@93 -- # get_accel_stats
00:30:53.973 18:17:42 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:30:53.973 18:17:42 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:30:53.973 18:17:42 -- host/digest.sh@37 -- # jq -rc '.operations[]
00:30:53.973 | select(.opcode=="crc32c")
00:30:53.973 | "\(.module_name) \(.executed)"'
00:30:54.543 18:17:43 -- host/digest.sh@94 -- # false
00:30:54.543 18:17:43 -- host/digest.sh@94 -- # exp_module=software
00:30:54.543 18:17:43 -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:30:54.543 18:17:43 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:30:54.543 18:17:43 -- host/digest.sh@98 -- # killprocess 3451469
00:30:54.543 18:17:43 -- common/autotest_common.sh@936 -- # '[' -z 3451469 ']'
00:30:54.543 18:17:43 -- common/autotest_common.sh@940 -- # kill -0 3451469
00:30:54.543 18:17:43 -- common/autotest_common.sh@941 -- # uname
00:30:54.543 18:17:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:30:54.543 18:17:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3451469
00:30:54.543 18:17:43 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:30:54.543 18:17:43 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:30:54.543 18:17:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3451469'
00:30:54.543 killing process with pid 3451469
00:30:54.543 18:17:43 -- common/autotest_common.sh@955 -- # kill 3451469
00:30:54.543 Received shutdown signal, test time was about 2.000000 seconds
00:30:54.543
00:30:54.543 Latency(us)
00:30:54.543 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:54.543 ===================================================================================================================
00:30:54.543 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:30:54.543 18:17:43 -- common/autotest_common.sh@960 -- # wait 3451469
00:30:54.543 18:17:43 -- host/digest.sh@129 -- # run_bperf randread 131072 16 false
00:30:54.543 18:17:43 -- host/digest.sh@77 -- # local rw bs qd scan_dsa
00:30:54.543 18:17:43 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module
00:30:54.543 18:17:43 -- host/digest.sh@80 -- # rw=randread
00:30:54.543 18:17:43 -- host/digest.sh@80 -- # bs=131072
00:30:54.543 18:17:43 -- host/digest.sh@80 -- # qd=16
00:30:54.543 18:17:43 -- host/digest.sh@80 -- # scan_dsa=false
00:30:54.543 18:17:43 -- host/digest.sh@83 -- # bperfpid=3452007
00:30:54.543 18:17:43 -- host/digest.sh@84 -- # waitforlisten 3452007 /var/tmp/bperf.sock
00:30:54.543 18:17:43 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc
18:17:43 -- common/autotest_common.sh@817 -- # '[' -z 3452007 ']'
18:17:43 --
common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:54.544 18:17:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:30:54.544 18:17:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:54.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:54.544 18:17:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:30:54.544 18:17:43 -- common/autotest_common.sh@10 -- # set +x 00:30:54.804 [2024-04-15 18:17:43.534937] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:30:54.804 [2024-04-15 18:17:43.535027] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3452007 ] 00:30:54.804 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:54.804 Zero copy mechanism will not be used. 00:30:54.804 EAL: No free 2048 kB hugepages reported on node 1 00:30:54.804 [2024-04-15 18:17:43.603872] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:54.804 [2024-04-15 18:17:43.699407] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:55.062 18:17:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:30:55.062 18:17:43 -- common/autotest_common.sh@850 -- # return 0 00:30:55.062 18:17:43 -- host/digest.sh@86 -- # false 00:30:55.062 18:17:43 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:30:55.062 18:17:43 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:55.321 18:17:44 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:55.321 18:17:44 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:55.887 nvme0n1 00:30:55.887 18:17:44 -- host/digest.sh@92 -- # bperf_py perform_tests 00:30:55.887 18:17:44 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:55.887 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:55.887 Zero copy mechanism will not be used. 00:30:55.887 Running I/O for 2 seconds... 
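Two things are worth noting about the 128 KiB pass whose results follow. The "greater than zero copy threshold (65536)" notice is expected, not a failure: above 64 KiB bdevperf falls back to copied buffers. And the MiB/s column in these result tables is simply IOPS times I/O size; a quick sanity check of the figures below:

# 3239.04 IOPS at 131072 B per I/O -> prints 404.88 MiB/s, matching the table
awk 'BEGIN { printf "%.2f MiB/s\n", 3239.04 * 131072 / (1024 * 1024) }'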
00:30:58.417
00:30:58.417 Latency(us)
00:30:58.417 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:58.417 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:30:58.417 nvme0n1 : 2.00 3239.04 404.88 0.00 0.00 4935.87 4708.88 8009.96
00:30:58.417 ===================================================================================================================
00:30:58.417 Total : 3239.04 404.88 0.00 0.00 4935.87 4708.88 8009.96
00:30:58.417 0
00:30:58.417 18:17:46 -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:30:58.417 18:17:46 -- host/digest.sh@93 -- # get_accel_stats
00:30:58.417 18:17:46 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:30:58.417 18:17:46 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:30:58.417 18:17:46 -- host/digest.sh@37 -- # jq -rc '.operations[]
00:30:58.417 | select(.opcode=="crc32c")
00:30:58.417 | "\(.module_name) \(.executed)"'
00:30:58.417 18:17:47 -- host/digest.sh@94 -- # false
00:30:58.417 18:17:47 -- host/digest.sh@94 -- # exp_module=software
00:30:58.417 18:17:47 -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:30:58.417 18:17:47 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:30:58.417 18:17:47 -- host/digest.sh@98 -- # killprocess 3452007
00:30:58.417 18:17:47 -- common/autotest_common.sh@936 -- # '[' -z 3452007 ']'
00:30:58.417 18:17:47 -- common/autotest_common.sh@940 -- # kill -0 3452007
00:30:58.417 18:17:47 -- common/autotest_common.sh@941 -- # uname
00:30:58.417 18:17:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:30:58.417 18:17:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3452007
00:30:58.676 18:17:47 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:30:58.676 18:17:47 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:30:58.676 18:17:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3452007'
00:30:58.676 killing process with pid 3452007
00:30:58.676 18:17:47 -- common/autotest_common.sh@955 -- # kill 3452007
00:30:58.676 Received shutdown signal, test time was about 2.000000 seconds
00:30:58.676
00:30:58.676 Latency(us)
00:30:58.676 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:58.676 ===================================================================================================================
00:30:58.676 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:30:58.677 18:17:47 -- common/autotest_common.sh@960 -- # wait 3452007
00:30:58.677 18:17:47 -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false
00:30:58.677 18:17:47 -- host/digest.sh@77 -- # local rw bs qd scan_dsa
00:30:58.677 18:17:47 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module
00:30:58.677 18:17:47 -- host/digest.sh@80 -- # rw=randwrite
00:30:58.677 18:17:47 -- host/digest.sh@80 -- # bs=4096
00:30:58.677 18:17:47 -- host/digest.sh@80 -- # qd=128
00:30:58.677 18:17:47 -- host/digest.sh@80 -- # scan_dsa=false
00:30:58.677 18:17:47 -- host/digest.sh@83 -- # bperfpid=3452539
00:30:58.677 18:17:47 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc
00:30:58.677 18:17:47 -- host/digest.sh@84 -- # waitforlisten 3452539 /var/tmp/bperf.sock
00:30:58.677 18:17:47 -- common/autotest_common.sh@817 -- # '[' -z 3452539 ']'
18:17:47 --
common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:58.677 18:17:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:30:58.677 18:17:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:58.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:58.677 18:17:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:30:58.677 18:17:47 -- common/autotest_common.sh@10 -- # set +x 00:30:58.937 [2024-04-15 18:17:47.670462] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:30:58.937 [2024-04-15 18:17:47.670557] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3452539 ] 00:30:58.937 EAL: No free 2048 kB hugepages reported on node 1 00:30:58.937 [2024-04-15 18:17:47.739615] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:58.937 [2024-04-15 18:17:47.834519] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:59.539 18:17:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:30:59.539 18:17:48 -- common/autotest_common.sh@850 -- # return 0 00:30:59.539 18:17:48 -- host/digest.sh@86 -- # false 00:30:59.539 18:17:48 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:30:59.539 18:17:48 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:59.796 18:17:48 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:59.796 18:17:48 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:00.365 nvme0n1 00:31:00.365 18:17:49 -- host/digest.sh@92 -- # bperf_py perform_tests 00:31:00.365 18:17:49 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:00.365 Running I/O for 2 seconds... 
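The write passes reuse the same plumbing; only the bdevperf workload flags change (-w randwrite here). A sketch of the launch-and-wait step, assuming the script backgrounds the process and records its pid with $! (the trace shows the same three ingredients: the @82 command line, bperfpid, and waitforlisten, though not the exact shell mechanics):

/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc &
bperfpid=$!                                    # assumption: pid captured via $!
waitforlisten "$bperfpid" /var/tmp/bperf.sock  # block until the RPC socket accepts connections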
00:31:02.902
00:31:02.902 Latency(us)
00:31:02.902 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:02.902 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:31:02.902 nvme0n1 : 2.00 19649.87 76.76 0.00 0.00 6502.77 3422.44 15534.46
00:31:02.902 ===================================================================================================================
00:31:02.902 Total : 19649.87 76.76 0.00 0.00 6502.77 3422.44 15534.46
00:31:02.902 0
00:31:02.902 18:17:51 -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:31:02.902 18:17:51 -- host/digest.sh@93 -- # get_accel_stats
00:31:02.902 18:17:51 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:31:02.902 18:17:51 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:31:02.902 18:17:51 -- host/digest.sh@37 -- # jq -rc '.operations[]
00:31:02.902 | select(.opcode=="crc32c")
00:31:02.902 | "\(.module_name) \(.executed)"'
00:31:02.902 18:17:51 -- host/digest.sh@94 -- # false
00:31:02.902 18:17:51 -- host/digest.sh@94 -- # exp_module=software
00:31:02.902 18:17:51 -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:31:02.902 18:17:51 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:31:02.902 18:17:51 -- host/digest.sh@98 -- # killprocess 3452539
00:31:02.902 18:17:51 -- common/autotest_common.sh@936 -- # '[' -z 3452539 ']'
00:31:02.902 18:17:51 -- common/autotest_common.sh@940 -- # kill -0 3452539
00:31:02.902 18:17:51 -- common/autotest_common.sh@941 -- # uname
00:31:02.903 18:17:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:31:02.903 18:17:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3452539
00:31:02.903 18:17:51 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:31:02.903 18:17:51 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:31:02.903 18:17:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3452539'
00:31:02.903 killing process with pid 3452539
00:31:02.903 18:17:51 -- common/autotest_common.sh@955 -- # kill 3452539
00:31:02.903 Received shutdown signal, test time was about 2.000000 seconds
00:31:02.903
00:31:02.903 Latency(us)
00:31:02.903 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:02.903 ===================================================================================================================
00:31:02.903 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:31:02.903 18:17:51 -- common/autotest_common.sh@960 -- # wait 3452539
00:31:03.162 18:17:51 -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false
00:31:03.162 18:17:51 -- host/digest.sh@77 -- # local rw bs qd scan_dsa
00:31:03.162 18:17:51 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module
00:31:03.162 18:17:51 -- host/digest.sh@80 -- # rw=randwrite
00:31:03.162 18:17:51 -- host/digest.sh@80 -- # bs=131072
00:31:03.162 18:17:51 -- host/digest.sh@80 -- # qd=16
00:31:03.162 18:17:51 -- host/digest.sh@80 -- # scan_dsa=false
00:31:03.162 18:17:51 -- host/digest.sh@83 -- # bperfpid=3452951
00:31:03.162 18:17:51 -- host/digest.sh@84 -- # waitforlisten 3452951 /var/tmp/bperf.sock
00:31:03.162 18:17:51 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc
18:17:51 -- common/autotest_common.sh@817 -- # '[' -z 3452951 ']'
18:17:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:03.162 18:17:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:31:03.162 18:17:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:03.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:03.162 18:17:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:31:03.162 18:17:51 -- common/autotest_common.sh@10 -- # set +x 00:31:03.162 [2024-04-15 18:17:51.975380] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:31:03.162 [2024-04-15 18:17:51.975480] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3452951 ] 00:31:03.162 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:03.162 Zero copy mechanism will not be used. 00:31:03.162 EAL: No free 2048 kB hugepages reported on node 1 00:31:03.162 [2024-04-15 18:17:52.044892] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:03.420 [2024-04-15 18:17:52.141902] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:03.420 18:17:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:31:03.420 18:17:52 -- common/autotest_common.sh@850 -- # return 0 00:31:03.420 18:17:52 -- host/digest.sh@86 -- # false 00:31:03.420 18:17:52 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:31:03.420 18:17:52 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:31:03.989 18:17:52 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:03.989 18:17:52 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:04.557 nvme0n1 00:31:04.557 18:17:53 -- host/digest.sh@92 -- # bperf_py perform_tests 00:31:04.557 18:17:53 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:04.557 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:04.557 Zero copy mechanism will not be used. 00:31:04.557 Running I/O for 2 seconds... 
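Every run_bperf pass ends with the same verification, visible again after the results below: pull crc32c statistics out of bdevperf's accel framework and assert that the expected module did the work (software here, since scan_dsa=false). A compact sketch using the very jq filter from the trace:

# parse "module executed" pairs for crc32c and check the software module ran
read -r acc_module acc_executed < <(bperf_rpc accel_get_stats |
    jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
(( acc_executed > 0 ))         # digests were actually computed
[[ $acc_module == software ]]  # and by the expected module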
00:31:07.091
00:31:07.091 Latency(us)
00:31:07.091 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:07.091 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:31:07.091 nvme0n1 : 2.00 3636.74 454.59 0.00 0.00 4389.18 3252.53 8009.96
00:31:07.091 ===================================================================================================================
00:31:07.091 Total : 3636.74 454.59 0.00 0.00 4389.18 3252.53 8009.96
00:31:07.091 0
00:31:07.091 18:17:55 -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:31:07.091 18:17:55 -- host/digest.sh@93 -- # get_accel_stats
00:31:07.091 18:17:55 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:31:07.092 18:17:55 -- host/digest.sh@37 -- # jq -rc '.operations[]
00:31:07.092 | select(.opcode=="crc32c")
00:31:07.092 | "\(.module_name) \(.executed)"'
00:31:07.092 18:17:55 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:31:07.092 18:17:55 -- host/digest.sh@94 -- # false
00:31:07.092 18:17:55 -- host/digest.sh@94 -- # exp_module=software
00:31:07.092 18:17:55 -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:31:07.092 18:17:55 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:31:07.092 18:17:55 -- host/digest.sh@98 -- # killprocess 3452951
00:31:07.092 18:17:55 -- common/autotest_common.sh@936 -- # '[' -z 3452951 ']'
00:31:07.092 18:17:55 -- common/autotest_common.sh@940 -- # kill -0 3452951
00:31:07.092 18:17:55 -- common/autotest_common.sh@941 -- # uname
00:31:07.092 18:17:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:31:07.092 18:17:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3452951
00:31:07.092 18:17:55 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:31:07.092 18:17:55 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:31:07.092 18:17:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3452951'
00:31:07.092 killing process with pid 3452951
00:31:07.092 18:17:55 -- common/autotest_common.sh@955 -- # kill 3452951
00:31:07.092 Received shutdown signal, test time was about 2.000000 seconds
00:31:07.092
00:31:07.092 Latency(us)
00:31:07.092 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:07.092 ===================================================================================================================
00:31:07.092 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:31:07.092 18:17:55 -- common/autotest_common.sh@960 -- # wait 3452951
00:31:07.352 18:17:56 -- host/digest.sh@132 -- # killprocess 3451328
00:31:07.352 18:17:56 -- common/autotest_common.sh@936 -- # '[' -z 3451328 ']'
00:31:07.352 18:17:56 -- common/autotest_common.sh@940 -- # kill -0 3451328
00:31:07.352 18:17:56 -- common/autotest_common.sh@941 -- # uname
00:31:07.352 18:17:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:31:07.352 18:17:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3451328
00:31:07.352 18:17:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:31:07.352 18:17:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:31:07.352 18:17:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3451328'
00:31:07.352 killing process with pid 3451328
00:31:07.352 18:17:56 -- common/autotest_common.sh@955 -- # kill 3451328
00:31:07.352 18:17:56 -- common/autotest_common.sh@960 -- # wait 3451328
00:31:07.611
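killprocess, used above to reap each bdevperf instance and finally the nvmf target itself (pid 3451328), is deliberately defensive: it checks that the pid is non-empty and alive, refuses to kill anything whose comm is sudo, then kills and waits so the exit status is collected. A hedged condensation of the helper's logic as traced (the real function carries more bookkeeping):

killprocess() {
    local pid=$1
    [[ -n $pid ]] && kill -0 "$pid"                    # argument sane, process alive
    [[ $(ps --no-headers -o comm= "$pid") != sudo ]]   # never kill a sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                        # reap and propagate exit status
}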
00:31:07.611 real 0m18.479s 00:31:07.611 user 0m38.300s 00:31:07.611 sys 0m5.076s 00:31:07.611 18:17:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:31:07.611 18:17:56 -- common/autotest_common.sh@10 -- # set +x 00:31:07.611 ************************************ 00:31:07.611 END TEST nvmf_digest_clean 00:31:07.611 ************************************ 00:31:07.611 18:17:56 -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:31:07.611 18:17:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:31:07.611 18:17:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:07.611 18:17:56 -- common/autotest_common.sh@10 -- # set +x 00:31:07.870 ************************************ 00:31:07.870 START TEST nvmf_digest_error 00:31:07.870 ************************************ 00:31:07.870 18:17:56 -- common/autotest_common.sh@1111 -- # run_digest_error 00:31:07.870 18:17:56 -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:31:07.870 18:17:56 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:31:07.870 18:17:56 -- common/autotest_common.sh@710 -- # xtrace_disable 00:31:07.870 18:17:56 -- common/autotest_common.sh@10 -- # set +x 00:31:07.871 18:17:56 -- nvmf/common.sh@470 -- # nvmfpid=3453520 00:31:07.871 18:17:56 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:31:07.871 18:17:56 -- nvmf/common.sh@471 -- # waitforlisten 3453520 00:31:07.871 18:17:56 -- common/autotest_common.sh@817 -- # '[' -z 3453520 ']' 00:31:07.871 18:17:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:07.871 18:17:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:31:07.871 18:17:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:07.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:07.871 18:17:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:31:07.871 18:17:56 -- common/autotest_common.sh@10 -- # set +x 00:31:07.871 [2024-04-15 18:17:56.711992] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:31:07.871 [2024-04-15 18:17:56.712172] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:07.871 EAL: No free 2048 kB hugepages reported on node 1 00:31:07.871 [2024-04-15 18:17:56.820134] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:08.129 [2024-04-15 18:17:56.916082] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:08.129 [2024-04-15 18:17:56.916150] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:08.129 [2024-04-15 18:17:56.916166] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:08.129 [2024-04-15 18:17:56.916181] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:08.129 [2024-04-15 18:17:56.916193] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:08.129 [2024-04-15 18:17:56.916225] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:08.129 18:17:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:31:08.129 18:17:56 -- common/autotest_common.sh@850 -- # return 0 00:31:08.129 18:17:56 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:31:08.129 18:17:56 -- common/autotest_common.sh@716 -- # xtrace_disable 00:31:08.129 18:17:56 -- common/autotest_common.sh@10 -- # set +x 00:31:08.129 18:17:57 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:08.129 18:17:57 -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:31:08.129 18:17:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:08.129 18:17:57 -- common/autotest_common.sh@10 -- # set +x 00:31:08.129 [2024-04-15 18:17:57.016904] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:31:08.129 18:17:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:08.129 18:17:57 -- host/digest.sh@105 -- # common_target_config 00:31:08.129 18:17:57 -- host/digest.sh@43 -- # rpc_cmd 00:31:08.129 18:17:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:08.129 18:17:57 -- common/autotest_common.sh@10 -- # set +x 00:31:08.389 null0 00:31:08.389 [2024-04-15 18:17:57.136724] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:08.389 [2024-04-15 18:17:57.160950] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:08.389 18:17:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:08.389 18:17:57 -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:31:08.389 18:17:57 -- host/digest.sh@54 -- # local rw bs qd 00:31:08.389 18:17:57 -- host/digest.sh@56 -- # rw=randread 00:31:08.389 18:17:57 -- host/digest.sh@56 -- # bs=4096 00:31:08.389 18:17:57 -- host/digest.sh@56 -- # qd=128 00:31:08.389 18:17:57 -- host/digest.sh@58 -- # bperfpid=3453660 00:31:08.389 18:17:57 -- host/digest.sh@60 -- # waitforlisten 3453660 /var/tmp/bperf.sock 00:31:08.389 18:17:57 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:31:08.389 18:17:57 -- common/autotest_common.sh@817 -- # '[' -z 3453660 ']' 00:31:08.389 18:17:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:08.389 18:17:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:31:08.389 18:17:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:08.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:08.389 18:17:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:31:08.389 18:17:57 -- common/autotest_common.sh@10 -- # set +x 00:31:08.389 [2024-04-15 18:17:57.209237] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
00:31:08.389 [2024-04-15 18:17:57.209319] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3453660 ] 00:31:08.389 EAL: No free 2048 kB hugepages reported on node 1 00:31:08.389 [2024-04-15 18:17:57.277537] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:08.648 [2024-04-15 18:17:57.372893] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:08.906 18:17:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:31:08.906 18:17:57 -- common/autotest_common.sh@850 -- # return 0 00:31:08.906 18:17:57 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:31:08.907 18:17:57 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:31:09.165 18:17:58 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:31:09.165 18:17:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:09.165 18:17:58 -- common/autotest_common.sh@10 -- # set +x 00:31:09.165 18:17:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:09.165 18:17:58 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:09.165 18:17:58 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:09.737 nvme0n1 00:31:09.737 18:17:58 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:31:09.737 18:17:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:09.737 18:17:58 -- common/autotest_common.sh@10 -- # set +x 00:31:09.737 18:17:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:09.737 18:17:58 -- host/digest.sh@69 -- # bperf_py perform_tests 00:31:09.737 18:17:58 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:10.006 Running I/O for 2 seconds... 
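The error pass works by making the target compute bad digests on purpose. At startup, crc32c on the target was routed to the accel "error" module; bdevperf is told to retry failed I/O forever; and just before perform_tests the injection mode is flipped from disable to corrupt for 256 operations. Recapping the RPC sequence from the trace (rpc_cmd goes to the target socket, bperf_rpc to bdevperf):

rpc_cmd accel_assign_opc -o crc32c -m error            # target: crc32c via the error-injection module
bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
rpc_cmd accel_error_inject_error -o crc32c -t disable  # injection off while connecting
bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256   # corrupt the next 256 digests
bperf_py perform_tests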
00:31:10.006 [2024-04-15 18:17:58.832745] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127e8b0)
00:31:10.006 [2024-04-15 18:17:58.832799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:5727 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:10.006 [2024-04-15 18:17:58.832822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[the same three-record pattern repeats for every READ between 18:17:58.849 and 18:18:00.779 (elapsed 00:31:10.006 through 00:31:12.085): a data digest error on tqpair=(0x127e8b0), the offending READ command, and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, with only the cid and lba fields varying]
00:31:12.085 [2024-04-15 18:18:00.794543] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127e8b0)
00:31:12.085 [2024-04-15 18:18:00.794578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:7895 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:12.085 [2024-04-15 18:18:00.794598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:12.085 [2024-04-15 18:18:00.808911] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127e8b0) 00:31:12.085 [2024-04-15 18:18:00.808945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:10484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.085 [2024-04-15 18:18:00.808965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:12.085 [2024-04-15 18:18:00.820556] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127e8b0) 00:31:12.085 [2024-04-15 18:18:00.820591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:2549 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.085 [2024-04-15 18:18:00.820610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:12.085 00:31:12.085 Latency(us) 00:31:12.085 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:12.085 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:31:12.085 nvme0n1 : 2.00 17829.29 69.65 0.00 0.00 7168.80 3325.35 19126.80 00:31:12.085 =================================================================================================================== 00:31:12.085 Total : 17829.29 69.65 0.00 0.00 7168.80 3325.35 19126.80 00:31:12.085 0 00:31:12.085 18:18:00 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:31:12.085 18:18:00 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:31:12.085 18:18:00 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:31:12.085 18:18:00 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:31:12.085 | .driver_specific 00:31:12.085 | .nvme_error 00:31:12.085 | .status_code 00:31:12.085 | .command_transient_transport_error' 00:31:12.345 18:18:01 -- host/digest.sh@71 -- # (( 140 > 0 )) 00:31:12.345 18:18:01 -- host/digest.sh@73 -- # killprocess 3453660 00:31:12.345 18:18:01 -- common/autotest_common.sh@936 -- # '[' -z 3453660 ']' 00:31:12.345 18:18:01 -- common/autotest_common.sh@940 -- # kill -0 3453660 00:31:12.345 18:18:01 -- common/autotest_common.sh@941 -- # uname 00:31:12.345 18:18:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:31:12.345 18:18:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3453660 00:31:12.345 18:18:01 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:31:12.345 18:18:01 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:31:12.345 18:18:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3453660' 00:31:12.345 killing process with pid 3453660 00:31:12.345 18:18:01 -- common/autotest_common.sh@955 -- # kill 3453660 00:31:12.345 Received shutdown signal, test time was about 2.000000 seconds 00:31:12.345 00:31:12.345 Latency(us) 00:31:12.345 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:12.345 =================================================================================================================== 00:31:12.345 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 
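(Aside: the get_transient_errcount step above is just bdev_get_iostat piped through jq; a minimal standalone sketch, assuming the same bperf RPC socket and bdev name used in this run:

    # Count NVMe completions with TRANSIENT TRANSPORT ERROR status seen by bdevperf's nvme0n1
    # (populated because bdev_nvme_set_options was called with --nvme-error-stat)
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'

Here it printed 140, so the (( 140 > 0 )) check passed and the randread/4096/qd128 leg of the digest test is done; the harness then kills this bdevperf instance and starts the next one.)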
00:31:12.345 18:18:01 -- common/autotest_common.sh@960 -- # wait 3453660
00:31:12.604 18:18:01 -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:31:12.604 18:18:01 -- host/digest.sh@54 -- # local rw bs qd
00:31:12.604 18:18:01 -- host/digest.sh@56 -- # rw=randread
00:31:12.604 18:18:01 -- host/digest.sh@56 -- # bs=131072
00:31:12.604 18:18:01 -- host/digest.sh@56 -- # qd=16
00:31:12.604 18:18:01 -- host/digest.sh@58 -- # bperfpid=3454210
00:31:12.604 18:18:01 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:31:12.604 18:18:01 -- host/digest.sh@60 -- # waitforlisten 3454210 /var/tmp/bperf.sock
00:31:12.604 18:18:01 -- common/autotest_common.sh@817 -- # '[' -z 3454210 ']'
00:31:12.604 18:18:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock
00:31:12.604 18:18:01 -- common/autotest_common.sh@822 -- # local max_retries=100
00:31:12.604 18:18:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:31:12.604 18:18:01 -- common/autotest_common.sh@826 -- # xtrace_disable
00:31:12.604 18:18:01 -- common/autotest_common.sh@10 -- # set +x
00:31:12.604 [2024-04-15 18:18:01.441595] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization...
00:31:12.604 [2024-04-15 18:18:01.441700] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3454210 ]
00:31:12.604 I/O size of 131072 is greater than zero copy threshold (65536).
00:31:12.604 Zero copy mechanism will not be used.
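(The launch above is the standard bperf pattern: start bdevperf with a private RPC socket and -z so it idles until perform_tests is sent, then poll the socket before issuing RPCs. A rough equivalent of what waitforlisten does, assuming rpc_get_methods as the probe RPC; not the literal autotest_common.sh loop:

    # Start bdevperf: core mask 0x2, private RPC socket, 128 KiB randread, qd 16, 2 s runs, -z = wait for RPC
    ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &
    bperfpid=$!
    # Poll until the UNIX socket answers RPCs (waitforlisten retries like this, up to max_retries=100)
    until ./scripts/rpc.py -s /var/tmp/bperf.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done
)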
00:31:12.604 EAL: No free 2048 kB hugepages reported on node 1
00:31:12.604 [2024-04-15 18:18:01.511660] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:12.604 [2024-04-15 18:18:01.604237] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:31:12.862 18:18:01 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:31:12.862 18:18:01 -- common/autotest_common.sh@850 -- # return 0
00:31:12.862 18:18:01 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:31:12.862 18:18:01 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:31:13.429 18:18:02 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:31:13.429 18:18:02 -- common/autotest_common.sh@549 -- # xtrace_disable
00:31:13.429 18:18:02 -- common/autotest_common.sh@10 -- # set +x
00:31:13.429 18:18:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:31:13.429 18:18:02 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:31:13.429 18:18:02 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:31:13.997 nvme0n1
00:31:13.997 18:18:02 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:31:13.997 18:18:02 -- common/autotest_common.sh@549 -- # xtrace_disable
00:31:13.997 18:18:02 -- common/autotest_common.sh@10 -- # set +x
00:31:13.997 18:18:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:31:13.997 18:18:02 -- host/digest.sh@69 -- # bperf_py perform_tests
00:31:13.997 18:18:02 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:31:14.287 I/O size of 131072 is greater than zero copy threshold (65536).
00:31:14.287 Zero copy mechanism will not be used.
00:31:14.287 Running I/O for 2 seconds...
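(Condensing the trace above: the test enables per-controller error stats with unlimited bdev retries, attaches the TCP controller with data digest (--ddgst) verification on, then asks the accel error module to corrupt crc32c results at an interval of 32 operations before kicking off the I/O. A sketch of the same RPC sequence; that bperf_rpc targets bdevperf's socket while rpc_cmd goes to the nvmf target app's default socket is an assumption based on how the trace splits the two helpers:

    BPERF_RPC='./scripts/rpc.py -s /var/tmp/bperf.sock'   # bdevperf (initiator side)
    TGT_RPC='./scripts/rpc.py'                            # nvmf target (default socket, assumed)
    $BPERF_RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    $TGT_RPC accel_error_inject_error -o crc32c -t disable                  # clean attach, no injection yet
    $BPERF_RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0                              # DDGST on => digests verified on reads
    $TGT_RPC accel_error_inject_error -o crc32c -t corrupt -i 32            # corrupt crc32c results, interval 32
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

Each corrupted digest then surfaces on the initiator as the data digest error / TRANSIENT TRANSPORT ERROR pairs below, and with --bdev-retry-count -1 every failed read is retried rather than reported to the job as a failure.)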
00:31:14.287 [2024-04-15 18:18:03.103721] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x83d700)
00:31:14.287 [2024-04-15 18:18:03.103780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:14.287 [2024-04-15 18:18:03.103804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... the same three-record pattern repeats at roughly 11 ms intervals from 18:18:03.114 through 18:18:04.163 on tqpair 0x83d700: every READ (sqid:1 cid:15, len:32, varying lba) hits a data digest error and completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22), sqhd cycling 0001/0021/0041/0061 ...]
00:31:15.327 [2024-04-15 18:18:04.163955] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x83d700)
00:31:15.327 [2024-04-15 18:18:04.163994] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.327 [2024-04-15 18:18:04.164014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:15.327 [2024-04-15 18:18:04.174945] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x83d700) 00:31:15.327 [2024-04-15 18:18:04.174977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.327 [2024-04-15 18:18:04.174996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:15.327 [2024-04-15 18:18:04.185996] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x83d700) 00:31:15.327 [2024-04-15 18:18:04.186029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.327 [2024-04-15 18:18:04.186048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:15.327 [2024-04-15 18:18:04.196983] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x83d700) 00:31:15.327 [2024-04-15 18:18:04.197016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.327 [2024-04-15 18:18:04.197034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:15.327 [2024-04-15 18:18:04.208101] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x83d700) 00:31:15.327 [2024-04-15 18:18:04.208133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.327 [2024-04-15 18:18:04.208152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:15.327 [2024-04-15 18:18:04.219115] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x83d700) 00:31:15.327 [2024-04-15 18:18:04.219147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.327 [2024-04-15 18:18:04.219167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:15.327 [2024-04-15 18:18:04.230377] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x83d700) 00:31:15.327 [2024-04-15 18:18:04.230410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.327 [2024-04-15 18:18:04.230429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:15.327 [2024-04-15 18:18:04.241526] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x83d700) 
00:31:15.327 [2024-04-15 18:18:04.241558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.327 [2024-04-15 18:18:04.241578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:15.327 [2024-04-15 18:18:04.252584] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x83d700) 00:31:15.327 [2024-04-15 18:18:04.252616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.327 [2024-04-15 18:18:04.252636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:15.327 [2024-04-15 18:18:04.263852] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x83d700) 00:31:15.327 [2024-04-15 18:18:04.263885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.327 [2024-04-15 18:18:04.263904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:15.327 [2024-04-15 18:18:04.275172] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x83d700) 00:31:15.327 [2024-04-15 18:18:04.275206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.327 [2024-04-15 18:18:04.275226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:15.586 [2024-04-15 18:18:04.286293] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x83d700) 00:31:15.586 [2024-04-15 18:18:04.286329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.587 [2024-04-15 18:18:04.286349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:15.587 [2024-04-15 18:18:04.297325] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x83d700) 00:31:15.587 [2024-04-15 18:18:04.297359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.587 [2024-04-15 18:18:04.297378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:15.587 [2024-04-15 18:18:04.308506] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x83d700) 00:31:15.587 [2024-04-15 18:18:04.308539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.587 [2024-04-15 18:18:04.308558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:15.587 [2024-04-15 18:18:04.319496] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x83d700) 00:31:15.587 [2024-04-15 18:18:04.319529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.587 [2024-04-15 18:18:04.319548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:15.587 [2024-04-15 18:18:04.330789] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x83d700) 00:31:15.587 [2024-04-15 18:18:04.330822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.587 [2024-04-15 18:18:04.330841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:15.587 [2024-04-15 18:18:04.341972] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x83d700) 00:31:15.587 [2024-04-15 18:18:04.342004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.587 [2024-04-15 18:18:04.342023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:15.587 [2024-04-15 18:18:04.353105] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x83d700) 00:31:15.587 [2024-04-15 18:18:04.353139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.587 [2024-04-15 18:18:04.353165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:15.587 [2024-04-15 18:18:04.363950] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x83d700) 00:31:15.587 [2024-04-15 18:18:04.363983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.587 [2024-04-15 18:18:04.364002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:15.587 [2024-04-15 18:18:04.374920] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x83d700) 00:31:15.587 [2024-04-15 18:18:04.374953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.587 [2024-04-15 18:18:04.374972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:15.587 [2024-04-15 18:18:04.385924] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x83d700) 00:31:15.587 [2024-04-15 18:18:04.385956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.587 [2024-04-15 18:18:04.385975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:15.587 [2024-04-15 18:18:04.397106] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x83d700) 00:31:15.587 [2024-04-15 18:18:04.397138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.587 [2024-04-15 18:18:04.397157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:15.587 [2024-04-15 18:18:04.408085] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x83d700) 00:31:15.587 [2024-04-15 18:18:04.408125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.587 [2024-04-15 18:18:04.408144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:15.587 [2024-04-15 18:18:04.419080] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x83d700) 00:31:15.587 [2024-04-15 18:18:04.419118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.587 [2024-04-15 18:18:04.419137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:15.587 [2024-04-15 18:18:04.430346] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x83d700) 00:31:15.587 [2024-04-15 18:18:04.430378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.587 [2024-04-15 18:18:04.430397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:15.587 [2024-04-15 18:18:04.441825] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x83d700) 00:31:15.587 [2024-04-15 18:18:04.441857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.587 [2024-04-15 18:18:04.441876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:15.587 [2024-04-15 18:18:04.453013] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x83d700) 00:31:15.587 [2024-04-15 18:18:04.453045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.587 [2024-04-15 18:18:04.453097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:15.587 [2024-04-15 18:18:04.464275] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x83d700) 00:31:15.587 [2024-04-15 18:18:04.464311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.587 [2024-04-15 18:18:04.464330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:31:15.587 [2024-04-15 18:18:04.475433] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x83d700) 00:31:15.587 [2024-04-15 18:18:04.475466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.587 [2024-04-15 18:18:04.475485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:15.587 [2024-04-15 18:18:04.486802] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x83d700) 00:31:15.587 [2024-04-15 18:18:04.486834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.587 [2024-04-15 18:18:04.486853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:15.587 [2024-04-15 18:18:04.497912] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x83d700) 00:31:15.587 [2024-04-15 18:18:04.497952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.587 [2024-04-15 18:18:04.497971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:15.587 [2024-04-15 18:18:04.508929] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x83d700) 00:31:15.587 [2024-04-15 18:18:04.508961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.587 [2024-04-15 18:18:04.508981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:15.587 [2024-04-15 18:18:04.520166] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x83d700) 00:31:15.587 [2024-04-15 18:18:04.520198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.587 [2024-04-15 18:18:04.520217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:15.587 [2024-04-15 18:18:04.531411] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x83d700) 00:31:15.587 [2024-04-15 18:18:04.531444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.587 [2024-04-15 18:18:04.531464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:15.847 [2024-04-15 18:18:04.542538] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x83d700) 00:31:15.847 [2024-04-15 18:18:04.542574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.847 [2024-04-15 18:18:04.542607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:15.847 [2024-04-15 18:18:04.553764] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x83d700) 00:31:15.847 [2024-04-15 18:18:04.553798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.847 [2024-04-15 18:18:04.553819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:15.847 [2024-04-15 18:18:04.564998] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x83d700) 00:31:15.847 [2024-04-15 18:18:04.565031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.847 [2024-04-15 18:18:04.565051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:15.847 [2024-04-15 18:18:04.576015] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x83d700) 00:31:15.847 [2024-04-15 18:18:04.576048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.847 [2024-04-15 18:18:04.576075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:15.847 [2024-04-15 18:18:04.587176] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x83d700) 00:31:15.847 [2024-04-15 18:18:04.587209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.847 [2024-04-15 18:18:04.587228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:15.847 [2024-04-15 18:18:04.598277] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x83d700) 00:31:15.847 [2024-04-15 18:18:04.598309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.847 [2024-04-15 18:18:04.598329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:15.847 [2024-04-15 18:18:04.609255] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x83d700) 00:31:15.847 [2024-04-15 18:18:04.609289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.847 [2024-04-15 18:18:04.609308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:15.847 [2024-04-15 18:18:04.620279] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x83d700) 00:31:15.847 [2024-04-15 18:18:04.620320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.847 [2024-04-15 18:18:04.620339] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:15.847 [2024-04-15 18:18:04.631311] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x83d700) 00:31:15.847 [2024-04-15 18:18:04.631344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.847 [2024-04-15 18:18:04.631363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:15.847 [2024-04-15 18:18:04.642403] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x83d700) 00:31:15.847 [2024-04-15 18:18:04.642443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.847 [2024-04-15 18:18:04.642463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:15.847 [2024-04-15 18:18:04.653575] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x83d700) 00:31:15.847 [2024-04-15 18:18:04.653607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.847 [2024-04-15 18:18:04.653627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:15.847 [2024-04-15 18:18:04.664725] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x83d700) 00:31:15.847 [2024-04-15 18:18:04.664760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.847 [2024-04-15 18:18:04.664779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:15.847 [2024-04-15 18:18:04.675971] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x83d700) 00:31:15.847 [2024-04-15 18:18:04.676004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.847 [2024-04-15 18:18:04.676023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:15.847 [2024-04-15 18:18:04.687141] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x83d700) 00:31:15.847 [2024-04-15 18:18:04.687177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.847 [2024-04-15 18:18:04.687196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:15.847 [2024-04-15 18:18:04.698188] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x83d700) 00:31:15.847 [2024-04-15 18:18:04.698221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:15.847 [2024-04-15 18:18:04.698240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:15.847 [2024-04-15 18:18:04.709227] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x83d700) 00:31:15.847 [2024-04-15 18:18:04.709260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.847 [2024-04-15 18:18:04.709280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:15.847 [2024-04-15 18:18:04.720356] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x83d700) 00:31:15.847 [2024-04-15 18:18:04.720389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.847 [2024-04-15 18:18:04.720408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:15.847 [2024-04-15 18:18:04.731395] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x83d700) 00:31:15.847 [2024-04-15 18:18:04.731428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.847 [2024-04-15 18:18:04.731447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:15.847 [2024-04-15 18:18:04.742502] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x83d700) 00:31:15.847 [2024-04-15 18:18:04.742545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.847 [2024-04-15 18:18:04.742564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:15.847 [2024-04-15 18:18:04.753848] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x83d700) 00:31:15.847 [2024-04-15 18:18:04.753881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.847 [2024-04-15 18:18:04.753900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:15.847 [2024-04-15 18:18:04.765168] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x83d700) 00:31:15.847 [2024-04-15 18:18:04.765201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.847 [2024-04-15 18:18:04.765220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:15.847 [2024-04-15 18:18:04.776222] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x83d700) 00:31:15.847 [2024-04-15 18:18:04.776255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.847 [2024-04-15 18:18:04.776275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:15.847 [2024-04-15 18:18:04.787692] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x83d700) 00:31:15.847 [2024-04-15 18:18:04.787739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.847 [2024-04-15 18:18:04.787758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:15.847 [2024-04-15 18:18:04.799827] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x83d700) 00:31:15.847 [2024-04-15 18:18:04.799864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.847 [2024-04-15 18:18:04.799884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:16.107 [2024-04-15 18:18:04.811957] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x83d700) 00:31:16.108 [2024-04-15 18:18:04.811994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.108 [2024-04-15 18:18:04.812014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:16.108 [2024-04-15 18:18:04.823245] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x83d700) 00:31:16.108 [2024-04-15 18:18:04.823277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.108 [2024-04-15 18:18:04.823297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.108 [2024-04-15 18:18:04.834236] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x83d700) 00:31:16.108 [2024-04-15 18:18:04.834270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.108 [2024-04-15 18:18:04.834298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:16.108 [2024-04-15 18:18:04.845231] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x83d700) 00:31:16.108 [2024-04-15 18:18:04.845264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.108 [2024-04-15 18:18:04.845283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:16.108 [2024-04-15 18:18:04.856417] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x83d700) 00:31:16.108 [2024-04-15 18:18:04.856451] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.108 [2024-04-15 18:18:04.856470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:16.108 [2024-04-15 18:18:04.867770] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x83d700) 00:31:16.108 [2024-04-15 18:18:04.867803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.108 [2024-04-15 18:18:04.867823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.108 [2024-04-15 18:18:04.878754] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x83d700) 00:31:16.108 [2024-04-15 18:18:04.878788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.108 [2024-04-15 18:18:04.878807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:16.108 [2024-04-15 18:18:04.889804] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x83d700) 00:31:16.108 [2024-04-15 18:18:04.889840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.108 [2024-04-15 18:18:04.889859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:16.108 [2024-04-15 18:18:04.900659] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x83d700) 00:31:16.108 [2024-04-15 18:18:04.900692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.108 [2024-04-15 18:18:04.900712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:16.108 [2024-04-15 18:18:04.911676] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x83d700) 00:31:16.108 [2024-04-15 18:18:04.911709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.108 [2024-04-15 18:18:04.911728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.108 [2024-04-15 18:18:04.922722] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x83d700) 00:31:16.108 [2024-04-15 18:18:04.922754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.108 [2024-04-15 18:18:04.922773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:16.108 [2024-04-15 18:18:04.933620] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x83d700) 
00:31:16.108 [2024-04-15 18:18:04.933652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.108 [2024-04-15 18:18:04.933671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:16.108 [2024-04-15 18:18:04.944572] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x83d700) 00:31:16.108 [2024-04-15 18:18:04.944605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.108 [2024-04-15 18:18:04.944623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:16.108 [2024-04-15 18:18:04.955384] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x83d700) 00:31:16.108 [2024-04-15 18:18:04.955418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.108 [2024-04-15 18:18:04.955438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.108 [2024-04-15 18:18:04.966301] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x83d700) 00:31:16.108 [2024-04-15 18:18:04.966338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.108 [2024-04-15 18:18:04.966358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:16.108 [2024-04-15 18:18:04.977345] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x83d700) 00:31:16.108 [2024-04-15 18:18:04.977379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.108 [2024-04-15 18:18:04.977398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:16.108 [2024-04-15 18:18:04.988202] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x83d700) 00:31:16.108 [2024-04-15 18:18:04.988235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.108 [2024-04-15 18:18:04.988254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:16.108 [2024-04-15 18:18:04.999139] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x83d700) 00:31:16.108 [2024-04-15 18:18:04.999172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.108 [2024-04-15 18:18:04.999191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.108 [2024-04-15 18:18:05.010407] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x83d700) 00:31:16.108 [2024-04-15 18:18:05.010440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.108 [2024-04-15 18:18:05.010459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:16.108 [2024-04-15 18:18:05.021414] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x83d700) 00:31:16.108 [2024-04-15 18:18:05.021447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.108 [2024-04-15 18:18:05.021474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:16.108 [2024-04-15 18:18:05.032385] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x83d700) 00:31:16.108 [2024-04-15 18:18:05.032418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.108 [2024-04-15 18:18:05.032437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:16.108 [2024-04-15 18:18:05.043386] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x83d700) 00:31:16.108 [2024-04-15 18:18:05.043419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.108 [2024-04-15 18:18:05.043438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.108 [2024-04-15 18:18:05.054310] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x83d700) 00:31:16.108 [2024-04-15 18:18:05.054343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.108 [2024-04-15 18:18:05.054363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:16.368 [2024-04-15 18:18:05.065872] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x83d700) 00:31:16.368 [2024-04-15 18:18:05.065909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.368 [2024-04-15 18:18:05.065929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:16.368 [2024-04-15 18:18:05.076939] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x83d700) 00:31:16.368 [2024-04-15 18:18:05.076973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.368 [2024-04-15 18:18:05.076994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 
00:31:16.368
00:31:16.368 Latency(us)
00:31:16.368 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:16.368 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:31:16.368 nvme0n1 : 2.01 2798.00 349.75 0.00 0.00 5712.39 5121.52 12913.02
00:31:16.368 ===================================================================================================================
00:31:16.368 Total : 2798.00 349.75 0.00 0.00 5712.39 5121.52 12913.02
00:31:16.368 0
00:31:16.368 18:18:05 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:31:16.368 18:18:05 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:31:16.368 18:18:05 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:31:16.368 18:18:05 -- host/digest.sh@28 -- # jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
00:31:16.628 18:18:05 -- host/digest.sh@71 -- # (( 181 > 0 ))
00:31:16.628 18:18:05 -- host/digest.sh@73 -- # killprocess 3454210
00:31:16.628 18:18:05 -- common/autotest_common.sh@936 -- # '[' -z 3454210 ']'
00:31:16.628 18:18:05 -- common/autotest_common.sh@940 -- # kill -0 3454210
00:31:16.628 18:18:05 -- common/autotest_common.sh@941 -- # uname
00:31:16.628 18:18:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:31:16.628 18:18:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3454210
00:31:16.628 18:18:05 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:31:16.628 18:18:05 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:31:16.628 18:18:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3454210'
killing process with pid 3454210
18:18:05 -- common/autotest_common.sh@955 -- # kill 3454210
Received shutdown signal, test time was about 2.000000 seconds
00:31:16.628
00:31:16.628 Latency(us)
00:31:16.628 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:16.628 ===================================================================================================================
00:31:16.628 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:31:16.628 18:18:05 -- common/autotest_common.sh@960 -- # wait 3454210
00:31:16.886 18:18:05 -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
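Before the next run_bperf_err pass starts, note what the (( 181 > 0 )) step above was checking: it is the digest test's pass gate. The counter comes from bdevperf's iostat, fetched over the bperf RPC socket and filtered with jq. A minimal sketch of that helper chain, reconstructed from the xtrace output above rather than copied verbatim from host/digest.sh (the helper body is an assumption):

# Sketch: transient-error gate as traced above (helper body assumed).
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
get_transient_errcount() {
    local bdev=$1
    # bdev_nvme_set_options --nvme-error-stat (set before attach) makes
    # bdev_get_iostat report per-status-code NVMe error counters under
    # driver_specific.nvme_error.
    "$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" |
        jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
}
# The injected crc32c corruption must surface as transient transport errors
# (181 of them in the randread run above) for the test to pass.
(( $(get_transient_errcount nvme0n1) > 0 ))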
00:31:16.886 18:18:05 -- host/digest.sh@54 -- # local rw bs qd
00:31:16.886 18:18:05 -- host/digest.sh@56 -- # rw=randwrite
00:31:16.886 18:18:05 -- host/digest.sh@56 -- # bs=4096
00:31:16.886 18:18:05 -- host/digest.sh@56 -- # qd=128
00:31:16.886 18:18:05 -- host/digest.sh@58 -- # bperfpid=3454720
00:31:16.886 18:18:05 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:31:16.886 18:18:05 -- host/digest.sh@60 -- # waitforlisten 3454720 /var/tmp/bperf.sock
00:31:16.886 18:18:05 -- common/autotest_common.sh@817 -- # '[' -z 3454720 ']'
00:31:16.886 18:18:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock
00:31:16.886 18:18:05 -- common/autotest_common.sh@822 -- # local max_retries=100
00:31:16.886 18:18:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
18:18:05 -- common/autotest_common.sh@826 -- # xtrace_disable
00:31:17.145 18:18:05 -- common/autotest_common.sh@10 -- # set +x
00:31:17.145 [2024-04-15 18:18:05.874154] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization...
00:31:17.145 [2024-04-15 18:18:05.874236] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3454720 ]
00:31:17.145 EAL: No free 2048 kB hugepages reported on node 1
00:31:17.145 [2024-04-15 18:18:05.942293] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:17.145 [2024-04-15 18:18:06.033867] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:31:17.403 18:18:06 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:31:17.403 18:18:06 -- common/autotest_common.sh@850 -- # return 0
00:31:17.403 18:18:06 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:31:17.403 18:18:06 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:31:17.662 18:18:06 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:31:17.662 18:18:06 -- common/autotest_common.sh@549 -- # xtrace_disable
00:31:17.662 18:18:06 -- common/autotest_common.sh@10 -- # set +x
00:31:17.662 18:18:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:31:17.662 18:18:06 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:31:17.662 18:18:06 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:31:18.228 nvme0n1
00:31:18.228 18:18:07 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:31:18.228 18:18:07 -- common/autotest_common.sh@549 -- # xtrace_disable
00:31:18.228 18:18:07 -- common/autotest_common.sh@10 -- # set +x
00:31:18.228 18:18:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:31:18.228 18:18:07 -- host/digest.sh@69 -- # bperf_py perform_tests
00:31:18.228 18:18:07 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
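Condensed, the setup just traced is: start bdevperf idle (-z), enable per-status-code NVMe error accounting with unlimited retries, keep crc32c error injection disabled while the controller attaches with data digests enabled (--ddgst), then arm the injection and trigger the 2-second run. A sketch under those assumptions follows; bperf_rpc and rpc_cmd are treated as thin rpc.py wrappers, and which RPC socket rpc_cmd targets is an assumption, since the trace does not expand it:

# Sketch: randwrite error-injection setup as traced above.
spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
sock=/var/tmp/bperf.sock

# -z keeps bdevperf idle until perform_tests arrives over its RPC socket.
"$spdk"/build/examples/bdevperf -m 2 -r "$sock" -w randwrite -o 4096 -t 2 -q 128 -z &

# Retry failed I/O indefinitely and count NVMe errors per status code, so
# digest failures become statistics rather than failed jobs.
"$spdk"/scripts/rpc.py -s "$sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# rpc_cmd in the trace carries no -s flag; assumed to hit the target app's
# default RPC socket. Injection stays off while the controller attaches.
"$spdk"/scripts/rpc.py accel_error_inject_error -o crc32c -t disable
"$spdk"/scripts/rpc.py -s "$sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Corrupt the next 256 crc32c results, then drive I/O for the -t 2 window.
"$spdk"/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
"$spdk"/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests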
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:18.488 Running I/O for 2 seconds... 00:31:18.488 [2024-04-15 18:18:07.235301] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821640) with pdu=0x2000190edd58 00:31:18.488 [2024-04-15 18:18:07.236418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:18980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.488 [2024-04-15 18:18:07.236461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:18.488 [2024-04-15 18:18:07.247797] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821640) with pdu=0x2000190fa3a0 00:31:18.488 [2024-04-15 18:18:07.248888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:17001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.488 [2024-04-15 18:18:07.248922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:31:18.488 [2024-04-15 18:18:07.261644] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821640) with pdu=0x2000190e3d08 00:31:18.488 [2024-04-15 18:18:07.262913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:24702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.488 [2024-04-15 18:18:07.262946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:31:18.488 [2024-04-15 18:18:07.275540] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821640) with pdu=0x2000190e49b0 00:31:18.488 [2024-04-15 18:18:07.276975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:13034 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.488 [2024-04-15 18:18:07.277007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:31:18.488 [2024-04-15 18:18:07.289358] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821640) with pdu=0x2000190e7c50 00:31:18.488 [2024-04-15 18:18:07.290998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:11215 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.488 [2024-04-15 18:18:07.291031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:31:18.488 [2024-04-15 18:18:07.303183] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821640) with pdu=0x2000190fa3a0 00:31:18.488 [2024-04-15 18:18:07.304984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:17908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.488 [2024-04-15 18:18:07.305026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:18.488 [2024-04-15 18:18:07.316943] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821640) with pdu=0x2000190f0bc0 00:31:18.488 [2024-04-15 18:18:07.318945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:16827 len:1 SGL DATA BLOCK OFFSET 0x0 
00:31:18.488 [2024-04-15 18:18:07.318945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:16827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.488 [2024-04-15 18:18:07.318986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:31:18.488 [2024-04-15 18:18:07.330722] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821640) with pdu=0x2000190ecc78
00:31:18.488 [2024-04-15 18:18:07.332901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:17668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.488 [2024-04-15 18:18:07.332933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:31:18.488 [2024-04-15 18:18:07.340071] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821640) with pdu=0x2000190eaef0
00:31:18.488 [2024-04-15 18:18:07.341001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:15297 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.488 [2024-04-15 18:18:07.341033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:31:18.488 [2024-04-15 18:18:07.354994] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821640) with pdu=0x2000190f96f8
00:31:18.488 [2024-04-15 18:18:07.356137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:4652 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.488 [2024-04-15 18:18:07.356169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:31:18.488 [2024-04-15 18:18:07.367188] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821640) with pdu=0x2000190ec408
00:31:18.488 [2024-04-15 18:18:07.369141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.488 [2024-04-15 18:18:07.369173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:31:18.488 [2024-04-15 18:18:07.378803] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821640) with pdu=0x2000190e5220
00:31:18.488 [2024-04-15 18:18:07.379724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:17092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.488 [2024-04-15 18:18:07.379756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:31:18.488 [2024-04-15 18:18:07.392620] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821640) with pdu=0x2000190f4b08
00:31:18.488 [2024-04-15 18:18:07.393714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:6044 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.488 [2024-04-15 18:18:07.393746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:31:18.488 [2024-04-15 18:18:07.406387] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821640) with pdu=0x2000190e27f0
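
The pair in parentheses on each completion is NVMe status code type / status code, so (00/22) decodes as Generic Command Status / Transient Transport Error, and dnr:0 means the do-not-retry bit is clear, which is why the driver may resubmit each failed WRITE. A small decoder for the pairs that occur in this run; decode_nvme_status is a hypothetical helper, not part of the test suite:

    decode_nvme_status() {
        # $1 = status code type (sct), $2 = status code (sc), hex as printed
        case "$1/$2" in
            00/00) echo "generic: successful completion" ;;
            00/02) echo "generic: invalid field in command" ;;
            00/22) echo "generic: transient transport error (retryable while dnr is 0)" ;;
            *)     echo "sct=$1 sc=$2: see the NVMe base spec status tables" ;;
        esac
    }
    decode_nvme_status 00 22   # -> generic: transient transport error (retryable while dnr is 0)
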
00:31:18.488 [2024-04-15 18:18:07.407661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:5406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.488 [2024-04-15 18:18:07.407692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:31:18.488 [2024-04-15 18:18:07.420151] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821640) with pdu=0x2000190e2c28
00:31:18.488 [2024-04-15 18:18:07.421599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:5500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.488 [2024-04-15 18:18:07.421630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:31:18.488 [2024-04-15 18:18:07.433900] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821640) with pdu=0x2000190fe2e8
00:31:18.488 [2024-04-15 18:18:07.435536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:19760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.488 [2024-04-15 18:18:07.435568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:31:18.747 [2024-04-15 18:18:07.447771] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821640) with pdu=0x2000190f4b08
00:31:18.747 [2024-04-15 18:18:07.449588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:1264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.747 [2024-04-15 18:18:07.449621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:31:18.747 [2024-04-15 18:18:07.461569] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821640) with pdu=0x2000190fc560
00:31:18.747 [2024-04-15 18:18:07.463556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.747 [2024-04-15 18:18:07.463589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:31:18.747 [2024-04-15 18:18:07.475361] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821640) with pdu=0x2000190e9168
00:31:18.747 [2024-04-15 18:18:07.477525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:16546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.747 [2024-04-15 18:18:07.477557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:31:18.747 [2024-04-15 18:18:07.484708] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821640) with pdu=0x2000190fb048
00:31:18.747 [2024-04-15 18:18:07.485656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:5051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.747 [2024-04-15 18:18:07.485687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:31:18.747 [2024-04-15 18:18:07.499840] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821640) with pdu=0x2000190f0ff8 [2024-04-15 18:18:07.500969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120
nsid:1 lba:6976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.747 [2024-04-15 18:18:07.501000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:31:18.747 [2024-04-15 18:18:07.512072] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821640) with pdu=0x2000190e3d08 00:31:18.747 [2024-04-15 18:18:07.514050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:14998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.747 [2024-04-15 18:18:07.514089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:18.747 [2024-04-15 18:18:07.523655] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821640) with pdu=0x2000190ee190 00:31:18.747 [2024-04-15 18:18:07.524558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.747 [2024-04-15 18:18:07.524588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:31:18.747 [2024-04-15 18:18:07.537409] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821640) with pdu=0x2000190ecc78 00:31:18.747 [2024-04-15 18:18:07.538480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:15949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.747 [2024-04-15 18:18:07.538511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:31:18.747 [2024-04-15 18:18:07.551176] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821640) with pdu=0x2000190e5220 00:31:18.747 [2024-04-15 18:18:07.552431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:11771 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.747 [2024-04-15 18:18:07.552463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:18.748 [2024-04-15 18:18:07.564960] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821640) with pdu=0x2000190df118 00:31:18.748 [2024-04-15 18:18:07.566417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:10856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.748 [2024-04-15 18:18:07.566448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:18.748 [2024-04-15 18:18:07.578697] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821640) with pdu=0x2000190f6cc8 00:31:18.748 [2024-04-15 18:18:07.580330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:18204 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.748 [2024-04-15 18:18:07.580361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:31:18.748 [2024-04-15 18:18:07.592493] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821640) with pdu=0x2000190ecc78 00:31:18.748 [2024-04-15 18:18:07.594286] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:4922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.748 [2024-04-15 18:18:07.594317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:18.748 [2024-04-15 18:18:07.606281] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821640) with pdu=0x2000190de470 00:31:18.748 [2024-04-15 18:18:07.608250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:20103 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.748 [2024-04-15 18:18:07.608281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:31:18.748 [2024-04-15 18:18:07.620074] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821640) with pdu=0x2000190f9b30 00:31:18.748 [2024-04-15 18:18:07.622237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:8074 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.748 [2024-04-15 18:18:07.622268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:31:18.748 [2024-04-15 18:18:07.629452] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821640) with pdu=0x2000190f4f40 00:31:18.748 [2024-04-15 18:18:07.630379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:2737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.748 [2024-04-15 18:18:07.630410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:31:18.748 [2024-04-15 18:18:07.643215] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821640) with pdu=0x2000190fe2e8 00:31:18.748 [2024-04-15 18:18:07.644305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:19784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.748 [2024-04-15 18:18:07.644345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:31:18.748 [2024-04-15 18:18:07.657007] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821640) with pdu=0x2000190f6458 00:31:18.748 [2024-04-15 18:18:07.658281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:1585 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.748 [2024-04-15 18:18:07.658330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:18.748 [2024-04-15 18:18:07.670377] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821640) with pdu=0x2000190f2948 00:31:18.748 [2024-04-15 18:18:07.671675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:5434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.748 [2024-04-15 18:18:07.671708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:31:18.748 [2024-04-15 18:18:07.683481] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821640) with pdu=0x2000190f7970 00:31:18.748 [2024-04-15 18:18:07.684752] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:13545 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.748 [2024-04-15 18:18:07.684783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:31:18.748 [2024-04-15 18:18:07.696680] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821640) with pdu=0x2000190f1ca0 00:31:18.748 [2024-04-15 18:18:07.697961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:19884 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.748 [2024-04-15 18:18:07.697993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:31:19.006 [2024-04-15 18:18:07.709911] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821640) with pdu=0x2000190f0bc0 00:31:19.006 [2024-04-15 18:18:07.711198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:13373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.006 [2024-04-15 18:18:07.711231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:31:19.006 [2024-04-15 18:18:07.723081] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821640) with pdu=0x2000190eb760 00:31:19.006 [2024-04-15 18:18:07.724589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:18469 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.006 [2024-04-15 18:18:07.724621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:31:19.006 [2024-04-15 18:18:07.736464] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821640) with pdu=0x2000190e7818 00:31:19.006 [2024-04-15 18:18:07.737774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:15933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.006 [2024-04-15 18:18:07.737805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:31:19.006 [2024-04-15 18:18:07.750209] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821640) with pdu=0x2000190dfdc0 00:31:19.006 [2024-04-15 18:18:07.751656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:17933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.006 [2024-04-15 18:18:07.751688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:31:19.006 [2024-04-15 18:18:07.763993] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821640) with pdu=0x2000190e8088 00:31:19.006 [2024-04-15 18:18:07.765635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:22492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.006 [2024-04-15 18:18:07.765667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:19.006 [2024-04-15 18:18:07.776317] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821640) with pdu=0x2000190e84c0 00:31:19.006 [2024-04-15 
18:18:07.777757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:21608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.006 [2024-04-15 18:18:07.777788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:19.006 [2024-04-15 18:18:07.788986] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821640) with pdu=0x2000190f4298 00:31:19.006 [2024-04-15 18:18:07.790437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:19991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.006 [2024-04-15 18:18:07.790469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:31:19.006 [2024-04-15 18:18:07.802802] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821640) with pdu=0x2000190e01f8 00:31:19.006 [2024-04-15 18:18:07.804423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:14975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.006 [2024-04-15 18:18:07.804454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:31:19.006 [2024-04-15 18:18:07.816596] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821640) with pdu=0x2000190e3498 00:31:19.006 [2024-04-15 18:18:07.818387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:5678 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.006 [2024-04-15 18:18:07.818419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:19.007 [2024-04-15 18:18:07.830344] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821640) with pdu=0x2000190df118 00:31:19.007 [2024-04-15 18:18:07.832309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:8082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.007 [2024-04-15 18:18:07.832340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:31:19.007 [2024-04-15 18:18:07.842603] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821640) with pdu=0x2000190ecc78 00:31:19.007 [2024-04-15 18:18:07.844026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:5328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.007 [2024-04-15 18:18:07.844064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:19.007 [2024-04-15 18:18:07.855526] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821640) with pdu=0x2000190f57b0 00:31:19.007 [2024-04-15 18:18:07.856955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.007 [2024-04-15 18:18:07.856986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:19.007 [2024-04-15 18:18:07.868668] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821640) with pdu=0x2000190fc560 
00:31:19.007 [2024-04-15 18:18:07.870102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:16456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.007 [2024-04-15 18:18:07.870132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:19.007 [2024-04-15 18:18:07.880608] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821640) with pdu=0x2000190e7c50 00:31:19.007 [2024-04-15 18:18:07.882534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:22176 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.007 [2024-04-15 18:18:07.882565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:19.007 [2024-04-15 18:18:07.892148] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821640) with pdu=0x2000190e8d30 00:31:19.007 [2024-04-15 18:18:07.893033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:5554 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.007 [2024-04-15 18:18:07.893069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:31:19.007 [2024-04-15 18:18:07.905937] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821640) with pdu=0x2000190fb048 00:31:19.007 [2024-04-15 18:18:07.907013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:9246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.007 [2024-04-15 18:18:07.907044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:31:19.007 [2024-04-15 18:18:07.919788] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821640) with pdu=0x2000190f8e88 00:31:19.007 [2024-04-15 18:18:07.921073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:23848 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.007 [2024-04-15 18:18:07.921108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:19.007 [2024-04-15 18:18:07.933632] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821640) with pdu=0x2000190eaab8 00:31:19.007 [2024-04-15 18:18:07.935089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:15358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.007 [2024-04-15 18:18:07.935131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:31:19.007 [2024-04-15 18:18:07.947554] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821640) with pdu=0x2000190f2948 00:31:19.007 [2024-04-15 18:18:07.949199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:5365 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.007 [2024-04-15 18:18:07.949231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:31:19.265 [2024-04-15 18:18:07.961426] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821640) 
with pdu=0x2000190fb048 00:31:19.265 [2024-04-15 18:18:07.963252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:18341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.265 [2024-04-15 18:18:07.963286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:19.265 [2024-04-15 18:18:07.975287] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821640) with pdu=0x2000190f31b8 00:31:19.265 [2024-04-15 18:18:07.977251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:13250 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.265 [2024-04-15 18:18:07.977284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:31:19.265 [2024-04-15 18:18:07.989122] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821640) with pdu=0x2000190f5378 00:31:19.265 [2024-04-15 18:18:07.991351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:6540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.265 [2024-04-15 18:18:07.991383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:19.265 [2024-04-15 18:18:08.000742] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821640) with pdu=0x2000190ee190 00:31:19.265 [2024-04-15 18:18:08.002204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:18884 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.265 [2024-04-15 18:18:08.002242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:19.265 [2024-04-15 18:18:08.013945] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821640) with pdu=0x2000190ee190 00:31:19.266 [2024-04-15 18:18:08.015410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:12276 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.266 [2024-04-15 18:18:08.015442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:19.266 [2024-04-15 18:18:08.027642] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821640) with pdu=0x2000190f7970 00:31:19.266 [2024-04-15 18:18:08.029259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:18677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.266 [2024-04-15 18:18:08.029290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:31:19.266 [2024-04-15 18:18:08.038818] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821640) with pdu=0x2000190eb760 00:31:19.266 [2024-04-15 18:18:08.039548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:22166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.266 [2024-04-15 18:18:08.039579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:19.266 [2024-04-15 18:18:08.052696] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1821640) with pdu=0x2000190ff3c8 00:31:19.266 [2024-04-15 18:18:08.053608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:6201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.266 [2024-04-15 18:18:08.053640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:19.266 [2024-04-15 18:18:08.066530] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821640) with pdu=0x2000190de8a8 00:31:19.266 [2024-04-15 18:18:08.067653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.266 [2024-04-15 18:18:08.067684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:19.266 [2024-04-15 18:18:08.078774] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821640) with pdu=0x2000190f4f40 00:31:19.266 [2024-04-15 18:18:08.080709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:16789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.266 [2024-04-15 18:18:08.080740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:31:19.266 [2024-04-15 18:18:08.090422] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821640) with pdu=0x2000190e7818 00:31:19.266 [2024-04-15 18:18:08.091308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:9689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.266 [2024-04-15 18:18:08.091339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:31:19.266 [2024-04-15 18:18:08.104355] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821640) with pdu=0x2000190ed920 00:31:19.266 [2024-04-15 18:18:08.105476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:23388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.266 [2024-04-15 18:18:08.105508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:31:19.266 [2024-04-15 18:18:08.118211] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821640) with pdu=0x2000190ff3c8 00:31:19.266 [2024-04-15 18:18:08.119467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:14959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.266 [2024-04-15 18:18:08.119499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:19.266 [2024-04-15 18:18:08.132095] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821640) with pdu=0x2000190f1868 00:31:19.266 [2024-04-15 18:18:08.133572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:4193 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.266 [2024-04-15 18:18:08.133604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:31:19.266 [2024-04-15 18:18:08.145931] tcp.c:2047:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1821640) with pdu=0x2000190f3e60 00:31:19.266 [2024-04-15 18:18:08.147581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:9092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.266 [2024-04-15 18:18:08.147612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:19.266 [2024-04-15 18:18:08.159852] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821640) with pdu=0x2000190ed920 00:31:19.266 [2024-04-15 18:18:08.161687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:1830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.266 [2024-04-15 18:18:08.161718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:19.266 [2024-04-15 18:18:08.173709] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821640) with pdu=0x2000190f8a50 00:31:19.266 [2024-04-15 18:18:08.175718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:19060 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.266 [2024-04-15 18:18:08.175750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:19.266 [2024-04-15 18:18:08.187542] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821640) with pdu=0x2000190f6cc8 00:31:19.266 [2024-04-15 18:18:08.189719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:18445 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.266 [2024-04-15 18:18:08.189750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:19.266 [2024-04-15 18:18:08.196906] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821640) with pdu=0x2000190e1b48 00:31:19.266 [2024-04-15 18:18:08.197849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:2868 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.266 [2024-04-15 18:18:08.197880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:31:19.266 [2024-04-15 18:18:08.209445] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821640) with pdu=0x2000190e49b0 00:31:19.266 [2024-04-15 18:18:08.210345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:6787 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.266 [2024-04-15 18:18:08.210375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:31:19.524 [2024-04-15 18:18:08.223407] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821640) with pdu=0x2000190fcdd0 00:31:19.524 [2024-04-15 18:18:08.224528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:22265 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.524 [2024-04-15 18:18:08.224563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:31:19.524 [2024-04-15 18:18:08.237366] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821640) with pdu=0x2000190fef90 00:31:19.525 [2024-04-15 18:18:08.238623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:24889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.525 [2024-04-15 18:18:08.238656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:19.525 [2024-04-15 18:18:08.251224] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821640) with pdu=0x2000190f7100 00:31:19.525 [2024-04-15 18:18:08.252692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:11765 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.525 [2024-04-15 18:18:08.252725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:31:19.525 [2024-04-15 18:18:08.265217] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821640) with pdu=0x2000190f4298 00:31:19.525 [2024-04-15 18:18:08.266817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:9547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.525 [2024-04-15 18:18:08.266848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:19.525 [2024-04-15 18:18:08.279087] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821640) with pdu=0x2000190fcdd0 00:31:19.525 [2024-04-15 18:18:08.280905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:14335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.525 [2024-04-15 18:18:08.280936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:19.525 [2024-04-15 18:18:08.292946] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821640) with pdu=0x2000190e5658 00:31:19.525 [2024-04-15 18:18:08.294907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:10359 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.525 [2024-04-15 18:18:08.294939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:19.525 [2024-04-15 18:18:08.306835] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821640) with pdu=0x2000190f1ca0 00:31:19.525 [2024-04-15 18:18:08.309009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:2119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.525 [2024-04-15 18:18:08.309040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:19.525 [2024-04-15 18:18:08.316223] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821640) with pdu=0x2000190f0788 00:31:19.525 [2024-04-15 18:18:08.317124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:17991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.525 [2024-04-15 18:18:08.317154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:31:19.525 
[2024-04-15 18:18:08.330040] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821640) with pdu=0x2000190ea248 00:31:19.525 [2024-04-15 18:18:08.331146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:9215 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.525 [2024-04-15 18:18:08.331177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:31:19.525 [2024-04-15 18:18:08.343967] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821640) with pdu=0x2000190f2510 00:31:19.525 [2024-04-15 18:18:08.345208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:14408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.525 [2024-04-15 18:18:08.345245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:19.525 [2024-04-15 18:18:08.357799] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821640) with pdu=0x2000190f9b30 00:31:19.525 [2024-04-15 18:18:08.359217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:13333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.525 [2024-04-15 18:18:08.359249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:31:19.525 [2024-04-15 18:18:08.371681] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821640) with pdu=0x2000190de8a8 00:31:19.525 [2024-04-15 18:18:08.373283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:19762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.525 [2024-04-15 18:18:08.373313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:31:19.525 [2024-04-15 18:18:08.382872] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821640) with pdu=0x2000190e3d08 00:31:19.525 [2024-04-15 18:18:08.383603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:11740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.525 [2024-04-15 18:18:08.383633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:19.525 [2024-04-15 18:18:08.396709] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821640) with pdu=0x2000190fef90 00:31:19.525 [2024-04-15 18:18:08.397652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:15134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.525 [2024-04-15 18:18:08.397683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:19.525 [2024-04-15 18:18:08.410554] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821640) with pdu=0x2000190fe2e8 00:31:19.525 [2024-04-15 18:18:08.411673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:7870 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.525 [2024-04-15 18:18:08.411704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0076 p:0 m:0 
dnr:0 00:31:19.525 [2024-04-15 18:18:08.422871] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821640) with pdu=0x2000190ef6a8 00:31:19.525 [2024-04-15 18:18:08.424827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:12468 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.525 [2024-04-15 18:18:08.424857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:31:19.525 [2024-04-15 18:18:08.436567] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821640) with pdu=0x2000190eaab8 00:31:19.525 [2024-04-15 18:18:08.437904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:13037 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.525 [2024-04-15 18:18:08.437935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:31:19.525 [2024-04-15 18:18:08.451899] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821640) with pdu=0x2000190e6738 00:31:19.525 [2024-04-15 18:18:08.453913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:9562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.525 [2024-04-15 18:18:08.453943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:31:19.525 [2024-04-15 18:18:08.460941] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821640) with pdu=0x2000190e6b70 00:31:19.525 [2024-04-15 18:18:08.461864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:6928 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.525 [2024-04-15 18:18:08.461894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:31:19.525 [2024-04-15 18:18:08.474852] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821640) with pdu=0x2000190de470 00:31:19.525 [2024-04-15 18:18:08.475913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.525 [2024-04-15 18:18:08.475945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:31:19.784 [2024-04-15 18:18:08.488548] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821640) with pdu=0x2000190edd58 00:31:19.784 [2024-04-15 18:18:08.489650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:23799 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.784 [2024-04-15 18:18:08.489684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:19.784 [2024-04-15 18:18:08.504098] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821640) with pdu=0x2000190f1868 00:31:19.784 [2024-04-15 18:18:08.505736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:16207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.784 [2024-04-15 18:18:08.505768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 
cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:19.784 [2024-04-15 18:18:08.515410] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821640) with pdu=0x2000190e73e0 00:31:19.784 [2024-04-15 18:18:08.516176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:18248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.784 [2024-04-15 18:18:08.516207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:19.784 [2024-04-15 18:18:08.530458] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821640) with pdu=0x2000190f0ff8 00:31:19.784 [2024-04-15 18:18:08.532082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:15061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.784 [2024-04-15 18:18:08.532122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:31:19.784 [2024-04-15 18:18:08.542806] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821640) with pdu=0x2000190eea00 00:31:19.784 [2024-04-15 18:18:08.544232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:15407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.784 [2024-04-15 18:18:08.544262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:31:19.784 [2024-04-15 18:18:08.556481] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821640) with pdu=0x2000190eb760 00:31:19.784 [2024-04-15 18:18:08.557939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:15164 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.784 [2024-04-15 18:18:08.557970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:31:19.784 [2024-04-15 18:18:08.569916] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821640) with pdu=0x2000190f57b0 00:31:19.784 [2024-04-15 18:18:08.571412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:12050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.784 [2024-04-15 18:18:08.571442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:19.784 [2024-04-15 18:18:08.583132] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821640) with pdu=0x2000190fac10 00:31:19.784 [2024-04-15 18:18:08.584606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:10103 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.784 [2024-04-15 18:18:08.584636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:19.784 [2024-04-15 18:18:08.596371] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821640) with pdu=0x2000190f31b8 00:31:19.784 [2024-04-15 18:18:08.597839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:11580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.784 [2024-04-15 18:18:08.597870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:19.784 [2024-04-15 18:18:08.609580] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821640) with pdu=0x2000190e3d08 00:31:19.784 [2024-04-15 18:18:08.611046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:12117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.784 [2024-04-15 18:18:08.611084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:19.784 [2024-04-15 18:18:08.622820] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821640) with pdu=0x2000190f0350 00:31:19.784 [2024-04-15 18:18:08.624249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:21901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.784 [2024-04-15 18:18:08.624278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:19.784 [2024-04-15 18:18:08.635197] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821640) with pdu=0x2000190f8618 00:31:19.784 [2024-04-15 18:18:08.636616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:182 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.784 [2024-04-15 18:18:08.636647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:31:19.784 [2024-04-15 18:18:08.648995] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821640) with pdu=0x2000190f0ff8 00:31:19.784 [2024-04-15 18:18:08.650598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:20273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.784 [2024-04-15 18:18:08.650629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:31:19.784 [2024-04-15 18:18:08.662802] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821640) with pdu=0x2000190e6738 00:31:19.784 [2024-04-15 18:18:08.664556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:15290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.784 [2024-04-15 18:18:08.664586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:19.785 [2024-04-15 18:18:08.676652] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821640) with pdu=0x2000190ddc00 00:31:19.785 [2024-04-15 18:18:08.678616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:10056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.785 [2024-04-15 18:18:08.678652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:31:19.785 [2024-04-15 18:18:08.690484] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821640) with pdu=0x2000190ee190 00:31:19.785 [2024-04-15 18:18:08.692615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:10734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.785 [2024-04-15 18:18:08.692651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:31:19.785 [2024-04-15 18:18:08.699833] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821640) with pdu=0x2000190df988
00:31:19.785 [2024-04-15 18:18:08.700711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:4502 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:19.785 [2024-04-15 18:18:08.700741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
[... the remaining injected-digest-error records from this 2-second randwrite run repeat the same three-line pattern (data_crc32_calc_done error on tqpair 0x1821640, WRITE command print, TRANSIENT TRANSPORT ERROR completion); the counter check below sees 151 such completions ...]
00:31:20.305 [2024-04-15 18:18:09.217581] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821640) with pdu=0x2000190feb58
00:31:20.305 [2024-04-15 18:18:09.218856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:1888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:20.305 [2024-04-15 18:18:09.218886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:31:20.305
00:31:20.305 Latency(us)
00:31:20.305 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:31:20.305 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:31:20.305 nvme0n1                     :       2.00   19325.19      75.49       0.00     0.00    6612.60    2852.03   16893.72
00:31:20.305 ===================================================================================================================
00:31:20.305 Total                       :              19325.19      75.49       0.00     0.00    6612.60    2852.03   16893.72
00:31:20.305 0
00:31:20.305 18:18:09 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:31:20.305 18:18:09 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:31:20.305 18:18:09 -- host/digest.sh@18 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:31:20.305 18:18:09 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:31:20.305 | .driver_specific
00:31:20.305 | .nvme_error
00:31:20.305 | .status_code
00:31:20.305 | .command_transient_transport_error'
00:31:20.871 18:18:09 -- host/digest.sh@71 -- # (( 151 > 0 ))
00:31:20.871 18:18:09 -- host/digest.sh@73 -- # killprocess 3454720
00:31:20.871 18:18:09 -- common/autotest_common.sh@936 -- # '[' -z 3454720 ']'
00:31:20.871 18:18:09 -- common/autotest_common.sh@940 -- # kill -0 3454720
00:31:20.871 18:18:09 -- common/autotest_common.sh@941 -- # uname
00:31:20.871 18:18:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:31:20.871 18:18:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3454720
00:31:20.871 18:18:09 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:31:20.871 18:18:09 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:31:20.871 18:18:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3454720'
killing process with pid 3454720
18:18:09 -- common/autotest_common.sh@955 -- # kill 3454720
Received shutdown signal, test time was about 2.000000 seconds
00:31:20.871
00:31:20.871 Latency(us)
00:31:20.871 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:31:20.871 ===================================================================================================================
00:31:20.871 Total                       :       0.00       0.00       0.00       0.00     0.00       0.00       0.00
00:31:20.871 18:18:09 -- common/autotest_common.sh@960 -- # wait 3454720
00:31:21.130 18:18:09 -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:31:21.130 18:18:09 -- host/digest.sh@54 -- # local rw bs qd
00:31:21.130 18:18:09 -- host/digest.sh@56 -- # rw=randwrite
00:31:21.130 18:18:09 -- host/digest.sh@56 -- # bs=131072
00:31:21.130 18:18:09 -- host/digest.sh@56 -- # qd=16
00:31:21.130 18:18:09 -- host/digest.sh@58 -- # bperfpid=3455735
00:31:21.130 18:18:09 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:31:21.130 18:18:09 -- host/digest.sh@60 -- # waitforlisten 3455735 /var/tmp/bperf.sock
00:31:21.130 18:18:09 -- common/autotest_common.sh@817 -- # '[' -z 3455735 ']'
00:31:21.130 18:18:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock
00:31:21.130 18:18:09 -- common/autotest_common.sh@822 -- # local max_retries=100
00:31:21.130 18:18:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
18:18:09 -- common/autotest_common.sh@826 -- # xtrace_disable
00:31:21.130 18:18:09 -- common/autotest_common.sh@10 -- # set +x
00:31:21.130 [2024-04-15 18:18:09.901469] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization...
[2024-04-15 18:18:09.901555] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3455735 ]
I/O size of 131072 is greater than zero copy threshold (65536). Zero copy mechanism will not be used.
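
The get_transient_errcount/jq sequence traced above is the pass/fail gate for each digest run: with --nvme-error-stat enabled, bdev_get_iostat exposes a per-status-code NVMe error histogram under driver_specific, and the test asserts the transient-transport-error bucket is non-zero (151 here). A minimal standalone sketch of the same check, reusing the socket and bdev names from the trace above:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    SOCK=/var/tmp/bperf.sock
    # bdev_get_iostat returns JSON; with bdev_nvme_set_options --nvme-error-stat
    # the bdev carries driver_specific.nvme_error counters per NVMe status code.
    errcount=$("$RPC" -s "$SOCK" bdev_get_iostat -b nvme0n1 | jq -r \
        '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    # The run passes only if at least one injected digest error completed as a
    # transient transport error (the run above counted 151 of them).
    (( errcount > 0 ))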
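
After the check, the harness kills bdevperf pid 3454720 and relaunches it for the next case (randwrite, 128 KiB blocks, queue depth 16). Pieced together from the xtrace lines, the run_bperf_err helper has roughly the following shape; this is a sketch inferred from the log, not the verbatim host/digest.sh source, and waitforlisten is the autotest_common.sh helper seen in the trace:

    run_bperf_err() {
        local rw bs qd
        rw=$1; bs=$2; qd=$3
        # -z keeps bdevperf idle until perform_tests arrives over the RPC socket;
        # -t 2 bounds the workload to two seconds once it starts.
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
            -m 2 -r /var/tmp/bperf.sock -w "$rw" -o "$bs" -t 2 -q "$qd" -z &
        bperfpid=$!
        # Poll until the new process is listening on the UNIX-domain RPC socket.
        waitforlisten "$bperfpid" /var/tmp/bperf.sock
    }

    run_bperf_err randwrite 131072 16    # the invocation traced above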
00:31:21.130 EAL: No free 2048 kB hugepages reported on node 1
00:31:21.130 [2024-04-15 18:18:09.972157] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:21.130 [2024-04-15 18:18:10.078273] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:31:21.390 18:18:10 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:31:21.390 18:18:10 -- common/autotest_common.sh@850 -- # return 0
00:31:21.390 18:18:10 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:31:21.390 18:18:10 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:31:21.649 18:18:10 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:31:21.649 18:18:10 -- common/autotest_common.sh@549 -- # xtrace_disable
00:31:21.649 18:18:10 -- common/autotest_common.sh@10 -- # set +x
00:31:21.649 18:18:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:31:21.649 18:18:10 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:31:21.649 18:18:10 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:31:22.218 nvme0n1
00:31:22.218 18:18:11 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:31:22.218 18:18:11 -- common/autotest_common.sh@549 -- # xtrace_disable
00:31:22.218 18:18:11 -- common/autotest_common.sh@10 -- # set +x
00:31:22.218 18:18:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:31:22.218 18:18:11 -- host/digest.sh@69 -- # bperf_py perform_tests
00:31:22.218 18:18:11 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:31:22.218 I/O size of 131072 is greater than zero copy threshold (65536). Zero copy mechanism will not be used.
00:31:22.218 Running I/O for 2 seconds...
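
Stripped of the xtrace noise, the setup for this run is four RPCs and a trigger, in a deliberate order: error statistics and unlimited bdev-layer retries are configured first, any stale crc32c injection is cleared, the controller is attached with TCP data digest (--ddgst) enabled, and only then is the accel error injector re-armed before perform_tests starts the timed I/O. All commands below appear verbatim in the trace above (interpreting -i 32 as an injection interval is an assumption; the flag value itself is from the log):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    SOCK=/var/tmp/bperf.sock
    # Record NVMe errors per status code; retry failed I/O at the bdev layer forever.
    "$RPC" -s "$SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Clear any leftover crc32c fault injection from the previous case.
    "$RPC" -s "$SOCK" accel_error_inject_error -o crc32c -t disable
    # Attach the target with TCP data digest (DDGST) on; this prints "nvme0n1".
    "$RPC" -s "$SOCK" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Re-arm the injector so crc32c results are corrupted during the run.
    "$RPC" -s "$SOCK" accel_error_inject_error -o crc32c -t corrupt -i 32
    # Kick off the queued randwrite workload on the idle bdevperf instance.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s "$SOCK" perform_tests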
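
In the output that follows, every injected corruption surfaces as the same three-record group: tcp.c:data_crc32_calc_done reports the digest mismatch on the receive path, nvme_qpair.c prints the affected WRITE (in this capture always qid:1 cid:15, 32-block I/Os, matching the 128 KiB block size), and the completion is logged with status (00/22), that is, status code type 0h (generic) and status code 22h, Transient Transport Error, which marks the failure as retryable by the host. A hedged way to tally the groups from a captured copy of this output (bperf.log is a hypothetical file name, not produced by the harness):

    # Each injected error yields one digest-error line and one matching (00/22)
    # completion, so the two counts below should agree.
    grep -c 'data_crc32_calc_done: \*ERROR\*: Data digest error' bperf.log
    grep -c 'TRANSIENT TRANSPORT ERROR (00/22)' bperf.log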
00:31:22.218 [2024-04-15 18:18:11.166728] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821910) with pdu=0x2000190fef90
00:31:22.218 [2024-04-15 18:18:11.167215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:22.218 [2024-04-15 18:18:11.167257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... the digest-error records logged between 18:18:11.176 and 18:18:12.059 repeat this three-line pattern, varying only the LBA and the cycling sqhd value; all are 32-block WRITEs on qid:1 cid:15 against tqpair 0x1821910, pdu 0x2000190fef90 ...]
00:31:23.255 [2024-04-15 18:18:12.068887] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821910) with pdu=0x2000190fef90
00:31:23.255 [2024-04-15 18:18:12.069247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:23.255 [2024-04-15 18:18:12.069296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*:
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:23.255 [2024-04-15 18:18:12.078199] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821910) with pdu=0x2000190fef90 00:31:23.255 [2024-04-15 18:18:12.078571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.255 [2024-04-15 18:18:12.078603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:23.255 [2024-04-15 18:18:12.088436] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821910) with pdu=0x2000190fef90 00:31:23.255 [2024-04-15 18:18:12.088887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.255 [2024-04-15 18:18:12.088918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:23.255 [2024-04-15 18:18:12.098342] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821910) with pdu=0x2000190fef90 00:31:23.255 [2024-04-15 18:18:12.098807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.255 [2024-04-15 18:18:12.098837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:23.255 [2024-04-15 18:18:12.108298] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821910) with pdu=0x2000190fef90 00:31:23.255 [2024-04-15 18:18:12.108659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.255 [2024-04-15 18:18:12.108691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:23.255 [2024-04-15 18:18:12.117846] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821910) with pdu=0x2000190fef90 00:31:23.255 [2024-04-15 18:18:12.118279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.255 [2024-04-15 18:18:12.118311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:23.255 [2024-04-15 18:18:12.127923] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821910) with pdu=0x2000190fef90 00:31:23.255 [2024-04-15 18:18:12.128300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.255 [2024-04-15 18:18:12.128331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:23.255 [2024-04-15 18:18:12.137701] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821910) with pdu=0x2000190fef90 00:31:23.255 [2024-04-15 18:18:12.138066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.255 [2024-04-15 18:18:12.138097] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:23.255 [2024-04-15 18:18:12.148115] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821910) with pdu=0x2000190fef90 00:31:23.255 [2024-04-15 18:18:12.148490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.255 [2024-04-15 18:18:12.148522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:23.255 [2024-04-15 18:18:12.157537] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821910) with pdu=0x2000190fef90 00:31:23.255 [2024-04-15 18:18:12.157679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.255 [2024-04-15 18:18:12.157710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:23.255 [2024-04-15 18:18:12.166147] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821910) with pdu=0x2000190fef90 00:31:23.255 [2024-04-15 18:18:12.166487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.255 [2024-04-15 18:18:12.166518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:23.255 [2024-04-15 18:18:12.174830] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821910) with pdu=0x2000190fef90 00:31:23.255 [2024-04-15 18:18:12.175269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.255 [2024-04-15 18:18:12.175300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:23.255 [2024-04-15 18:18:12.183982] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821910) with pdu=0x2000190fef90 00:31:23.255 [2024-04-15 18:18:12.184399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.256 [2024-04-15 18:18:12.184431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:23.256 [2024-04-15 18:18:12.193337] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821910) with pdu=0x2000190fef90 00:31:23.256 [2024-04-15 18:18:12.193762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.256 [2024-04-15 18:18:12.193800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:23.256 [2024-04-15 18:18:12.202774] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821910) with pdu=0x2000190fef90 00:31:23.256 [2024-04-15 18:18:12.203126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.256 
[2024-04-15 18:18:12.203157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:23.513 [2024-04-15 18:18:12.212083] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821910) with pdu=0x2000190fef90 00:31:23.513 [2024-04-15 18:18:12.212431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.513 [2024-04-15 18:18:12.212464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:23.513 [2024-04-15 18:18:12.220848] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821910) with pdu=0x2000190fef90 00:31:23.513 [2024-04-15 18:18:12.221198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.513 [2024-04-15 18:18:12.221230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:23.513 [2024-04-15 18:18:12.230050] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821910) with pdu=0x2000190fef90 00:31:23.513 [2024-04-15 18:18:12.230400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.513 [2024-04-15 18:18:12.230432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:23.513 [2024-04-15 18:18:12.239934] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821910) with pdu=0x2000190fef90 00:31:23.513 [2024-04-15 18:18:12.240290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.513 [2024-04-15 18:18:12.240321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:23.513 [2024-04-15 18:18:12.248950] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821910) with pdu=0x2000190fef90 00:31:23.513 [2024-04-15 18:18:12.249293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.513 [2024-04-15 18:18:12.249325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:23.513 [2024-04-15 18:18:12.257621] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821910) with pdu=0x2000190fef90 00:31:23.513 [2024-04-15 18:18:12.257962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.513 [2024-04-15 18:18:12.257993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:23.513 [2024-04-15 18:18:12.267185] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821910) with pdu=0x2000190fef90 00:31:23.513 [2024-04-15 18:18:12.267574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.513 [2024-04-15 18:18:12.267605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:23.513 [2024-04-15 18:18:12.276048] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821910) with pdu=0x2000190fef90 00:31:23.513 [2024-04-15 18:18:12.276407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.513 [2024-04-15 18:18:12.276438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:23.513 [2024-04-15 18:18:12.285385] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821910) with pdu=0x2000190fef90 00:31:23.513 [2024-04-15 18:18:12.285779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.513 [2024-04-15 18:18:12.285810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:23.513 [2024-04-15 18:18:12.294500] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821910) with pdu=0x2000190fef90 00:31:23.513 [2024-04-15 18:18:12.294839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.513 [2024-04-15 18:18:12.294871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:23.513 [2024-04-15 18:18:12.303908] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821910) with pdu=0x2000190fef90 00:31:23.513 [2024-04-15 18:18:12.304256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.513 [2024-04-15 18:18:12.304287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:23.513 [2024-04-15 18:18:12.313044] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821910) with pdu=0x2000190fef90 00:31:23.513 [2024-04-15 18:18:12.313409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.513 [2024-04-15 18:18:12.313440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:23.514 [2024-04-15 18:18:12.322591] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821910) with pdu=0x2000190fef90 00:31:23.514 [2024-04-15 18:18:12.322941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.514 [2024-04-15 18:18:12.322972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:23.514 [2024-04-15 18:18:12.332079] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821910) with pdu=0x2000190fef90 00:31:23.514 [2024-04-15 18:18:12.332448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.514 [2024-04-15 18:18:12.332480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:23.514 [2024-04-15 18:18:12.340941] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821910) with pdu=0x2000190fef90 00:31:23.514 [2024-04-15 18:18:12.341325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.514 [2024-04-15 18:18:12.341357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:23.514 [2024-04-15 18:18:12.350305] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821910) with pdu=0x2000190fef90 00:31:23.514 [2024-04-15 18:18:12.350652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.514 [2024-04-15 18:18:12.350683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:23.514 [2024-04-15 18:18:12.359771] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821910) with pdu=0x2000190fef90 00:31:23.514 [2024-04-15 18:18:12.360116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.514 [2024-04-15 18:18:12.360148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:23.514 [2024-04-15 18:18:12.368778] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821910) with pdu=0x2000190fef90 00:31:23.514 [2024-04-15 18:18:12.369156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.514 [2024-04-15 18:18:12.369187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:23.514 [2024-04-15 18:18:12.377798] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821910) with pdu=0x2000190fef90 00:31:23.514 [2024-04-15 18:18:12.378256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.514 [2024-04-15 18:18:12.378287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:23.514 [2024-04-15 18:18:12.387170] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821910) with pdu=0x2000190fef90 00:31:23.514 [2024-04-15 18:18:12.387506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.514 [2024-04-15 18:18:12.387537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:23.514 [2024-04-15 18:18:12.396265] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821910) with pdu=0x2000190fef90 00:31:23.514 [2024-04-15 18:18:12.396602] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.514 [2024-04-15 18:18:12.396634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:23.514 [2024-04-15 18:18:12.405499] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821910) with pdu=0x2000190fef90 00:31:23.514 [2024-04-15 18:18:12.405837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.514 [2024-04-15 18:18:12.405869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:23.514 [2024-04-15 18:18:12.414787] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821910) with pdu=0x2000190fef90 00:31:23.514 [2024-04-15 18:18:12.415149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.514 [2024-04-15 18:18:12.415181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:23.514 [2024-04-15 18:18:12.423893] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821910) with pdu=0x2000190fef90 00:31:23.514 [2024-04-15 18:18:12.424238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.514 [2024-04-15 18:18:12.424270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:23.514 [2024-04-15 18:18:12.433157] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821910) with pdu=0x2000190fef90 00:31:23.514 [2024-04-15 18:18:12.433518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.514 [2024-04-15 18:18:12.433549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:23.514 [2024-04-15 18:18:12.442748] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821910) with pdu=0x2000190fef90 00:31:23.514 [2024-04-15 18:18:12.443116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.514 [2024-04-15 18:18:12.443147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:23.514 [2024-04-15 18:18:12.452210] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821910) with pdu=0x2000190fef90 00:31:23.514 [2024-04-15 18:18:12.452568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.514 [2024-04-15 18:18:12.452600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:23.514 [2024-04-15 18:18:12.461092] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821910) with pdu=0x2000190fef90 00:31:23.514 
[2024-04-15 18:18:12.461438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.514 [2024-04-15 18:18:12.461469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:23.771 [2024-04-15 18:18:12.470220] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821910) with pdu=0x2000190fef90 00:31:23.771 [2024-04-15 18:18:12.470563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.772 [2024-04-15 18:18:12.470600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:23.772 [2024-04-15 18:18:12.479343] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821910) with pdu=0x2000190fef90 00:31:23.772 [2024-04-15 18:18:12.479720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.772 [2024-04-15 18:18:12.479752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:23.772 [2024-04-15 18:18:12.488789] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821910) with pdu=0x2000190fef90 00:31:23.772 [2024-04-15 18:18:12.489133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.772 [2024-04-15 18:18:12.489165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:23.772 [2024-04-15 18:18:12.498476] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821910) with pdu=0x2000190fef90 00:31:23.772 [2024-04-15 18:18:12.498824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.772 [2024-04-15 18:18:12.498855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:23.772 [2024-04-15 18:18:12.507439] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821910) with pdu=0x2000190fef90 00:31:23.772 [2024-04-15 18:18:12.507844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.772 [2024-04-15 18:18:12.507875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:23.772 [2024-04-15 18:18:12.517340] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821910) with pdu=0x2000190fef90 00:31:23.772 [2024-04-15 18:18:12.517680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.772 [2024-04-15 18:18:12.517711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:23.772 [2024-04-15 18:18:12.526823] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1821910) with pdu=0x2000190fef90 00:31:23.772 [2024-04-15 18:18:12.527222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.772 [2024-04-15 18:18:12.527253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:23.772 [2024-04-15 18:18:12.535436] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821910) with pdu=0x2000190fef90 00:31:23.772 [2024-04-15 18:18:12.535771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.772 [2024-04-15 18:18:12.535801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:23.772 [2024-04-15 18:18:12.545265] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821910) with pdu=0x2000190fef90 00:31:23.772 [2024-04-15 18:18:12.545604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.772 [2024-04-15 18:18:12.545634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:23.772 [2024-04-15 18:18:12.554580] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821910) with pdu=0x2000190fef90 00:31:23.772 [2024-04-15 18:18:12.554916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.772 [2024-04-15 18:18:12.554947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:23.772 [2024-04-15 18:18:12.563781] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821910) with pdu=0x2000190fef90 00:31:23.772 [2024-04-15 18:18:12.564126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.772 [2024-04-15 18:18:12.564157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:23.772 [2024-04-15 18:18:12.572511] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821910) with pdu=0x2000190fef90 00:31:23.772 [2024-04-15 18:18:12.572874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.772 [2024-04-15 18:18:12.572904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:23.772 [2024-04-15 18:18:12.581815] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821910) with pdu=0x2000190fef90 00:31:23.772 [2024-04-15 18:18:12.582160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.772 [2024-04-15 18:18:12.582191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:23.772 [2024-04-15 18:18:12.590956] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821910) with pdu=0x2000190fef90 00:31:23.772 [2024-04-15 18:18:12.591359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.772 [2024-04-15 18:18:12.591399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:23.772 [2024-04-15 18:18:12.600787] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821910) with pdu=0x2000190fef90 00:31:23.772 [2024-04-15 18:18:12.601155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.772 [2024-04-15 18:18:12.601186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:23.772 [2024-04-15 18:18:12.609991] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821910) with pdu=0x2000190fef90 00:31:23.772 [2024-04-15 18:18:12.610359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.772 [2024-04-15 18:18:12.610391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:23.772 [2024-04-15 18:18:12.619611] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821910) with pdu=0x2000190fef90 00:31:23.772 [2024-04-15 18:18:12.619951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.772 [2024-04-15 18:18:12.619989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:23.772 [2024-04-15 18:18:12.628497] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821910) with pdu=0x2000190fef90 00:31:23.772 [2024-04-15 18:18:12.628882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.772 [2024-04-15 18:18:12.628912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:23.772 [2024-04-15 18:18:12.637695] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821910) with pdu=0x2000190fef90 00:31:23.772 [2024-04-15 18:18:12.638035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.772 [2024-04-15 18:18:12.638074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:23.772 [2024-04-15 18:18:12.647003] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821910) with pdu=0x2000190fef90 00:31:23.772 [2024-04-15 18:18:12.647367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.772 [2024-04-15 18:18:12.647399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:31:23.772 [2024-04-15 18:18:12.656392] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821910) with pdu=0x2000190fef90 00:31:23.772 [2024-04-15 18:18:12.656783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.772 [2024-04-15 18:18:12.656813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:23.772 [2024-04-15 18:18:12.665461] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821910) with pdu=0x2000190fef90 00:31:23.772 [2024-04-15 18:18:12.665809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.772 [2024-04-15 18:18:12.665840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:23.772 [2024-04-15 18:18:12.674294] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821910) with pdu=0x2000190fef90 00:31:23.772 [2024-04-15 18:18:12.674641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.772 [2024-04-15 18:18:12.674671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:23.772 [2024-04-15 18:18:12.682973] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821910) with pdu=0x2000190fef90 00:31:23.772 [2024-04-15 18:18:12.683313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.772 [2024-04-15 18:18:12.683343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:23.772 [2024-04-15 18:18:12.691968] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821910) with pdu=0x2000190fef90 00:31:23.772 [2024-04-15 18:18:12.692352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.772 [2024-04-15 18:18:12.692383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:23.772 [2024-04-15 18:18:12.701802] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821910) with pdu=0x2000190fef90 00:31:23.772 [2024-04-15 18:18:12.702178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.772 [2024-04-15 18:18:12.702209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:23.772 [2024-04-15 18:18:12.710587] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821910) with pdu=0x2000190fef90 00:31:23.772 [2024-04-15 18:18:12.710923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.772 [2024-04-15 18:18:12.710954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:23.772 [2024-04-15 18:18:12.719400] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821910) with pdu=0x2000190fef90 00:31:23.772 [2024-04-15 18:18:12.719736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.773 [2024-04-15 18:18:12.719768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:24.031 [2024-04-15 18:18:12.728080] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821910) with pdu=0x2000190fef90 00:31:24.031 [2024-04-15 18:18:12.728440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.031 [2024-04-15 18:18:12.728477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:24.031 [2024-04-15 18:18:12.736862] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821910) with pdu=0x2000190fef90 00:31:24.031 [2024-04-15 18:18:12.737217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.031 [2024-04-15 18:18:12.737249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:24.031 [2024-04-15 18:18:12.746129] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821910) with pdu=0x2000190fef90 00:31:24.031 [2024-04-15 18:18:12.746472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.031 [2024-04-15 18:18:12.746503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:24.031 [2024-04-15 18:18:12.755095] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821910) with pdu=0x2000190fef90 00:31:24.031 [2024-04-15 18:18:12.755495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.031 [2024-04-15 18:18:12.755526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:24.031 [2024-04-15 18:18:12.764106] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821910) with pdu=0x2000190fef90 00:31:24.031 [2024-04-15 18:18:12.764446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.031 [2024-04-15 18:18:12.764477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:24.031 [2024-04-15 18:18:12.773040] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821910) with pdu=0x2000190fef90 00:31:24.031 [2024-04-15 18:18:12.773385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.031 [2024-04-15 18:18:12.773416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:24.031 [2024-04-15 18:18:12.782178] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821910) with pdu=0x2000190fef90 00:31:24.031 [2024-04-15 18:18:12.782648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.031 [2024-04-15 18:18:12.782678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:24.031 [2024-04-15 18:18:12.792560] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821910) with pdu=0x2000190fef90 00:31:24.031 [2024-04-15 18:18:12.792955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.031 [2024-04-15 18:18:12.792985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:24.031 [2024-04-15 18:18:12.802445] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821910) with pdu=0x2000190fef90 00:31:24.031 [2024-04-15 18:18:12.802948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.031 [2024-04-15 18:18:12.802986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:24.031 [2024-04-15 18:18:12.812623] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821910) with pdu=0x2000190fef90 00:31:24.031 [2024-04-15 18:18:12.813072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.031 [2024-04-15 18:18:12.813103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:24.031 [2024-04-15 18:18:12.822587] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821910) with pdu=0x2000190fef90 00:31:24.031 [2024-04-15 18:18:12.823090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.031 [2024-04-15 18:18:12.823122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:24.031 [2024-04-15 18:18:12.832872] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821910) with pdu=0x2000190fef90 00:31:24.031 [2024-04-15 18:18:12.833230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.031 [2024-04-15 18:18:12.833272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:24.031 [2024-04-15 18:18:12.842478] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821910) with pdu=0x2000190fef90 00:31:24.031 [2024-04-15 18:18:12.842857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.031 [2024-04-15 18:18:12.842887] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:24.031 [2024-04-15 18:18:12.852804] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821910) with pdu=0x2000190fef90 00:31:24.031 [2024-04-15 18:18:12.853263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.031 [2024-04-15 18:18:12.853295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:24.031 [2024-04-15 18:18:12.863325] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821910) with pdu=0x2000190fef90 00:31:24.031 [2024-04-15 18:18:12.863711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.031 [2024-04-15 18:18:12.863745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:24.031 [2024-04-15 18:18:12.873476] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821910) with pdu=0x2000190fef90 00:31:24.031 [2024-04-15 18:18:12.873936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.031 [2024-04-15 18:18:12.873967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:24.031 [2024-04-15 18:18:12.883531] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821910) with pdu=0x2000190fef90 00:31:24.031 [2024-04-15 18:18:12.883869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.031 [2024-04-15 18:18:12.883901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:24.031 [2024-04-15 18:18:12.892560] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821910) with pdu=0x2000190fef90 00:31:24.031 [2024-04-15 18:18:12.892900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.031 [2024-04-15 18:18:12.892931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:24.031 [2024-04-15 18:18:12.901463] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821910) with pdu=0x2000190fef90 00:31:24.031 [2024-04-15 18:18:12.901821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.031 [2024-04-15 18:18:12.901853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:24.031 [2024-04-15 18:18:12.910274] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821910) with pdu=0x2000190fef90 00:31:24.031 [2024-04-15 18:18:12.910749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.031 
[2024-04-15 18:18:12.910780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:24.031 [2024-04-15 18:18:12.920113] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821910) with pdu=0x2000190fef90 00:31:24.031 [2024-04-15 18:18:12.920549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.031 [2024-04-15 18:18:12.920581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:24.031 [2024-04-15 18:18:12.930346] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821910) with pdu=0x2000190fef90 00:31:24.031 [2024-04-15 18:18:12.930778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.031 [2024-04-15 18:18:12.930808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:24.031 [2024-04-15 18:18:12.941103] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821910) with pdu=0x2000190fef90 00:31:24.031 [2024-04-15 18:18:12.941578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.031 [2024-04-15 18:18:12.941611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:24.031 [2024-04-15 18:18:12.951312] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821910) with pdu=0x2000190fef90 00:31:24.031 [2024-04-15 18:18:12.951685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.031 [2024-04-15 18:18:12.951716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:24.031 [2024-04-15 18:18:12.961690] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821910) with pdu=0x2000190fef90 00:31:24.031 [2024-04-15 18:18:12.962190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.031 [2024-04-15 18:18:12.962221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:24.031 [2024-04-15 18:18:12.971288] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821910) with pdu=0x2000190fef90 00:31:24.031 [2024-04-15 18:18:12.971690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.031 [2024-04-15 18:18:12.971722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:24.031 [2024-04-15 18:18:12.981118] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821910) with pdu=0x2000190fef90 00:31:24.031 [2024-04-15 18:18:12.981458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.031 [2024-04-15 18:18:12.981491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:31:24.291 [2024-04-15 18:18:12.990534] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821910) with pdu=0x2000190fef90
00:31:24.291 [2024-04-15 18:18:12.990896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.291 [2024-04-15 18:18:12.990930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same three-line pattern (data_crc32_calc_done digest error, WRITE command print, COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion on qid:1 cid:15) repeats for len:32 WRITEs at lba 20864, 5504, 14624, 2752, 19040, 2560, 23168, 23456, 13120, 23648, 2432, 7840, 15072, 21728, 7136 and 20256 ...]
00:31:24.292 [2024-04-15 18:18:13.149195] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821910) with pdu=0x2000190fef90
00:31:24.292 [2024-04-15 18:18:13.149582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.292 [2024-04-15 18:18:13.149613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:31:24.292
00:31:24.292 Latency(us)
00:31:24.292 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:24.292 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:31:24.292 nvme0n1 : 2.00 3184.26 398.03 0.00 0.00 5013.56 3786.52 13204.29
00:31:24.292 ===================================================================================================================
00:31:24.292 Total : 3184.26 398.03 0.00 0.00 5013.56 3786.52 13204.29
00:31:24.292 0
00:31:24.292 18:18:13 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:31:24.292 18:18:13 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:31:24.292 18:18:13 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:31:24.292 | .driver_specific
00:31:24.292 | .nvme_error
00:31:24.292 | .status_code
00:31:24.292 | .command_transient_transport_error'
00:31:24.860 18:18:13 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:31:24.860 18:18:13 -- host/digest.sh@71 -- # (( 205 > 0 ))
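The check traced above is the heart of the digest-error test: after two seconds of random writes with an intentionally corrupted data digest, the bdev iostat must report a nonzero transient-transport-error count (205 in this run). A minimal sketch of that helper's logic, using the rpc.py call and jq filter visible in the trace (the real digest.sh helper may differ in detail):

# Query bdevperf's RPC socket for bdev iostat and pull out the NVMe
# transient-transport-error counter; socket and bdev name match this run.
get_transient_errcount() {
    local bdev=$1
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" |
        jq -r '.bdevs[0]
            | .driver_specific
            | .nvme_error
            | .status_code
            | .command_transient_transport_error'
}

# Pass iff at least one write completed as COMMAND TRANSIENT TRANSPORT ERROR:
(( $(get_transient_errcount nvme0n1) > 0 ))

00:31:24.860 18:18:13 -- host/digest.sh@73 -- # killprocess 3455735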
00:31:24.860 18:18:13 -- common/autotest_common.sh@936 -- # '[' -z 3455735 ']'
00:31:24.860 18:18:13 -- common/autotest_common.sh@940 -- # kill -0 3455735
00:31:24.860 18:18:13 -- common/autotest_common.sh@941 -- # uname
00:31:24.860 18:18:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:31:24.860 18:18:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3455735
00:31:24.860 18:18:13 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:31:24.860 18:18:13 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:31:24.860 18:18:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3455735'
00:31:24.860 killing process with pid 3455735
00:31:24.860 18:18:13 -- common/autotest_common.sh@955 -- # kill 3455735
00:31:24.860 Received shutdown signal, test time was about 2.000000 seconds
00:31:24.860
00:31:24.860 Latency(us)
00:31:24.860 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:24.860 ===================================================================================================================
00:31:24.860 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:31:24.860 18:18:13 -- common/autotest_common.sh@960 -- # wait 3455735
00:31:25.118 18:18:13 -- host/digest.sh@116 -- # killprocess 3453520
00:31:25.118 18:18:13 -- common/autotest_common.sh@936 -- # '[' -z 3453520 ']'
00:31:25.118 18:18:13 -- common/autotest_common.sh@940 -- # kill -0 3453520
00:31:25.118 18:18:13 -- common/autotest_common.sh@941 -- # uname
00:31:25.118 18:18:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:31:25.118 18:18:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3453520
00:31:25.118 18:18:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:31:25.118 18:18:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:31:25.118 18:18:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3453520'
00:31:25.118 killing process with pid 3453520
00:31:25.118 18:18:13 -- common/autotest_common.sh@955 -- # kill 3453520
00:31:25.118 18:18:13 -- common/autotest_common.sh@960 -- # wait 3453520
00:31:25.376
00:31:25.376 real    0m17.496s
00:31:25.376 user    0m36.411s
00:31:25.376 sys     0m4.800s
00:31:25.376 18:18:14 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:31:25.376 18:18:14 -- common/autotest_common.sh@10 -- # set +x
00:31:25.376 ************************************
00:31:25.376 END TEST nvmf_digest_error
00:31:25.376 ************************************
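The teardown just traced leans on one helper used throughout the suite: killprocess first proves the pid is alive with kill -0, names the process for the log, then signals it and reaps it with wait so the exit status is collected. A condensed sketch of that pattern (per the @942/@946 lines above, the real helper also inspects the process name and special-cases sudo-owned processes):

# Condensed killprocess pattern (common/autotest_common.sh@936-960).
# wait works here only because the suite started these daemons itself.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" || return 1     # is it still running?
    echo "killing process with pid $pid"
    kill "$pid"                    # SIGTERM lets SPDK shut down cleanly
    wait "$pid"                    # reap the child, collect exit status
}

00:31:25.376 18:18:14 -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT
00:31:25.376 18:18:14 -- host/digest.sh@150 -- # nvmftestfini
00:31:25.376 18:18:14 -- nvmf/common.sh@477 -- # nvmfcleanup
00:31:25.376 18:18:14 -- nvmf/common.sh@117 -- # sync
00:31:25.376 18:18:14 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:31:25.376 18:18:14 -- nvmf/common.sh@120 -- # set +e
00:31:25.376 18:18:14 -- nvmf/common.sh@121 -- # for i in {1..20}
00:31:25.376 18:18:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:31:25.376 rmmod nvme_tcp
00:31:25.376 rmmod nvme_fabrics
00:31:25.376 rmmod nvme_keyring
00:31:25.376 18:18:14 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:31:25.376 18:18:14 -- nvmf/common.sh@124 -- # set -e
00:31:25.376 18:18:14 -- nvmf/common.sh@125 -- # return 0
00:31:25.376 18:18:14 -- nvmf/common.sh@478 -- # '[' -n 3453520 ']'
00:31:25.376 18:18:14 -- nvmf/common.sh@479 -- # killprocess 3453520
00:31:25.376 18:18:14 -- common/autotest_common.sh@936 -- # '[' -z 3453520 ']'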
18:18:14 -- common/autotest_common.sh@940 -- # kill -0 3453520
00:31:25.376 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (3453520) - No such process
00:31:25.376 18:18:14 -- common/autotest_common.sh@963 -- # echo 'Process with pid 3453520 is not found'
00:31:25.376 Process with pid 3453520 is not found
00:31:25.376 18:18:14 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:31:25.376 18:18:14 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
00:31:25.376 18:18:14 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
00:31:25.376 18:18:14 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:31:25.376 18:18:14 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:31:25.376 18:18:14 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:31:25.376 18:18:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:31:25.376 18:18:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:31:27.292 18:18:16 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:31:27.551
00:31:27.551 real    0m40.912s
00:31:27.551 user    1m15.679s
00:31:27.551 sys     0m11.821s
00:31:27.551 18:18:16 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:31:27.551 18:18:16 -- common/autotest_common.sh@10 -- # set +x
00:31:27.551 ************************************
00:31:27.551 END TEST nvmf_digest
00:31:27.551 ************************************
00:31:27.551 18:18:16 -- nvmf/nvmf.sh@108 -- # [[ 0 -eq 1 ]]
00:31:27.551 18:18:16 -- nvmf/nvmf.sh@113 -- # [[ 0 -eq 1 ]]
00:31:27.551 18:18:16 -- nvmf/nvmf.sh@118 -- # [[ phy == phy ]]
00:31:27.551 18:18:16 -- nvmf/nvmf.sh@119 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:31:27.551 18:18:16 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:31:27.551 18:18:16 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:31:27.551 18:18:16 -- common/autotest_common.sh@10 -- # set +x
00:31:27.551 ************************************
00:31:27.551 START TEST nvmf_bdevperf
00:31:27.551 ************************************
00:31:27.551 18:18:16 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:31:27.551 * Looking for test storage...
00:31:27.551 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:31:27.551 18:18:16 -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:31:27.551 18:18:16 -- nvmf/common.sh@7 -- # uname -s
00:31:27.551 18:18:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:31:27.551 18:18:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:31:27.551 18:18:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:31:27.551 18:18:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:31:27.551 18:18:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:31:27.551 18:18:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:31:27.551 18:18:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:31:27.551 18:18:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:31:27.551 18:18:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:31:27.551 18:18:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:31:27.551 18:18:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
00:31:27.551 18:18:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02
00:31:27.551 18:18:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:31:27.551 18:18:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:31:27.551 18:18:16 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:31:27.551 18:18:16 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:31:27.551 18:18:16 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:31:27.551 18:18:16 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]]
00:31:27.551 18:18:16 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:31:27.551 18:18:16 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:31:27.551 [paths/export.sh@2-@6: PATH is rebuilt by repeatedly prepending /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin ahead of the stock /usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin, then exported and echoed]
00:31:27.551 18:18:16 -- nvmf/common.sh@47 -- # : 0
00:31:27.551 18:18:16 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:31:27.551 18:18:16 -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:31:27.551 18:18:16 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:31:27.551 18:18:16 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:31:27.551 18:18:16 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:31:27.551 18:18:16 -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:31:27.551 18:18:16 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:31:27.551 18:18:16 -- nvmf/common.sh@51 -- # have_pci_nics=0
00:31:27.551 18:18:16 -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64
00:31:27.551 18:18:16 -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:31:27.551 18:18:16 -- host/bdevperf.sh@24 -- # nvmftestinit
00:31:27.551 18:18:16 -- nvmf/common.sh@430 -- # '[' -z tcp ']'
00:31:27.551 18:18:16 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:31:27.551 18:18:16 -- nvmf/common.sh@437 -- # prepare_net_devs
00:31:27.551 18:18:16 -- nvmf/common.sh@399 -- # local -g is_hw=no
00:31:27.551 18:18:16 -- nvmf/common.sh@401 -- # remove_spdk_ns
00:31:27.551 18:18:16 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:31:27.551 18:18:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:31:27.551 18:18:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:31:27.551 18:18:16 -- nvmf/common.sh@403 -- # [[ phy != virt ]]
00:31:27.551 18:18:16 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs
00:31:27.551 18:18:16 -- nvmf/common.sh@285 -- # xtrace_disable
00:31:27.551 18:18:16 -- common/autotest_common.sh@10 -- # set +x
00:31:30.085 18:18:18 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci
00:31:30.085 18:18:18 -- nvmf/common.sh@291 -- # pci_devs=()
00:31:30.085 18:18:18 -- nvmf/common.sh@291 -- # local -a pci_devs
00:31:30.085 18:18:18 -- nvmf/common.sh@292 -- # pci_net_devs=()
00:31:30.085 18:18:18 -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:31:30.085 18:18:18 -- nvmf/common.sh@293 -- # pci_drivers=()
00:31:30.085 18:18:18 -- nvmf/common.sh@293 -- # local -A pci_drivers
00:31:30.085 18:18:18 -- nvmf/common.sh@295 -- # net_devs=()
00:31:30.085 18:18:18 -- nvmf/common.sh@295 -- # local -ga net_devs
00:31:30.085 18:18:18 -- nvmf/common.sh@296 -- # e810=()
00:31:30.085 18:18:18 -- nvmf/common.sh@296 -- # local -ga e810
00:31:30.085 18:18:18 -- nvmf/common.sh@297 -- # x722=()
00:31:30.085 18:18:18 -- nvmf/common.sh@297 -- # local -ga x722
00:31:30.085 18:18:18 -- nvmf/common.sh@298 -- # mlx=()
00:31:30.085 18:18:18 -- nvmf/common.sh@298 -- # local -ga mlx
00:31:30.085 18:18:18 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:31:30.085 18:18:18 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:31:30.085 18:18:18 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:31:30.085 18:18:18 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:31:30.085 18:18:18 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:31:30.085 18:18:18 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:31:30.085 18:18:18 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:31:30.085 18:18:18 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:31:30.085 18:18:18 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:31:30.085 18:18:18 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:31:30.085 18:18:18 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:31:30.085 18:18:18 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:31:30.085 18:18:18 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:31:30.085 18:18:18 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:31:30.085 18:18:18 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:31:30.085 18:18:18 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:31:30.085 18:18:18 -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:31:30.085 18:18:18 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:31:30.085 18:18:18 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)'
00:31:30.085 Found 0000:84:00.0 (0x8086 - 0x159b)
00:31:30.085 18:18:18 -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:31:30.085 18:18:18 -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:31:30.085 18:18:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:31:30.085 18:18:18 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:31:30.085 18:18:18 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:31:30.085 18:18:18 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:31:30.085 18:18:18 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)'
00:31:30.085 Found 0000:84:00.1 (0x8086 - 0x159b)
00:31:30.085 18:18:18 -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:31:30.085 18:18:18 -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:31:30.085 18:18:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:31:30.085 18:18:18 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:31:30.085 18:18:18 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:31:30.085 18:18:18 -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:31:30.085 18:18:18 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:31:30.085 18:18:18 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:31:30.085 18:18:18 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:31:30.085 18:18:18 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:31:30.085 18:18:18 -- nvmf/common.sh@384 -- # (( 1 == 0 ))
00:31:30.085 18:18:18 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:31:30.085 18:18:18 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0'
00:31:30.085 Found net devices under 0000:84:00.0: cvl_0_0
00:31:30.085 18:18:18 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}")
00:31:30.085 18:18:18 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:31:30.085 18:18:18 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:31:30.085 18:18:18 -- nvmf/common.sh@384 -- # (( 1 == 0 ))
00:31:30.085 18:18:18 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:31:30.085 18:18:18 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1'
00:31:30.085 Found net devices under 0000:84:00.1: cvl_0_1
00:31:30.085 18:18:18 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}")
00:31:30.085 18:18:18 -- nvmf/common.sh@393 -- # (( 2 == 0 ))
00:31:30.085 18:18:18 -- nvmf/common.sh@403 -- # is_hw=yes
00:31:30.085 18:18:18 -- nvmf/common.sh@405 -- # [[ yes == yes ]]
00:31:30.085 18:18:18 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]]
00:31:30.085 18:18:18 -- nvmf/common.sh@407 -- # nvmf_tcp_init
00:31:30.085 18:18:18 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:31:30.085 18:18:18 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:31:30.085 18:18:18 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:31:30.085 18:18:18 -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:31:30.085 18:18:18 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:31:30.085 18:18:18 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:31:30.085 18:18:18 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:31:30.085 18:18:18 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:31:30.085 18:18:18 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:31:30.085 18:18:18 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:31:30.085 18:18:18 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:31:30.085 18:18:18 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:31:30.085 18:18:18 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:31:30.085 18:18:18 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:31:30.085 18:18:18 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:31:30.085 18:18:18 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:31:30.085 18:18:18 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:31:30.085 18:18:18 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:31:30.085 18:18:18 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:31:30.085 18:18:18 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:31:30.085 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:31:30.085 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms
00:31:30.085
00:31:30.085 --- 10.0.0.2 ping statistics ---
00:31:30.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:31:30.085 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms
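Collected in one place, the interface wiring just traced gives the standard two-port NVMe/TCP phy topology: the first port becomes the target side inside a private network namespace, the second stays in the root namespace as the initiator, which keeps the kernel from short-circuiting the 10.0.0.0/24 traffic locally. The same commands as in the trace, annotated (a recap, not additional setup):

# Target NIC port lives in a private namespace; initiator port stays in
# the root namespace, so NVMe/TCP traffic really crosses the wire.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
ping -c 1 10.0.0.2                                             # sanity check both ways
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

00:31:30.085 18:18:18 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:31:30.085 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.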
00:31:30.085 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms
00:31:30.085
00:31:30.085 --- 10.0.0.1 ping statistics ---
00:31:30.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:31:30.085 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms
00:31:30.085 18:18:18 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:31:30.085 18:18:18 -- nvmf/common.sh@411 -- # return 0
00:31:30.085 18:18:18 -- nvmf/common.sh@439 -- # '[' '' == iso ']'
00:31:30.085 18:18:18 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:31:30.085 18:18:18 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]]
00:31:30.085 18:18:18 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]]
00:31:30.085 18:18:18 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:31:30.086 18:18:18 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']'
00:31:30.086 18:18:18 -- nvmf/common.sh@463 -- # modprobe nvme-tcp
00:31:30.086 18:18:18 -- host/bdevperf.sh@25 -- # tgt_init
00:31:30.086 18:18:18 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:31:30.086 18:18:18 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:31:30.086 18:18:18 -- common/autotest_common.sh@710 -- # xtrace_disable
00:31:30.086 18:18:18 -- common/autotest_common.sh@10 -- # set +x
00:31:30.086 18:18:18 -- nvmf/common.sh@470 -- # nvmfpid=3458244
00:31:30.086 18:18:18 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:31:30.086 18:18:18 -- nvmf/common.sh@471 -- # waitforlisten 3458244
00:31:30.086 18:18:18 -- common/autotest_common.sh@817 -- # '[' -z 3458244 ']'
00:31:30.086 18:18:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:31:30.086 18:18:18 -- common/autotest_common.sh@822 -- # local max_retries=100
00:31:30.086 18:18:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:31:30.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:31:30.086 18:18:18 -- common/autotest_common.sh@826 -- # xtrace_disable
00:31:30.086 18:18:18 -- common/autotest_common.sh@10 -- # set +x
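waitforlisten, entered above for pid 3458244, is essentially a poll loop: keep issuing an RPC against /var/tmp/spdk.sock until the freshly forked nvmf_tgt answers, giving up after max_retries=100. A minimal sketch of that pattern, assuming any cheap RPC (rpc_get_methods here) is an adequate liveness probe; the real helper differs in detail:

# Start the target in the test namespace, then poll its RPC socket.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!
rpc_addr=/var/tmp/spdk.sock
for ((i = 0; i < 100; i++)); do        # max_retries=100, as in the trace
    ./scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && break
    sleep 0.1
done

00:31:30.086 [2024-04-15 18:18:18.893966] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization...
00:31:30.086 [2024-04-15 18:18:18.894065] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:31:30.086 EAL: No free 2048 kB hugepages reported on node 1
00:31:30.086 [2024-04-15 18:18:18.974331] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3
00:31:30.344 [2024-04-15 18:18:19.069216] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:31:30.344 [2024-04-15 18:18:19.069280] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:31:30.344 [2024-04-15 18:18:19.069297] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:31:30.344 [2024-04-15 18:18:19.069311] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running.
00:31:30.344 [2024-04-15 18:18:19.069324] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.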
00:31:30.344 [2024-04-15 18:18:19.069419] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:31:30.344 [2024-04-15 18:18:19.069687] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:31:30.344 [2024-04-15 18:18:19.069691] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:31:30.344 18:18:19 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:31:30.344 18:18:19 -- common/autotest_common.sh@850 -- # return 0
00:31:30.344 18:18:19 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt
00:31:30.344 18:18:19 -- common/autotest_common.sh@716 -- # xtrace_disable
00:31:30.344 18:18:19 -- common/autotest_common.sh@10 -- # set +x
00:31:30.344 18:18:19 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:31:30.344 18:18:19 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:31:30.344 18:18:19 -- common/autotest_common.sh@549 -- # xtrace_disable
00:31:30.344 18:18:19 -- common/autotest_common.sh@10 -- # set +x
00:31:30.344 [2024-04-15 18:18:19.211533] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:31:30.344 18:18:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:31:30.344 18:18:19 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:31:30.344 18:18:19 -- common/autotest_common.sh@549 -- # xtrace_disable
00:31:30.344 18:18:19 -- common/autotest_common.sh@10 -- # set +x
00:31:30.344 Malloc0
00:31:30.344 18:18:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:31:30.344 18:18:19 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:31:30.344 18:18:19 -- common/autotest_common.sh@549 -- # xtrace_disable
00:31:30.344 18:18:19 -- common/autotest_common.sh@10 -- # set +x
00:31:30.344 18:18:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:31:30.345 18:18:19 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:31:30.345 18:18:19 -- common/autotest_common.sh@549 -- # xtrace_disable
00:31:30.345 18:18:19 -- common/autotest_common.sh@10 -- # set +x
00:31:30.345 18:18:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:31:30.345 18:18:19 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:31:30.345 18:18:19 -- common/autotest_common.sh@549 -- # xtrace_disable
00:31:30.345 18:18:19 -- common/autotest_common.sh@10 -- # set +x
00:31:30.345 [2024-04-15 18:18:19.268871] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:31:30.345 18:18:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
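Stripped of the rpc_cmd wrapper and xtrace noise, the whole target configuration above is five RPCs: create the TCP transport, back it with a 64 MiB, 512 B-block malloc bdev, create a subsystem, attach the namespace, and open the listener. The same sequence with scripts/rpc.py invoked directly (rpc_cmd is a thin wrapper that forwards to it):

# The five RPCs that build the target used by this test.
rpc="./scripts/rpc.py -s /var/tmp/spdk.sock"
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0      # MALLOC_BDEV_SIZE, MALLOC_BLOCK_SIZE
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

00:31:30.345 18:18:19 -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1
00:31:30.345 18:18:19 -- host/bdevperf.sh@27 -- # gen_nvmf_target_json
00:31:30.345 18:18:19 -- nvmf/common.sh@521 -- # config=()
00:31:30.345 18:18:19 -- nvmf/common.sh@521 -- # local subsystem config
00:31:30.345 18:18:19 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}"
00:31:30.345 18:18:19 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF
00:31:30.345 {
00:31:30.345   "params": {
00:31:30.345     "name": "Nvme$subsystem",
00:31:30.345     "trtype": "$TEST_TRANSPORT",
00:31:30.345     "traddr": "$NVMF_FIRST_TARGET_IP",
00:31:30.345     "adrfam": "ipv4",
00:31:30.345     "trsvcid": "$NVMF_PORT",
00:31:30.345     "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",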
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:30.345 "hdgst": ${hdgst:-false}, 00:31:30.345 "ddgst": ${ddgst:-false} 00:31:30.345 }, 00:31:30.345 "method": "bdev_nvme_attach_controller" 00:31:30.345 } 00:31:30.345 EOF 00:31:30.345 )") 00:31:30.345 18:18:19 -- nvmf/common.sh@543 -- # cat 00:31:30.345 18:18:19 -- nvmf/common.sh@545 -- # jq . 00:31:30.345 18:18:19 -- nvmf/common.sh@546 -- # IFS=, 00:31:30.345 18:18:19 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:31:30.345 "params": { 00:31:30.345 "name": "Nvme1", 00:31:30.345 "trtype": "tcp", 00:31:30.345 "traddr": "10.0.0.2", 00:31:30.345 "adrfam": "ipv4", 00:31:30.345 "trsvcid": "4420", 00:31:30.345 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:30.345 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:30.345 "hdgst": false, 00:31:30.345 "ddgst": false 00:31:30.345 }, 00:31:30.345 "method": "bdev_nvme_attach_controller" 00:31:30.345 }' 00:31:30.604 [2024-04-15 18:18:19.316203] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:31:30.604 [2024-04-15 18:18:19.316289] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3458277 ] 00:31:30.604 EAL: No free 2048 kB hugepages reported on node 1 00:31:30.604 [2024-04-15 18:18:19.382384] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:30.604 [2024-04-15 18:18:19.471003] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:30.604 [2024-04-15 18:18:19.479749] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:31:30.862 Running I/O for 1 seconds... 00:31:31.798 00:31:31.798 Latency(us) 00:31:31.798 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:31.798 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:31:31.798 Verification LBA range: start 0x0 length 0x4000 00:31:31.798 Nvme1n1 : 1.01 8498.63 33.20 0.00 0.00 15002.18 2172.40 15534.46 00:31:31.798 =================================================================================================================== 00:31:31.798 Total : 8498.63 33.20 0.00 0.00 15002.18 2172.40 15534.46 00:31:32.056 18:18:20 -- host/bdevperf.sh@30 -- # bdevperfpid=3458464 00:31:32.056 18:18:20 -- host/bdevperf.sh@32 -- # sleep 3 00:31:32.056 18:18:20 -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:31:32.057 18:18:20 -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:31:32.057 18:18:20 -- nvmf/common.sh@521 -- # config=() 00:31:32.057 18:18:20 -- nvmf/common.sh@521 -- # local subsystem config 00:31:32.057 18:18:20 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:31:32.057 18:18:20 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:31:32.057 { 00:31:32.057 "params": { 00:31:32.057 "name": "Nvme$subsystem", 00:31:32.057 "trtype": "$TEST_TRANSPORT", 00:31:32.057 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:32.057 "adrfam": "ipv4", 00:31:32.057 "trsvcid": "$NVMF_PORT", 00:31:32.057 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:32.057 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:32.057 "hdgst": ${hdgst:-false}, 00:31:32.057 "ddgst": ${ddgst:-false} 00:31:32.057 }, 00:31:32.057 "method": "bdev_nvme_attach_controller" 00:31:32.057 } 00:31:32.057 EOF 00:31:32.057 )") 00:31:32.057 18:18:20 -- nvmf/common.sh@543 -- # cat 00:31:32.057 
00:31:32.057 18:18:20 -- nvmf/common.sh@545 -- # jq .
00:31:32.057 18:18:20 -- nvmf/common.sh@546 -- # IFS=,
00:31:32.057 18:18:20 -- nvmf/common.sh@547 -- # printf '%s\n' '{
00:31:32.057   "params": {
00:31:32.057     "name": "Nvme1",
00:31:32.057     "trtype": "tcp",
00:31:32.057     "traddr": "10.0.0.2",
00:31:32.057     "adrfam": "ipv4",
00:31:32.057     "trsvcid": "4420",
00:31:32.057     "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:31:32.057     "hostnqn": "nqn.2016-06.io.spdk:host1",
00:31:32.057     "hdgst": false,
00:31:32.057     "ddgst": false
00:31:32.057   },
00:31:32.057   "method": "bdev_nvme_attach_controller"
00:31:32.057 }'
00:31:32.057 [2024-04-15 18:18:20.956999] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization...
00:31:32.057 [2024-04-15 18:18:20.957120] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3458464 ]
00:31:32.057 EAL: No free 2048 kB hugepages reported on node 1
00:31:32.315 [2024-04-15 18:18:21.020430] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:32.315 [2024-04-15 18:18:21.103841] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:31:32.315 [2024-04-15 18:18:21.112636] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started
00:31:32.573 Running I/O for 15 seconds...
00:31:35.112 18:18:23 -- host/bdevperf.sh@33 -- # kill -9 3458244
00:31:35.112 18:18:23 -- host/bdevperf.sh@35 -- # sleep 3
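This second run is the failure-injection pass: -t 15 keeps I/O in flight long enough for the harness to kill the target underneath bdevperf (the extra -f flag presumably keeps bdevperf running through the failures rather than aborting; that is an inference from this trace, not a documented claim). Condensed, the orchestration is:

# Start a long verify job in the background, then yank the target away.
./build/examples/bdevperf --json <(gen_nvmf_target_json) \
    -q 128 -o 4096 -w verify -t 15 -f &
bdevperfpid=$!
sleep 3                 # let I/O ramp up
kill -9 "$nvmfpid"      # nvmfpid=3458244: SIGKILL, no graceful disconnect
sleep 3                 # in-flight commands now fail back to the initiator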
00:31:35.112 [2024-04-15 18:18:23.925607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:49112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:35.112 [2024-04-15 18:18:23.925664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:35.112 [2024-04-15 18:18:23.925702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:49120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:35.112 [2024-04-15 18:18:23.925724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:35.112 [2024-04-15 18:18:23.925745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:48152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.112 [2024-04-15 18:18:23.925764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... with the target killed, every remaining queued command on qid:1 drains the same way: a command print (len:8 READs at lba 48160, 48168, and so on in steps of 8) followed by an ABORTED - SQ DELETION (00/08) completion; the pair repeats through lba 48736 at 18:18:23.928389 ...]
00:31:35.114 [2024-04-15 18:18:23.928407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:48744 len:8 SGL TRANSPORT DATA BLOCK
TRANSPORT 0x0 00:31:35.114 [2024-04-15 18:18:23.928422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:35.114 [2024-04-15 18:18:23.928439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:48752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.114 [2024-04-15 18:18:23.928455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:35.114 [2024-04-15 18:18:23.928472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:48760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.114 [2024-04-15 18:18:23.928492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:35.114 [2024-04-15 18:18:23.928510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:48768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.114 [2024-04-15 18:18:23.928526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:35.114 [2024-04-15 18:18:23.928544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:48776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.114 [2024-04-15 18:18:23.928559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:35.114 [2024-04-15 18:18:23.928576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:48784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.114 [2024-04-15 18:18:23.928591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:35.114 [2024-04-15 18:18:23.928609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:49128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.114 [2024-04-15 18:18:23.928624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:35.114 [2024-04-15 18:18:23.928642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:49136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.114 [2024-04-15 18:18:23.928657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:35.114 [2024-04-15 18:18:23.928675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:49144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.114 [2024-04-15 18:18:23.928691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:35.114 [2024-04-15 18:18:23.928709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.114 [2024-04-15 18:18:23.928725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:35.114 [2024-04-15 18:18:23.928742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:49160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.114 [2024-04-15 
18:18:23.928758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:35.114 [2024-04-15 18:18:23.928775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:49168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.114 [2024-04-15 18:18:23.928791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:35.114 [2024-04-15 18:18:23.928808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:48792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.114 [2024-04-15 18:18:23.928823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:35.114 [2024-04-15 18:18:23.928841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:48800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.114 [2024-04-15 18:18:23.928856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:35.114 [2024-04-15 18:18:23.928873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:48808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.114 [2024-04-15 18:18:23.928889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:35.115 [2024-04-15 18:18:23.928910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:48816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.115 [2024-04-15 18:18:23.928930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:35.115 [2024-04-15 18:18:23.928947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:48824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.115 [2024-04-15 18:18:23.928963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:35.115 [2024-04-15 18:18:23.928980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:48832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.115 [2024-04-15 18:18:23.928996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:35.115 [2024-04-15 18:18:23.929014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:48840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.115 [2024-04-15 18:18:23.929030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:35.115 [2024-04-15 18:18:23.929047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:48848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.115 [2024-04-15 18:18:23.929070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:35.115 [2024-04-15 18:18:23.929090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:48856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.115 [2024-04-15 18:18:23.929120] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:35.115 [2024-04-15 18:18:23.929137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:48864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.115 [2024-04-15 18:18:23.929151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:35.115 [2024-04-15 18:18:23.929166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:48872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.115 [2024-04-15 18:18:23.929180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:35.115 [2024-04-15 18:18:23.929196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:48880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.115 [2024-04-15 18:18:23.929210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:35.115 [2024-04-15 18:18:23.929227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:48888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.115 [2024-04-15 18:18:23.929241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:35.115 [2024-04-15 18:18:23.929256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:48896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.115 [2024-04-15 18:18:23.929271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:35.115 [2024-04-15 18:18:23.929286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:48904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.115 [2024-04-15 18:18:23.929300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:35.115 [2024-04-15 18:18:23.929325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:48912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.115 [2024-04-15 18:18:23.929359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:35.115 [2024-04-15 18:18:23.929391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:48920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.115 [2024-04-15 18:18:23.929407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:35.115 [2024-04-15 18:18:23.929424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:48928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.115 [2024-04-15 18:18:23.929440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:35.115 [2024-04-15 18:18:23.929457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:48936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.115 [2024-04-15 18:18:23.929472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:35.115 [2024-04-15 18:18:23.929489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:48944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.115 [2024-04-15 18:18:23.929505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:35.115 [2024-04-15 18:18:23.929523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:48952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.115 [2024-04-15 18:18:23.929539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:35.115 [2024-04-15 18:18:23.929557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:48960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.115 [2024-04-15 18:18:23.929572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:35.115 [2024-04-15 18:18:23.929590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:48968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.115 [2024-04-15 18:18:23.929606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:35.115 [2024-04-15 18:18:23.929624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:48976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.115 [2024-04-15 18:18:23.929639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:35.115 [2024-04-15 18:18:23.929656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:48984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.115 [2024-04-15 18:18:23.929671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:35.115 [2024-04-15 18:18:23.929688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:48992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.115 [2024-04-15 18:18:23.929703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:35.115 [2024-04-15 18:18:23.929721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:49000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.115 [2024-04-15 18:18:23.929736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:35.115 [2024-04-15 18:18:23.929753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:49008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.115 [2024-04-15 18:18:23.929769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:35.115 [2024-04-15 18:18:23.929787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:49016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.115 [2024-04-15 18:18:23.929808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:35.115 [2024-04-15 18:18:23.929827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:49024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.115 [2024-04-15 18:18:23.929844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:35.115 [2024-04-15 18:18:23.929862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:49032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.115 [2024-04-15 18:18:23.929878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:35.115 [2024-04-15 18:18:23.929895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:49040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.115 [2024-04-15 18:18:23.929911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:35.115 [2024-04-15 18:18:23.929928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:49048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.115 [2024-04-15 18:18:23.929944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:35.115 [2024-04-15 18:18:23.929962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:49056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.115 [2024-04-15 18:18:23.929977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:35.115 [2024-04-15 18:18:23.929995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:49064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.115 [2024-04-15 18:18:23.930011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:35.115 [2024-04-15 18:18:23.930029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:49072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.115 [2024-04-15 18:18:23.930044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:35.115 [2024-04-15 18:18:23.930068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:49080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.115 [2024-04-15 18:18:23.930086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:35.115 [2024-04-15 18:18:23.930119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:49088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.115 [2024-04-15 18:18:23.930134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:35.115 [2024-04-15 18:18:23.930149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:49096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.115 [2024-04-15 18:18:23.930163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:31:35.115 [2024-04-15 18:18:23.930179] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19140 is same with the state(5) to be set
00:31:35.116 [2024-04-15 18:18:23.930196] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:31:35.116 [2024-04-15 18:18:23.930208] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:31:35.116 [2024-04-15 18:18:23.930220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49104 len:8 PRP1 0x0 PRP2 0x0
00:31:35.116 [2024-04-15 18:18:23.930234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:35.116 [2024-04-15 18:18:23.930303] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1d19140 was disconnected and freed. reset controller.
00:31:35.116 [2024-04-15 18:18:23.934171] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:35.116 [2024-04-15 18:18:23.934243] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor
00:31:35.116 [2024-04-15 18:18:23.935076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.116 [2024-04-15 18:18:23.935293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.116 [2024-04-15 18:18:23.935320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420
00:31:35.116 [2024-04-15 18:18:23.935337] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set
00:31:35.116 [2024-04-15 18:18:23.935590] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor
00:31:35.116 [2024-04-15 18:18:23.935837] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:35.116 [2024-04-15 18:18:23.935863] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:35.116 [2024-04-15 18:18:23.935883] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:35.116 [2024-04-15 18:18:23.939476] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
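[Editor's note] The (00/08) in the ABORTED - SQ DELETION completions above is the (SCT/SC) pair from the NVMe completion queue entry: status code type 0x0 (generic command status) with status code 0x08 (command aborted due to SQ deletion), which is exactly what queued I/O reports when its submission queue is deleted out from under it. A minimal C sketch of how that status word (CQE dword 3, bits 31:16) decodes; the field layout is from the NVMe base specification, while the struct-free helper below is illustrative and not an SPDK API:

#include <stdint.h>
#include <stdio.h>

/* Decode the 16-bit status field from NVMe CQE dword 3 (bits 31:16).
 * Per the NVMe base spec: bit 0 = phase tag (P), bits 8:1 = status
 * code (SC), bits 11:9 = status code type (SCT), bit 14 = more (M),
 * bit 15 = do not retry (DNR). */
static void print_nvme_status(uint16_t status)
{
    unsigned p   = status & 0x1;          /* phase tag */
    unsigned sc  = (status >> 1) & 0xff;  /* status code */
    unsigned sct = (status >> 9) & 0x7;   /* status code type */
    unsigned m   = (status >> 14) & 0x1;  /* more */
    unsigned dnr = (status >> 15) & 0x1;  /* do not retry */

    printf("(%02x/%02x) p:%u m:%u dnr:%u\n", sct, sc, p, m, dnr);
    if (sct == 0x0 && sc == 0x08) {
        printf("generic status: ABORTED - SQ DELETION\n");
    }
}

int main(void)
{
    /* SCT 0x0, SC 0x08 -> prints "(00/08) p:0 m:0 dnr:0",
     * matching the completions in the log above. */
    print_nvme_status(0x08 << 1);
    return 0;
}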
00:31:35.116 [2024-04-15 18:18:23.948368] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:35.116 [2024-04-15 18:18:23.948900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.116 [2024-04-15 18:18:23.949151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.116 [2024-04-15 18:18:23.949179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420
00:31:35.116 [2024-04-15 18:18:23.949195] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set
00:31:35.116 [2024-04-15 18:18:23.949432] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor
00:31:35.116 [2024-04-15 18:18:23.949675] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:35.116 [2024-04-15 18:18:23.949699] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:35.116 [2024-04-15 18:18:23.949715] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:35.116 [2024-04-15 18:18:23.953292] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
[24 further identical reset attempts elided, starting roughly every 14 ms from 18:18:23.962 through 18:18:24.287: each repeats the same record sequence against tqpair=0x1d1f010 / addr=10.0.0.2, port=4420 (resetting controller; connect() failed, errno = 111, twice; sock connection error; recv state; Failed to flush (9): Bad file descriptor; Ctrlr is in error state; controller reinitialization failed; in failed state.; Resetting controller failed.)]
00:31:35.379 [2024-04-15 18:18:24.296364] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:35.379 [2024-04-15 18:18:24.296827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.379 [2024-04-15 18:18:24.297019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.379 [2024-04-15 18:18:24.297048] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420
00:31:35.379 [2024-04-15 18:18:24.297078] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set
00:31:35.379 [2024-04-15 18:18:24.297317] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor
00:31:35.379 [2024-04-15 18:18:24.297558] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:35.379 [2024-04-15 18:18:24.297582] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:35.379 [2024-04-15 18:18:24.297597] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:35.379 [2024-04-15 18:18:24.301154] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:35.379 [2024-04-15 18:18:24.310375] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:35.379 [2024-04-15 18:18:24.310844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.379 [2024-04-15 18:18:24.311096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.379 [2024-04-15 18:18:24.311127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:35.379 [2024-04-15 18:18:24.311145] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:35.379 [2024-04-15 18:18:24.311382] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:35.379 [2024-04-15 18:18:24.311624] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:35.380 [2024-04-15 18:18:24.311653] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:35.380 [2024-04-15 18:18:24.311670] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:35.380 [2024-04-15 18:18:24.315237] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:35.380 [2024-04-15 18:18:24.324262] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:35.380 [2024-04-15 18:18:24.324679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.380 [2024-04-15 18:18:24.324901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.380 [2024-04-15 18:18:24.324930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:35.380 [2024-04-15 18:18:24.324948] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:35.380 [2024-04-15 18:18:24.325203] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:35.380 [2024-04-15 18:18:24.325447] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:35.380 [2024-04-15 18:18:24.325471] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:35.380 [2024-04-15 18:18:24.325486] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:35.380 [2024-04-15 18:18:24.329071] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:35.640 [2024-04-15 18:18:24.338161] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:35.640 [2024-04-15 18:18:24.338648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.640 [2024-04-15 18:18:24.338865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.640 [2024-04-15 18:18:24.338906] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:35.640 [2024-04-15 18:18:24.338924] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:35.640 [2024-04-15 18:18:24.339171] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:35.640 [2024-04-15 18:18:24.339414] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:35.640 [2024-04-15 18:18:24.339438] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:35.640 [2024-04-15 18:18:24.339454] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:35.640 [2024-04-15 18:18:24.343002] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:35.640 [2024-04-15 18:18:24.352010] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:35.640 [2024-04-15 18:18:24.352467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.640 [2024-04-15 18:18:24.352665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.640 [2024-04-15 18:18:24.352694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:35.640 [2024-04-15 18:18:24.352712] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:35.640 [2024-04-15 18:18:24.352959] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:35.640 [2024-04-15 18:18:24.353212] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:35.640 [2024-04-15 18:18:24.353237] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:35.640 [2024-04-15 18:18:24.353258] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:35.640 [2024-04-15 18:18:24.356816] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:35.640 [2024-04-15 18:18:24.365842] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:35.640 [2024-04-15 18:18:24.366301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.640 [2024-04-15 18:18:24.366495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.640 [2024-04-15 18:18:24.366524] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:35.640 [2024-04-15 18:18:24.366542] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:35.640 [2024-04-15 18:18:24.366779] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:35.640 [2024-04-15 18:18:24.367021] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:35.640 [2024-04-15 18:18:24.367045] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:35.640 [2024-04-15 18:18:24.367076] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:35.640 [2024-04-15 18:18:24.370652] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:35.640 [2024-04-15 18:18:24.379683] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:35.640 [2024-04-15 18:18:24.380159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.640 [2024-04-15 18:18:24.380313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.640 [2024-04-15 18:18:24.380343] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:35.640 [2024-04-15 18:18:24.380360] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:35.640 [2024-04-15 18:18:24.380598] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:35.640 [2024-04-15 18:18:24.380840] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:35.640 [2024-04-15 18:18:24.380864] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:35.640 [2024-04-15 18:18:24.380879] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:35.640 [2024-04-15 18:18:24.384448] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:35.640 [2024-04-15 18:18:24.393684] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:35.640 [2024-04-15 18:18:24.394168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.640 [2024-04-15 18:18:24.394341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.640 [2024-04-15 18:18:24.394370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:35.640 [2024-04-15 18:18:24.394388] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:35.640 [2024-04-15 18:18:24.394625] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:35.640 [2024-04-15 18:18:24.394867] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:35.640 [2024-04-15 18:18:24.394891] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:35.640 [2024-04-15 18:18:24.394906] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:35.640 [2024-04-15 18:18:24.398493] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:35.640 [2024-04-15 18:18:24.407514] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:35.640 [2024-04-15 18:18:24.408003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.640 [2024-04-15 18:18:24.408163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.640 [2024-04-15 18:18:24.408193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:35.640 [2024-04-15 18:18:24.408211] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:35.640 [2024-04-15 18:18:24.408448] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:35.641 [2024-04-15 18:18:24.408690] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:35.641 [2024-04-15 18:18:24.408713] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:35.641 [2024-04-15 18:18:24.408728] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:35.641 [2024-04-15 18:18:24.412292] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:35.641 [2024-04-15 18:18:24.421530] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:35.641 [2024-04-15 18:18:24.421959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.641 [2024-04-15 18:18:24.422135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.641 [2024-04-15 18:18:24.422165] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:35.641 [2024-04-15 18:18:24.422183] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:35.641 [2024-04-15 18:18:24.422420] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:35.641 [2024-04-15 18:18:24.422662] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:35.641 [2024-04-15 18:18:24.422686] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:35.641 [2024-04-15 18:18:24.422702] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:35.641 [2024-04-15 18:18:24.426264] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:35.641 [2024-04-15 18:18:24.435597] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:35.641 [2024-04-15 18:18:24.436029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.641 [2024-04-15 18:18:24.436222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.641 [2024-04-15 18:18:24.436252] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:35.641 [2024-04-15 18:18:24.436270] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:35.641 [2024-04-15 18:18:24.436508] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:35.641 [2024-04-15 18:18:24.436749] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:35.641 [2024-04-15 18:18:24.436773] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:35.641 [2024-04-15 18:18:24.436789] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:35.641 [2024-04-15 18:18:24.440357] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:35.641 [2024-04-15 18:18:24.449589] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:35.641 [2024-04-15 18:18:24.450040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.641 [2024-04-15 18:18:24.450202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.641 [2024-04-15 18:18:24.450232] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:35.641 [2024-04-15 18:18:24.450250] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:35.641 [2024-04-15 18:18:24.450487] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:35.641 [2024-04-15 18:18:24.450729] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:35.641 [2024-04-15 18:18:24.450752] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:35.641 [2024-04-15 18:18:24.450768] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:35.641 [2024-04-15 18:18:24.454349] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:35.641 [2024-04-15 18:18:24.463583] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:35.641 [2024-04-15 18:18:24.464030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.641 [2024-04-15 18:18:24.464182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.641 [2024-04-15 18:18:24.464211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:35.641 [2024-04-15 18:18:24.464229] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:35.641 [2024-04-15 18:18:24.464467] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:35.641 [2024-04-15 18:18:24.464708] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:35.641 [2024-04-15 18:18:24.464732] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:35.641 [2024-04-15 18:18:24.464747] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:35.641 [2024-04-15 18:18:24.468333] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:35.641 [2024-04-15 18:18:24.477556] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:35.641 [2024-04-15 18:18:24.478227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.641 [2024-04-15 18:18:24.478533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.641 [2024-04-15 18:18:24.478564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:35.641 [2024-04-15 18:18:24.478582] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:35.641 [2024-04-15 18:18:24.478827] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:35.641 [2024-04-15 18:18:24.479082] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:35.641 [2024-04-15 18:18:24.479107] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:35.641 [2024-04-15 18:18:24.479123] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:35.641 [2024-04-15 18:18:24.482677] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:35.641 [2024-04-15 18:18:24.491422] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:35.641 [2024-04-15 18:18:24.491887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.641 [2024-04-15 18:18:24.492055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.641 [2024-04-15 18:18:24.492093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:35.641 [2024-04-15 18:18:24.492111] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:35.641 [2024-04-15 18:18:24.492349] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:35.641 [2024-04-15 18:18:24.492592] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:35.641 [2024-04-15 18:18:24.492616] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:35.641 [2024-04-15 18:18:24.492632] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:35.641 [2024-04-15 18:18:24.496194] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:35.641 [2024-04-15 18:18:24.505414] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:35.641 [2024-04-15 18:18:24.505856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.641 [2024-04-15 18:18:24.506083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.641 [2024-04-15 18:18:24.506113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:35.641 [2024-04-15 18:18:24.506131] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:35.641 [2024-04-15 18:18:24.506369] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:35.641 [2024-04-15 18:18:24.506611] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:35.641 [2024-04-15 18:18:24.506635] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:35.641 [2024-04-15 18:18:24.506650] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:35.641 [2024-04-15 18:18:24.510210] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:35.641 [2024-04-15 18:18:24.519433] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:35.641 [2024-04-15 18:18:24.519865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.641 [2024-04-15 18:18:24.520073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.641 [2024-04-15 18:18:24.520103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:35.641 [2024-04-15 18:18:24.520121] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:35.641 [2024-04-15 18:18:24.520359] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:35.641 [2024-04-15 18:18:24.520601] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:35.641 [2024-04-15 18:18:24.520625] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:35.641 [2024-04-15 18:18:24.520640] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:35.641 [2024-04-15 18:18:24.524204] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:35.641 [2024-04-15 18:18:24.533428] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:35.641 [2024-04-15 18:18:24.533862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.641 [2024-04-15 18:18:24.534071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.642 [2024-04-15 18:18:24.534106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:35.642 [2024-04-15 18:18:24.534125] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:35.642 [2024-04-15 18:18:24.534363] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:35.642 [2024-04-15 18:18:24.534605] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:35.642 [2024-04-15 18:18:24.534629] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:35.642 [2024-04-15 18:18:24.534644] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:35.642 [2024-04-15 18:18:24.538204] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:35.642 [2024-04-15 18:18:24.547425] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:35.642 [2024-04-15 18:18:24.547854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.642 [2024-04-15 18:18:24.548054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.642 [2024-04-15 18:18:24.548094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:35.642 [2024-04-15 18:18:24.548112] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:35.642 [2024-04-15 18:18:24.548350] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:35.642 [2024-04-15 18:18:24.548592] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:35.642 [2024-04-15 18:18:24.548616] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:35.642 [2024-04-15 18:18:24.548631] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:35.642 [2024-04-15 18:18:24.552184] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:35.642 [2024-04-15 18:18:24.561405] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:35.642 [2024-04-15 18:18:24.561840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.642 [2024-04-15 18:18:24.562033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.642 [2024-04-15 18:18:24.562071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:35.642 [2024-04-15 18:18:24.562091] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:35.642 [2024-04-15 18:18:24.562329] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:35.642 [2024-04-15 18:18:24.562571] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:35.642 [2024-04-15 18:18:24.562594] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:35.642 [2024-04-15 18:18:24.562609] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:35.642 [2024-04-15 18:18:24.566171] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:35.642 [2024-04-15 18:18:24.575402] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:35.642 [2024-04-15 18:18:24.575836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.642 [2024-04-15 18:18:24.576011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.642 [2024-04-15 18:18:24.576040] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:35.642 [2024-04-15 18:18:24.576079] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:35.642 [2024-04-15 18:18:24.576319] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:35.642 [2024-04-15 18:18:24.576561] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:35.642 [2024-04-15 18:18:24.576585] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:35.642 [2024-04-15 18:18:24.576600] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:35.642 [2024-04-15 18:18:24.580160] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:35.642 [2024-04-15 18:18:24.589403] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:35.642 [2024-04-15 18:18:24.589838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.642 [2024-04-15 18:18:24.589992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.642 [2024-04-15 18:18:24.590021] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:35.642 [2024-04-15 18:18:24.590039] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:35.642 [2024-04-15 18:18:24.590288] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:35.642 [2024-04-15 18:18:24.590539] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:35.642 [2024-04-15 18:18:24.590563] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:35.642 [2024-04-15 18:18:24.590579] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:35.902 [2024-04-15 18:18:24.594172] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:35.902 [2024-04-15 18:18:24.603420] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:35.902 [2024-04-15 18:18:24.603842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.902 [2024-04-15 18:18:24.604017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.902 [2024-04-15 18:18:24.604045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:35.902 [2024-04-15 18:18:24.604075] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:35.902 [2024-04-15 18:18:24.604315] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:35.902 [2024-04-15 18:18:24.604556] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:35.902 [2024-04-15 18:18:24.604580] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:35.902 [2024-04-15 18:18:24.604596] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:35.902 [2024-04-15 18:18:24.608153] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:35.902 [2024-04-15 18:18:24.617398] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:35.902 [2024-04-15 18:18:24.617818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.902 [2024-04-15 18:18:24.618032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.902 [2024-04-15 18:18:24.618071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:35.902 [2024-04-15 18:18:24.618091] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:35.902 [2024-04-15 18:18:24.618335] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:35.902 [2024-04-15 18:18:24.618577] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:35.902 [2024-04-15 18:18:24.618601] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:35.902 [2024-04-15 18:18:24.618617] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:35.902 [2024-04-15 18:18:24.622177] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:35.902 [2024-04-15 18:18:24.631398] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:35.902 [2024-04-15 18:18:24.631851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.902 [2024-04-15 18:18:24.632077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.902 [2024-04-15 18:18:24.632106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:35.902 [2024-04-15 18:18:24.632124] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:35.902 [2024-04-15 18:18:24.632362] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:35.902 [2024-04-15 18:18:24.632603] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:35.902 [2024-04-15 18:18:24.632627] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:35.902 [2024-04-15 18:18:24.632642] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:35.902 [2024-04-15 18:18:24.636203] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:35.902 [2024-04-15 18:18:24.645225] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:35.902 [2024-04-15 18:18:24.645656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.902 [2024-04-15 18:18:24.645860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.902 [2024-04-15 18:18:24.645890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:35.902 [2024-04-15 18:18:24.645907] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:35.902 [2024-04-15 18:18:24.646157] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:35.902 [2024-04-15 18:18:24.646399] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:35.902 [2024-04-15 18:18:24.646423] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:35.902 [2024-04-15 18:18:24.646438] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:35.902 [2024-04-15 18:18:24.649983] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:35.902 [2024-04-15 18:18:24.659205] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:35.902 [2024-04-15 18:18:24.659649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.902 [2024-04-15 18:18:24.659851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.902 [2024-04-15 18:18:24.659880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:35.902 [2024-04-15 18:18:24.659897] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:35.902 [2024-04-15 18:18:24.660147] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:35.902 [2024-04-15 18:18:24.660395] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:35.902 [2024-04-15 18:18:24.660419] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:35.902 [2024-04-15 18:18:24.660435] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:35.902 [2024-04-15 18:18:24.663981] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:35.902 [2024-04-15 18:18:24.673210] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:35.902 [2024-04-15 18:18:24.673642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.902 [2024-04-15 18:18:24.673837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.902 [2024-04-15 18:18:24.673866] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:35.902 [2024-04-15 18:18:24.673884] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:35.903 [2024-04-15 18:18:24.674133] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:35.903 [2024-04-15 18:18:24.674376] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:35.903 [2024-04-15 18:18:24.674399] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:35.903 [2024-04-15 18:18:24.674414] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:35.903 [2024-04-15 18:18:24.677961] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:35.903 [2024-04-15 18:18:24.687304] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:35.903 [2024-04-15 18:18:24.687759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.903 [2024-04-15 18:18:24.687979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.903 [2024-04-15 18:18:24.688008] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:35.903 [2024-04-15 18:18:24.688026] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:35.903 [2024-04-15 18:18:24.688272] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:35.903 [2024-04-15 18:18:24.688515] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:35.903 [2024-04-15 18:18:24.688539] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:35.903 [2024-04-15 18:18:24.688554] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:35.903 [2024-04-15 18:18:24.692112] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:35.903 [2024-04-15 18:18:24.701118] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:35.903 [2024-04-15 18:18:24.701548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.903 [2024-04-15 18:18:24.701697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.903 [2024-04-15 18:18:24.701726] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:35.903 [2024-04-15 18:18:24.701743] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:35.903 [2024-04-15 18:18:24.701980] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:35.903 [2024-04-15 18:18:24.702230] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:35.903 [2024-04-15 18:18:24.702261] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:35.903 [2024-04-15 18:18:24.702277] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:35.903 [2024-04-15 18:18:24.705827] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:35.903 [2024-04-15 18:18:24.715038] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:35.903 [2024-04-15 18:18:24.715449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.903 [2024-04-15 18:18:24.715601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.903 [2024-04-15 18:18:24.715630] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:35.903 [2024-04-15 18:18:24.715648] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:35.903 [2024-04-15 18:18:24.715886] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:35.903 [2024-04-15 18:18:24.716139] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:35.903 [2024-04-15 18:18:24.716163] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:35.903 [2024-04-15 18:18:24.716178] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:35.903 [2024-04-15 18:18:24.719723] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:35.903 [2024-04-15 18:18:24.728936] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:35.903 [2024-04-15 18:18:24.729352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.903 [2024-04-15 18:18:24.729551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.903 [2024-04-15 18:18:24.729580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:35.903 [2024-04-15 18:18:24.729598] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:35.903 [2024-04-15 18:18:24.729835] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:35.903 [2024-04-15 18:18:24.730088] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:35.903 [2024-04-15 18:18:24.730113] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:35.903 [2024-04-15 18:18:24.730128] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:35.903 [2024-04-15 18:18:24.733675] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:35.903 [2024-04-15 18:18:24.742886] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:35.903 [2024-04-15 18:18:24.743298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.903 [2024-04-15 18:18:24.743467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.903 [2024-04-15 18:18:24.743496] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:35.903 [2024-04-15 18:18:24.743514] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:35.903 [2024-04-15 18:18:24.743751] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:35.903 [2024-04-15 18:18:24.743992] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:35.903 [2024-04-15 18:18:24.744015] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:35.903 [2024-04-15 18:18:24.744037] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:35.903 [2024-04-15 18:18:24.747594] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:35.903 [2024-04-15 18:18:24.756803] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:35.903 [2024-04-15 18:18:24.757241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.903 [2024-04-15 18:18:24.757449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.903 [2024-04-15 18:18:24.757478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:35.903 [2024-04-15 18:18:24.757495] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:35.903 [2024-04-15 18:18:24.757734] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:35.903 [2024-04-15 18:18:24.757975] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:35.903 [2024-04-15 18:18:24.757999] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:35.903 [2024-04-15 18:18:24.758014] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:35.903 [2024-04-15 18:18:24.761570] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:35.903 [2024-04-15 18:18:24.770782] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:35.903 [2024-04-15 18:18:24.771224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.903 [2024-04-15 18:18:24.771428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.903 [2024-04-15 18:18:24.771457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:35.903 [2024-04-15 18:18:24.771474] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:35.903 [2024-04-15 18:18:24.771712] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:35.903 [2024-04-15 18:18:24.771954] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:35.903 [2024-04-15 18:18:24.771978] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:35.903 [2024-04-15 18:18:24.771993] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:35.903 [2024-04-15 18:18:24.775559] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:35.903 [2024-04-15 18:18:24.784769] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:35.903 [2024-04-15 18:18:24.785200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.903 [2024-04-15 18:18:24.785407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.903 [2024-04-15 18:18:24.785436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:35.903 [2024-04-15 18:18:24.785454] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:35.903 [2024-04-15 18:18:24.785692] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:35.903 [2024-04-15 18:18:24.785934] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:35.903 [2024-04-15 18:18:24.785958] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:35.903 [2024-04-15 18:18:24.785973] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:35.904 [2024-04-15 18:18:24.789540] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:35.904 [2024-04-15 18:18:24.798751] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:35.904 [2024-04-15 18:18:24.799182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.904 [2024-04-15 18:18:24.799373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.904 [2024-04-15 18:18:24.799402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:35.904 [2024-04-15 18:18:24.799420] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:35.904 [2024-04-15 18:18:24.799657] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:35.904 [2024-04-15 18:18:24.799898] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:35.904 [2024-04-15 18:18:24.799922] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:35.904 [2024-04-15 18:18:24.799937] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:35.904 [2024-04-15 18:18:24.803494] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:35.904 [2024-04-15 18:18:24.812703] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:35.904 [2024-04-15 18:18:24.813143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.904 [2024-04-15 18:18:24.813325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.904 [2024-04-15 18:18:24.813354] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:35.904 [2024-04-15 18:18:24.813372] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:35.904 [2024-04-15 18:18:24.813608] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:35.904 [2024-04-15 18:18:24.813851] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:35.904 [2024-04-15 18:18:24.813875] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:35.904 [2024-04-15 18:18:24.813890] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:35.904 [2024-04-15 18:18:24.817450] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:35.904 [2024-04-15 18:18:24.826661] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:35.904 [2024-04-15 18:18:24.827096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.904 [2024-04-15 18:18:24.827303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.904 [2024-04-15 18:18:24.827332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:35.904 [2024-04-15 18:18:24.827349] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:35.904 [2024-04-15 18:18:24.827587] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:35.904 [2024-04-15 18:18:24.827828] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:35.904 [2024-04-15 18:18:24.827852] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:35.904 [2024-04-15 18:18:24.827867] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:35.904 [2024-04-15 18:18:24.831426] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:35.904 [2024-04-15 18:18:24.840638] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:35.904 [2024-04-15 18:18:24.841070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.904 [2024-04-15 18:18:24.841235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.904 [2024-04-15 18:18:24.841264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:35.904 [2024-04-15 18:18:24.841282] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:35.904 [2024-04-15 18:18:24.841519] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:35.904 [2024-04-15 18:18:24.841761] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:35.904 [2024-04-15 18:18:24.841784] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:35.904 [2024-04-15 18:18:24.841799] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:35.904 [2024-04-15 18:18:24.845359] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:36.164 [2024-04-15 18:18:24.854614] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:36.164 [2024-04-15 18:18:24.855065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.164 [2024-04-15 18:18:24.855240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.164 [2024-04-15 18:18:24.855268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:36.164 [2024-04-15 18:18:24.855286] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:36.164 [2024-04-15 18:18:24.855523] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:36.164 [2024-04-15 18:18:24.855765] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:36.164 [2024-04-15 18:18:24.855789] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:36.164 [2024-04-15 18:18:24.855804] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:36.164 [2024-04-15 18:18:24.859366] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:36.164 [2024-04-15 18:18:24.868603] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:36.164 [2024-04-15 18:18:24.869047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.164 [2024-04-15 18:18:24.869238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.164 [2024-04-15 18:18:24.869267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:36.164 [2024-04-15 18:18:24.869285] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:36.164 [2024-04-15 18:18:24.869523] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:36.164 [2024-04-15 18:18:24.869764] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:36.164 [2024-04-15 18:18:24.869788] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:36.164 [2024-04-15 18:18:24.869804] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:36.164 [2024-04-15 18:18:24.873376] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:36.164 [2024-04-15 18:18:24.882593] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:36.164 [2024-04-15 18:18:24.883020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.164 [2024-04-15 18:18:24.883233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.164 [2024-04-15 18:18:24.883262] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:36.164 [2024-04-15 18:18:24.883280] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:36.164 [2024-04-15 18:18:24.883518] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:36.164 [2024-04-15 18:18:24.883759] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:36.164 [2024-04-15 18:18:24.883783] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:36.164 [2024-04-15 18:18:24.883799] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:36.164 [2024-04-15 18:18:24.887355] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:36.164 [2024-04-15 18:18:24.896571] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:36.164 [2024-04-15 18:18:24.896985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.164 [2024-04-15 18:18:24.897214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.164 [2024-04-15 18:18:24.897243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:36.164 [2024-04-15 18:18:24.897261] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:36.165 [2024-04-15 18:18:24.897498] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:36.165 [2024-04-15 18:18:24.897740] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:36.165 [2024-04-15 18:18:24.897764] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:36.165 [2024-04-15 18:18:24.897779] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:36.165 [2024-04-15 18:18:24.901337] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:36.165 [2024-04-15 18:18:24.910552] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:36.165 [2024-04-15 18:18:24.910984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.165 [2024-04-15 18:18:24.911180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.165 [2024-04-15 18:18:24.911211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:36.165 [2024-04-15 18:18:24.911228] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:36.165 [2024-04-15 18:18:24.911465] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:36.165 [2024-04-15 18:18:24.911707] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:36.165 [2024-04-15 18:18:24.911731] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:36.165 [2024-04-15 18:18:24.911747] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:36.165 [2024-04-15 18:18:24.915303] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:36.165 [2024-04-15 18:18:24.924515] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:36.165 [2024-04-15 18:18:24.924954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.165 [2024-04-15 18:18:24.925125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.165 [2024-04-15 18:18:24.925160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:36.165 [2024-04-15 18:18:24.925178] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:36.165 [2024-04-15 18:18:24.925415] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:36.165 [2024-04-15 18:18:24.925657] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:36.165 [2024-04-15 18:18:24.925681] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:36.165 [2024-04-15 18:18:24.925696] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:36.165 [2024-04-15 18:18:24.929254] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:36.165 [2024-04-15 18:18:24.938590] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:36.165 [2024-04-15 18:18:24.938997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.165 [2024-04-15 18:18:24.939152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.165 [2024-04-15 18:18:24.939182] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:36.165 [2024-04-15 18:18:24.939200] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:36.165 [2024-04-15 18:18:24.939437] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:36.165 [2024-04-15 18:18:24.939679] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:36.165 [2024-04-15 18:18:24.939703] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:36.165 [2024-04-15 18:18:24.939718] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:36.165 [2024-04-15 18:18:24.943279] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:36.165 [2024-04-15 18:18:24.952502] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:36.165 [2024-04-15 18:18:24.952933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.165 [2024-04-15 18:18:24.953133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.165 [2024-04-15 18:18:24.953162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:36.165 [2024-04-15 18:18:24.953180] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:36.165 [2024-04-15 18:18:24.953417] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:36.165 [2024-04-15 18:18:24.953658] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:36.165 [2024-04-15 18:18:24.953682] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:36.165 [2024-04-15 18:18:24.953698] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:36.165 [2024-04-15 18:18:24.957257] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:36.165 [2024-04-15 18:18:24.966557] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:36.165 [2024-04-15 18:18:24.967010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.165 [2024-04-15 18:18:24.967199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.165 [2024-04-15 18:18:24.967229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:36.165 [2024-04-15 18:18:24.967253] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:36.165 [2024-04-15 18:18:24.967491] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:36.165 [2024-04-15 18:18:24.967733] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:36.165 [2024-04-15 18:18:24.967757] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:36.165 [2024-04-15 18:18:24.967772] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:36.165 [2024-04-15 18:18:24.971331] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:36.165 [2024-04-15 18:18:24.980550] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:36.165 [2024-04-15 18:18:24.980999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.165 [2024-04-15 18:18:24.981202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.165 [2024-04-15 18:18:24.981232] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:36.165 [2024-04-15 18:18:24.981249] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:36.165 [2024-04-15 18:18:24.981486] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:36.165 [2024-04-15 18:18:24.981727] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:36.165 [2024-04-15 18:18:24.981751] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:36.165 [2024-04-15 18:18:24.981767] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:36.165 [2024-04-15 18:18:24.985325] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:36.165 [2024-04-15 18:18:24.994538] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:36.165 [2024-04-15 18:18:24.994986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.165 [2024-04-15 18:18:24.995206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.165 [2024-04-15 18:18:24.995235] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:36.165 [2024-04-15 18:18:24.995252] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:36.165 [2024-04-15 18:18:24.995489] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:36.165 [2024-04-15 18:18:24.995730] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:36.165 [2024-04-15 18:18:24.995754] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:36.165 [2024-04-15 18:18:24.995770] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:36.165 [2024-04-15 18:18:24.999327] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:36.165 [2024-04-15 18:18:25.008541] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:36.165 [2024-04-15 18:18:25.008989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.165 [2024-04-15 18:18:25.009190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.165 [2024-04-15 18:18:25.009219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:36.165 [2024-04-15 18:18:25.009237] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:36.166 [2024-04-15 18:18:25.009479] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:36.166 [2024-04-15 18:18:25.009721] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:36.166 [2024-04-15 18:18:25.009745] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:36.166 [2024-04-15 18:18:25.009760] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:36.166 [2024-04-15 18:18:25.013319] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:36.166 [2024-04-15 18:18:25.022544] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:36.166 [2024-04-15 18:18:25.023004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.166 [2024-04-15 18:18:25.023206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.166 [2024-04-15 18:18:25.023235] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:36.166 [2024-04-15 18:18:25.023253] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:36.166 [2024-04-15 18:18:25.023490] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:36.166 [2024-04-15 18:18:25.023732] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:36.166 [2024-04-15 18:18:25.023756] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:36.166 [2024-04-15 18:18:25.023771] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:36.166 [2024-04-15 18:18:25.027325] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:36.166 [2024-04-15 18:18:25.036535] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:36.166 [2024-04-15 18:18:25.036945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.166 [2024-04-15 18:18:25.037100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.166 [2024-04-15 18:18:25.037130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:36.166 [2024-04-15 18:18:25.037148] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:36.166 [2024-04-15 18:18:25.037386] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:36.166 [2024-04-15 18:18:25.037627] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:36.166 [2024-04-15 18:18:25.037651] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:36.166 [2024-04-15 18:18:25.037667] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:36.166 [2024-04-15 18:18:25.041219] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:36.166 [2024-04-15 18:18:25.050433] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:36.166 [2024-04-15 18:18:25.050839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.166 [2024-04-15 18:18:25.051015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.166 [2024-04-15 18:18:25.051044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:36.166 [2024-04-15 18:18:25.051071] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:36.166 [2024-04-15 18:18:25.051310] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:36.166 [2024-04-15 18:18:25.051558] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:36.166 [2024-04-15 18:18:25.051582] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:36.166 [2024-04-15 18:18:25.051598] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:36.166 [2024-04-15 18:18:25.055152] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:36.166 [2024-04-15 18:18:25.064368] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:36.166 [2024-04-15 18:18:25.064791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.166 [2024-04-15 18:18:25.064969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.166 [2024-04-15 18:18:25.064997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:36.166 [2024-04-15 18:18:25.065014] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:36.166 [2024-04-15 18:18:25.065262] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:36.166 [2024-04-15 18:18:25.065504] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:36.166 [2024-04-15 18:18:25.065528] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:36.166 [2024-04-15 18:18:25.065543] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:36.166 [2024-04-15 18:18:25.069106] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:36.166 [2024-04-15 18:18:25.078352] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:36.166 [2024-04-15 18:18:25.078795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.166 [2024-04-15 18:18:25.078971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.166 [2024-04-15 18:18:25.078999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:36.166 [2024-04-15 18:18:25.079017] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:36.166 [2024-04-15 18:18:25.079266] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:36.166 [2024-04-15 18:18:25.079508] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:36.166 [2024-04-15 18:18:25.079532] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:36.166 [2024-04-15 18:18:25.079548] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:36.166 [2024-04-15 18:18:25.083104] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:36.166 [2024-04-15 18:18:25.092323] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:36.166 [2024-04-15 18:18:25.092756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.166 [2024-04-15 18:18:25.092905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.166 [2024-04-15 18:18:25.092933] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:36.166 [2024-04-15 18:18:25.092951] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:36.166 [2024-04-15 18:18:25.093199] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:36.166 [2024-04-15 18:18:25.093441] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:36.166 [2024-04-15 18:18:25.093471] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:36.166 [2024-04-15 18:18:25.093487] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:36.166 [2024-04-15 18:18:25.097035] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:36.166 [2024-04-15 18:18:25.106267] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:36.166 [2024-04-15 18:18:25.106723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.166 [2024-04-15 18:18:25.106943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.166 [2024-04-15 18:18:25.106971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:36.166 [2024-04-15 18:18:25.106989] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:36.166 [2024-04-15 18:18:25.107235] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:36.166 [2024-04-15 18:18:25.107477] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:36.166 [2024-04-15 18:18:25.107502] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:36.166 [2024-04-15 18:18:25.107517] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:36.166 [2024-04-15 18:18:25.111070] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:36.426 [2024-04-15 18:18:25.120183] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:36.426 [2024-04-15 18:18:25.120647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.426 [2024-04-15 18:18:25.120861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.426 [2024-04-15 18:18:25.120890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:36.426 [2024-04-15 18:18:25.120907] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:36.426 [2024-04-15 18:18:25.121156] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:36.426 [2024-04-15 18:18:25.121398] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:36.426 [2024-04-15 18:18:25.121427] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:36.426 [2024-04-15 18:18:25.121445] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:36.426 [2024-04-15 18:18:25.125017] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:36.426 [2024-04-15 18:18:25.134031] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:36.426 [2024-04-15 18:18:25.134484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.426 [2024-04-15 18:18:25.134682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.426 [2024-04-15 18:18:25.134710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:36.426 [2024-04-15 18:18:25.134728] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:36.426 [2024-04-15 18:18:25.134965] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:36.426 [2024-04-15 18:18:25.135219] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:36.426 [2024-04-15 18:18:25.135251] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:36.426 [2024-04-15 18:18:25.135272] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:36.426 [2024-04-15 18:18:25.138836] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:36.426 [2024-04-15 18:18:25.147853] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:36.426 [2024-04-15 18:18:25.148284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.427 [2024-04-15 18:18:25.148503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.427 [2024-04-15 18:18:25.148531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:36.427 [2024-04-15 18:18:25.148549] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:36.427 [2024-04-15 18:18:25.148786] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:36.427 [2024-04-15 18:18:25.149027] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:36.427 [2024-04-15 18:18:25.149051] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:36.427 [2024-04-15 18:18:25.149076] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:36.427 [2024-04-15 18:18:25.152622] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:36.427 [2024-04-15 18:18:25.161844] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:36.427 [2024-04-15 18:18:25.162264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.427 [2024-04-15 18:18:25.162499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.427 [2024-04-15 18:18:25.162528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:36.427 [2024-04-15 18:18:25.162546] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:36.427 [2024-04-15 18:18:25.162783] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:36.427 [2024-04-15 18:18:25.163024] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:36.427 [2024-04-15 18:18:25.163048] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:36.427 [2024-04-15 18:18:25.163076] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:36.427 [2024-04-15 18:18:25.166623] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:36.427 [2024-04-15 18:18:25.175847] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:36.427 [2024-04-15 18:18:25.176346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.427 [2024-04-15 18:18:25.176614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.427 [2024-04-15 18:18:25.176642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:36.427 [2024-04-15 18:18:25.176659] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:36.427 [2024-04-15 18:18:25.176896] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:36.427 [2024-04-15 18:18:25.177147] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:36.427 [2024-04-15 18:18:25.177172] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:36.427 [2024-04-15 18:18:25.177187] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:36.427 [2024-04-15 18:18:25.180742] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:36.427 [2024-04-15 18:18:25.189891] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:36.427 [2024-04-15 18:18:25.190575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.427 [2024-04-15 18:18:25.190844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.427 [2024-04-15 18:18:25.190906] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:36.427 [2024-04-15 18:18:25.190925] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:36.427 [2024-04-15 18:18:25.191180] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:36.427 [2024-04-15 18:18:25.191424] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:36.427 [2024-04-15 18:18:25.191449] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:36.427 [2024-04-15 18:18:25.191464] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:36.427 [2024-04-15 18:18:25.195015] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:36.427 [2024-04-15 18:18:25.203813] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:36.427 [2024-04-15 18:18:25.204272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.427 [2024-04-15 18:18:25.204475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.427 [2024-04-15 18:18:25.204504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:36.427 [2024-04-15 18:18:25.204522] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:36.427 [2024-04-15 18:18:25.204762] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:36.427 [2024-04-15 18:18:25.205004] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:36.427 [2024-04-15 18:18:25.205028] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:36.427 [2024-04-15 18:18:25.205044] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:36.427 [2024-04-15 18:18:25.208601] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:36.427 [2024-04-15 18:18:25.217811] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:36.427 [2024-04-15 18:18:25.218324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.427 [2024-04-15 18:18:25.218498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.427 [2024-04-15 18:18:25.218527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:36.427 [2024-04-15 18:18:25.218545] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:36.427 [2024-04-15 18:18:25.218783] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:36.427 [2024-04-15 18:18:25.219025] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:36.427 [2024-04-15 18:18:25.219050] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:36.427 [2024-04-15 18:18:25.219076] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:36.427 [2024-04-15 18:18:25.222627] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:36.427 [2024-04-15 18:18:25.231644] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:36.427 [2024-04-15 18:18:25.232113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.427 [2024-04-15 18:18:25.232253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.427 [2024-04-15 18:18:25.232283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:36.427 [2024-04-15 18:18:25.232301] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:36.427 [2024-04-15 18:18:25.232538] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:36.427 [2024-04-15 18:18:25.232780] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:36.427 [2024-04-15 18:18:25.232804] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:36.427 [2024-04-15 18:18:25.232819] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:36.427 [2024-04-15 18:18:25.236376] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:36.427 [2024-04-15 18:18:25.245588] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:36.427 [2024-04-15 18:18:25.246069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.427 [2024-04-15 18:18:25.246288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.427 [2024-04-15 18:18:25.246327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:36.427 [2024-04-15 18:18:25.246345] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:36.427 [2024-04-15 18:18:25.246582] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:36.427 [2024-04-15 18:18:25.246824] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:36.427 [2024-04-15 18:18:25.246848] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:36.427 [2024-04-15 18:18:25.246863] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:36.427 [2024-04-15 18:18:25.250420] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:36.427 [2024-04-15 18:18:25.259426] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:36.427 [2024-04-15 18:18:25.259874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.427 [2024-04-15 18:18:25.260072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.427 [2024-04-15 18:18:25.260102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:36.427 [2024-04-15 18:18:25.260120] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:36.427 [2024-04-15 18:18:25.260357] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:36.428 [2024-04-15 18:18:25.260600] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:36.428 [2024-04-15 18:18:25.260623] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:36.428 [2024-04-15 18:18:25.260638] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:36.428 [2024-04-15 18:18:25.264192] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:36.428 [2024-04-15 18:18:25.273410] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:36.428 [2024-04-15 18:18:25.273862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.428 [2024-04-15 18:18:25.274066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.428 [2024-04-15 18:18:25.274096] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:36.428 [2024-04-15 18:18:25.274114] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:36.428 [2024-04-15 18:18:25.274351] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:36.428 [2024-04-15 18:18:25.274594] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:36.428 [2024-04-15 18:18:25.274617] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:36.428 [2024-04-15 18:18:25.274633] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:36.428 [2024-04-15 18:18:25.278197] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:36.428 [2024-04-15 18:18:25.287424] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:36.428 [2024-04-15 18:18:25.287867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.428 [2024-04-15 18:18:25.288052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.428 [2024-04-15 18:18:25.288090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:36.428 [2024-04-15 18:18:25.288108] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:36.428 [2024-04-15 18:18:25.288346] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:36.428 [2024-04-15 18:18:25.288588] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:36.428 [2024-04-15 18:18:25.288612] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:36.428 [2024-04-15 18:18:25.288627] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:36.428 [2024-04-15 18:18:25.292182] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:36.428 [2024-04-15 18:18:25.301398] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:36.428 [2024-04-15 18:18:25.301842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.428 [2024-04-15 18:18:25.302067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.428 [2024-04-15 18:18:25.302097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:36.428 [2024-04-15 18:18:25.302115] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:36.428 [2024-04-15 18:18:25.302352] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:36.428 [2024-04-15 18:18:25.302595] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:36.428 [2024-04-15 18:18:25.302618] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:36.428 [2024-04-15 18:18:25.302633] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:36.428 [2024-04-15 18:18:25.306192] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:36.428 [2024-04-15 18:18:25.315406] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:36.428 [2024-04-15 18:18:25.315914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.428 [2024-04-15 18:18:25.316116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.428 [2024-04-15 18:18:25.316153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:36.428 [2024-04-15 18:18:25.316171] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:36.428 [2024-04-15 18:18:25.316408] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:36.428 [2024-04-15 18:18:25.316650] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:36.428 [2024-04-15 18:18:25.316674] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:36.428 [2024-04-15 18:18:25.316689] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:36.428 [2024-04-15 18:18:25.320246] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:36.428 [2024-04-15 18:18:25.329256] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:36.428 [2024-04-15 18:18:25.329698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.428 [2024-04-15 18:18:25.329879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.428 [2024-04-15 18:18:25.329908] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:36.428 [2024-04-15 18:18:25.329926] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:36.428 [2024-04-15 18:18:25.330174] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:36.428 [2024-04-15 18:18:25.330417] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:36.428 [2024-04-15 18:18:25.330441] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:36.428 [2024-04-15 18:18:25.330456] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:36.428 [2024-04-15 18:18:25.334005] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:36.428 [2024-04-15 18:18:25.343219] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:36.428 [2024-04-15 18:18:25.343659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.428 [2024-04-15 18:18:25.343841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.428 [2024-04-15 18:18:25.343870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:36.428 [2024-04-15 18:18:25.343887] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:36.428 [2024-04-15 18:18:25.344138] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:36.428 [2024-04-15 18:18:25.344381] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:36.428 [2024-04-15 18:18:25.344404] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:36.428 [2024-04-15 18:18:25.344420] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:36.428 [2024-04-15 18:18:25.347965] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:36.428 [2024-04-15 18:18:25.357186] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:36.428 [2024-04-15 18:18:25.357701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.428 [2024-04-15 18:18:25.357988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.428 [2024-04-15 18:18:25.358016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:36.428 [2024-04-15 18:18:25.358048] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:36.428 [2024-04-15 18:18:25.358297] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:36.428 [2024-04-15 18:18:25.358539] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:36.428 [2024-04-15 18:18:25.358563] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:36.428 [2024-04-15 18:18:25.358578] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:36.428 [2024-04-15 18:18:25.362127] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:36.428 [2024-04-15 18:18:25.371134] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:36.428 [2024-04-15 18:18:25.371573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.428 [2024-04-15 18:18:25.371822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.428 [2024-04-15 18:18:25.371851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420
00:31:36.428 [2024-04-15 18:18:25.371868] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set
00:31:36.428 [2024-04-15 18:18:25.372116] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor
00:31:36.429 [2024-04-15 18:18:25.372359] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:36.429 [2024-04-15 18:18:25.372382] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:36.429 [2024-04-15 18:18:25.372398] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:36.429 [2024-04-15 18:18:25.375961] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:36.689 [2024-04-15 18:18:25.385044] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:36.689 [2024-04-15 18:18:25.385465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.689 [2024-04-15 18:18:25.385639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.689 [2024-04-15 18:18:25.385668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420
00:31:36.689 [2024-04-15 18:18:25.385686] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set
00:31:36.689 [2024-04-15 18:18:25.385923] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor
00:31:36.689 [2024-04-15 18:18:25.386176] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:36.689 [2024-04-15 18:18:25.386201] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:36.689 [2024-04-15 18:18:25.386216] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:36.689 [2024-04-15 18:18:25.389762] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:36.689 [2024-04-15 18:18:25.398975] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:36.689 [2024-04-15 18:18:25.399487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.689 [2024-04-15 18:18:25.399672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.689 [2024-04-15 18:18:25.399703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420
00:31:36.689 [2024-04-15 18:18:25.399720] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set
00:31:36.689 [2024-04-15 18:18:25.399964] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor
00:31:36.689 [2024-04-15 18:18:25.400217] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:36.689 [2024-04-15 18:18:25.400242] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:36.689 [2024-04-15 18:18:25.400257] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:36.689 [2024-04-15 18:18:25.403807] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:36.690 [2024-04-15 18:18:25.412816] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:36.690 [2024-04-15 18:18:25.413269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.690 [2024-04-15 18:18:25.413442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.690 [2024-04-15 18:18:25.413471] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420
00:31:36.690 [2024-04-15 18:18:25.413489] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set
00:31:36.690 [2024-04-15 18:18:25.413726] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor
00:31:36.690 [2024-04-15 18:18:25.413967] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:36.690 [2024-04-15 18:18:25.413991] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:36.690 [2024-04-15 18:18:25.414006] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:36.690 [2024-04-15 18:18:25.417563] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:36.690 [2024-04-15 18:18:25.426777] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:36.690 [2024-04-15 18:18:25.427230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.690 [2024-04-15 18:18:25.427427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.690 [2024-04-15 18:18:25.427457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420
00:31:36.690 [2024-04-15 18:18:25.427475] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set
00:31:36.690 [2024-04-15 18:18:25.427712] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor
00:31:36.690 [2024-04-15 18:18:25.427953] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:36.690 [2024-04-15 18:18:25.427977] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:36.690 [2024-04-15 18:18:25.427992] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:36.690 [2024-04-15 18:18:25.431547] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:36.690 [2024-04-15 18:18:25.440670] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:36.690 [2024-04-15 18:18:25.441262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.690 [2024-04-15 18:18:25.441425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.690 [2024-04-15 18:18:25.441455] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420
00:31:36.690 [2024-04-15 18:18:25.441473] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set
00:31:36.690 [2024-04-15 18:18:25.441711] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor
00:31:36.690 [2024-04-15 18:18:25.441959] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:36.690 [2024-04-15 18:18:25.441984] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:36.690 [2024-04-15 18:18:25.441999] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:36.690 [2024-04-15 18:18:25.445556] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:36.690 [2024-04-15 18:18:25.454559] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:36.690 [2024-04-15 18:18:25.455109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.690 [2024-04-15 18:18:25.455276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.690 [2024-04-15 18:18:25.455305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420
00:31:36.690 [2024-04-15 18:18:25.455323] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set
00:31:36.690 [2024-04-15 18:18:25.455560] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor
00:31:36.690 [2024-04-15 18:18:25.455802] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:36.690 [2024-04-15 18:18:25.455825] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:36.690 [2024-04-15 18:18:25.455841] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:36.690 [2024-04-15 18:18:25.459398] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:36.690 [2024-04-15 18:18:25.468405] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:36.690 [2024-04-15 18:18:25.468862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.690 [2024-04-15 18:18:25.469092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.690 [2024-04-15 18:18:25.469132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420
00:31:36.690 [2024-04-15 18:18:25.469149] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set
00:31:36.690 [2024-04-15 18:18:25.469387] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor
00:31:36.690 [2024-04-15 18:18:25.469628] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:36.690 [2024-04-15 18:18:25.469651] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:36.690 [2024-04-15 18:18:25.469666] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:36.690 [2024-04-15 18:18:25.473223] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:36.690 [2024-04-15 18:18:25.482240] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:36.690 [2024-04-15 18:18:25.482746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.690 [2024-04-15 18:18:25.482939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.690 [2024-04-15 18:18:25.482967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420
00:31:36.690 [2024-04-15 18:18:25.482985] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set
00:31:36.690 [2024-04-15 18:18:25.483233] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor
00:31:36.690 [2024-04-15 18:18:25.483475] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:36.690 [2024-04-15 18:18:25.483504] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:36.690 [2024-04-15 18:18:25.483520] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:36.690 [2024-04-15 18:18:25.487078] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:36.690 [2024-04-15 18:18:25.496103] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:36.690 [2024-04-15 18:18:25.496614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.690 [2024-04-15 18:18:25.496834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.690 [2024-04-15 18:18:25.496863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420
00:31:36.690 [2024-04-15 18:18:25.496880] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set
00:31:36.690 [2024-04-15 18:18:25.497127] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor
00:31:36.690 [2024-04-15 18:18:25.497370] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:36.690 [2024-04-15 18:18:25.497394] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:36.690 [2024-04-15 18:18:25.497409] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:36.690 [2024-04-15 18:18:25.500958] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:36.690 [2024-04-15 18:18:25.509970] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:36.690 [2024-04-15 18:18:25.510573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.690 [2024-04-15 18:18:25.510780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.690 [2024-04-15 18:18:25.510809] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420
00:31:36.690 [2024-04-15 18:18:25.510827] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set
00:31:36.690 [2024-04-15 18:18:25.511074] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor
00:31:36.690 [2024-04-15 18:18:25.511316] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:36.690 [2024-04-15 18:18:25.511340] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:36.690 [2024-04-15 18:18:25.511355] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:36.690 [2024-04-15 18:18:25.514905] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:36.690 [2024-04-15 18:18:25.523941] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:36.690 [2024-04-15 18:18:25.524402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.690 [2024-04-15 18:18:25.524638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.690 [2024-04-15 18:18:25.524667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420
00:31:36.690 [2024-04-15 18:18:25.524684] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set
00:31:36.691 [2024-04-15 18:18:25.524921] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor
00:31:36.691 [2024-04-15 18:18:25.525177] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:36.691 [2024-04-15 18:18:25.525202] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:36.691 [2024-04-15 18:18:25.525224] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:36.691 [2024-04-15 18:18:25.528778] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:36.691 [2024-04-15 18:18:25.537806] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:36.691 [2024-04-15 18:18:25.538247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.691 [2024-04-15 18:18:25.538428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.691 [2024-04-15 18:18:25.538456] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420
00:31:36.691 [2024-04-15 18:18:25.538474] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set
00:31:36.691 [2024-04-15 18:18:25.538711] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor
00:31:36.691 [2024-04-15 18:18:25.538952] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:36.691 [2024-04-15 18:18:25.538976] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:36.691 [2024-04-15 18:18:25.538992] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:36.691 [2024-04-15 18:18:25.542548] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:36.691 [2024-04-15 18:18:25.551785] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:36.691 [2024-04-15 18:18:25.552195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.691 [2024-04-15 18:18:25.552362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.691 [2024-04-15 18:18:25.552391] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420
00:31:36.691 [2024-04-15 18:18:25.552408] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set
00:31:36.691 [2024-04-15 18:18:25.552645] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor
00:31:36.691 [2024-04-15 18:18:25.552886] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:36.691 [2024-04-15 18:18:25.552910] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:36.691 [2024-04-15 18:18:25.552925] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:36.691 [2024-04-15 18:18:25.556482] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:36.691 [2024-04-15 18:18:25.565719] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:36.691 [2024-04-15 18:18:25.566138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.691 [2024-04-15 18:18:25.566309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.691 [2024-04-15 18:18:25.566338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420
00:31:36.691 [2024-04-15 18:18:25.566355] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set
00:31:36.691 [2024-04-15 18:18:25.566592] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor
00:31:36.691 [2024-04-15 18:18:25.566834] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:36.691 [2024-04-15 18:18:25.566857] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:36.691 [2024-04-15 18:18:25.566873] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:36.691 [2024-04-15 18:18:25.570442] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:36.691 [2024-04-15 18:18:25.579676] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:36.691 [2024-04-15 18:18:25.580147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.691 [2024-04-15 18:18:25.580338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.691 [2024-04-15 18:18:25.580367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420
00:31:36.691 [2024-04-15 18:18:25.580385] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set
00:31:36.691 [2024-04-15 18:18:25.580623] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor
00:31:36.691 [2024-04-15 18:18:25.580865] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:36.691 [2024-04-15 18:18:25.580889] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:36.691 [2024-04-15 18:18:25.580905] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:36.691 [2024-04-15 18:18:25.584462] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:36.691 [2024-04-15 18:18:25.593687] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:36.691 [2024-04-15 18:18:25.594186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.691 [2024-04-15 18:18:25.594499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.691 [2024-04-15 18:18:25.594528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420
00:31:36.691 [2024-04-15 18:18:25.594546] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set
00:31:36.691 [2024-04-15 18:18:25.594783] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor
00:31:36.691 [2024-04-15 18:18:25.595024] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:36.691 [2024-04-15 18:18:25.595048] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:36.691 [2024-04-15 18:18:25.595073] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:36.691 [2024-04-15 18:18:25.598623] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:36.691 [2024-04-15 18:18:25.607639] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:36.691 [2024-04-15 18:18:25.608101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.691 [2024-04-15 18:18:25.608267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.691 [2024-04-15 18:18:25.608296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420
00:31:36.691 [2024-04-15 18:18:25.608313] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set
00:31:36.691 [2024-04-15 18:18:25.608550] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor
00:31:36.691 [2024-04-15 18:18:25.608791] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:36.691 [2024-04-15 18:18:25.608815] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:36.691 [2024-04-15 18:18:25.608830] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:36.691 [2024-04-15 18:18:25.612408] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:36.691 [2024-04-15 18:18:25.621655] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:36.691 [2024-04-15 18:18:25.622155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.691 [2024-04-15 18:18:25.622309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.691 [2024-04-15 18:18:25.622338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420
00:31:36.691 [2024-04-15 18:18:25.622356] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set
00:31:36.691 [2024-04-15 18:18:25.622594] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor
00:31:36.691 [2024-04-15 18:18:25.622836] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:36.691 [2024-04-15 18:18:25.622859] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:36.691 [2024-04-15 18:18:25.622874] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:36.691 [2024-04-15 18:18:25.626436] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:36.691 [2024-04-15 18:18:25.635659] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:36.691 [2024-04-15 18:18:25.636150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.691 [2024-04-15 18:18:25.636300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.691 [2024-04-15 18:18:25.636340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420
00:31:36.691 [2024-04-15 18:18:25.636357] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set
00:31:36.691 [2024-04-15 18:18:25.636603] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor
00:31:36.691 [2024-04-15 18:18:25.636846] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:36.691 [2024-04-15 18:18:25.636870] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:36.691 [2024-04-15 18:18:25.636885] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:36.691 [2024-04-15 18:18:25.640469] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:36.952 [2024-04-15 18:18:25.649556] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:36.952 [2024-04-15 18:18:25.650063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.952 [2024-04-15 18:18:25.650249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.952 [2024-04-15 18:18:25.650278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420
00:31:36.952 [2024-04-15 18:18:25.650295] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set
00:31:36.952 [2024-04-15 18:18:25.650533] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor
00:31:36.952 [2024-04-15 18:18:25.650774] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:36.952 [2024-04-15 18:18:25.650799] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:36.952 [2024-04-15 18:18:25.650814] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:36.952 [2024-04-15 18:18:25.654367] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:36.952 [2024-04-15 18:18:25.663381] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:36.952 [2024-04-15 18:18:25.663907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.952 [2024-04-15 18:18:25.664194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.952 [2024-04-15 18:18:25.664223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420
00:31:36.952 [2024-04-15 18:18:25.664241] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set
00:31:36.952 [2024-04-15 18:18:25.664478] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor
00:31:36.952 [2024-04-15 18:18:25.664720] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:36.952 [2024-04-15 18:18:25.664744] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:36.952 [2024-04-15 18:18:25.664759] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:36.952 [2024-04-15 18:18:25.668320] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:36.952 [2024-04-15 18:18:25.677326] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:36.952 [2024-04-15 18:18:25.677800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.952 [2024-04-15 18:18:25.677976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.952 [2024-04-15 18:18:25.678005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420
00:31:36.952 [2024-04-15 18:18:25.678023] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set
00:31:36.952 [2024-04-15 18:18:25.678268] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor
00:31:36.952 [2024-04-15 18:18:25.678511] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:36.952 [2024-04-15 18:18:25.678535] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:36.952 [2024-04-15 18:18:25.678550] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:36.952 [2024-04-15 18:18:25.682105] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:36.952 [2024-04-15 18:18:25.691437] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:36.952 [2024-04-15 18:18:25.691902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.952 [2024-04-15 18:18:25.692115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.952 [2024-04-15 18:18:25.692148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420
00:31:36.952 [2024-04-15 18:18:25.692166] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set
00:31:36.952 [2024-04-15 18:18:25.692404] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor
00:31:36.952 [2024-04-15 18:18:25.692646] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:36.952 [2024-04-15 18:18:25.692670] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:36.952 [2024-04-15 18:18:25.692686] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:36.952 [2024-04-15 18:18:25.696241] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:36.952 [2024-04-15 18:18:25.705261] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:36.952 [2024-04-15 18:18:25.705776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.952 [2024-04-15 18:18:25.706035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.952 [2024-04-15 18:18:25.706078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420
00:31:36.952 [2024-04-15 18:18:25.706098] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set
00:31:36.952 [2024-04-15 18:18:25.706336] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor
00:31:36.952 [2024-04-15 18:18:25.706577] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:36.952 [2024-04-15 18:18:25.706601] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:36.952 [2024-04-15 18:18:25.706617] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:36.952 [2024-04-15 18:18:25.710165] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:36.952 [2024-04-15 18:18:25.719177] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:36.952 [2024-04-15 18:18:25.719692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.952 [2024-04-15 18:18:25.719972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.952 [2024-04-15 18:18:25.720001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420
00:31:36.952 [2024-04-15 18:18:25.720018] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set
00:31:36.952 [2024-04-15 18:18:25.720265] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor
00:31:36.952 [2024-04-15 18:18:25.720507] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:36.952 [2024-04-15 18:18:25.720531] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:36.952 [2024-04-15 18:18:25.720546] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:36.952 [2024-04-15 18:18:25.724101] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:36.952 [2024-04-15 18:18:25.733117] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:36.952 [2024-04-15 18:18:25.733619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.952 [2024-04-15 18:18:25.733908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.952 [2024-04-15 18:18:25.733937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420
00:31:36.952 [2024-04-15 18:18:25.733955] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set
00:31:36.952 [2024-04-15 18:18:25.734205] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor
00:31:36.953 [2024-04-15 18:18:25.734447] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:36.953 [2024-04-15 18:18:25.734471] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:36.953 [2024-04-15 18:18:25.734486] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:36.953 [2024-04-15 18:18:25.738038] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:36.953 [2024-04-15 18:18:25.747047] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:36.953 [2024-04-15 18:18:25.747554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.953 [2024-04-15 18:18:25.747835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.953 [2024-04-15 18:18:25.747863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420
00:31:36.953 [2024-04-15 18:18:25.747886] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set
00:31:36.953 [2024-04-15 18:18:25.748137] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor
00:31:36.953 [2024-04-15 18:18:25.748379] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:36.953 [2024-04-15 18:18:25.748403] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:36.953 [2024-04-15 18:18:25.748418] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:36.953 [2024-04-15 18:18:25.751965] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:36.953 [2024-04-15 18:18:25.760980] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:36.953 [2024-04-15 18:18:25.761442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.953 [2024-04-15 18:18:25.761668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.953 [2024-04-15 18:18:25.761696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420
00:31:36.953 [2024-04-15 18:18:25.761714] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set
00:31:36.953 [2024-04-15 18:18:25.761951] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor
00:31:36.953 [2024-04-15 18:18:25.762204] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:36.953 [2024-04-15 18:18:25.762229] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:36.953 [2024-04-15 18:18:25.762245] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:36.953 [2024-04-15 18:18:25.765792] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:36.953 [2024-04-15 18:18:25.774804] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:36.953 [2024-04-15 18:18:25.775294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.953 [2024-04-15 18:18:25.775592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.953 [2024-04-15 18:18:25.775620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420
00:31:36.953 [2024-04-15 18:18:25.775638] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set
00:31:36.953 [2024-04-15 18:18:25.775874] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor
00:31:36.953 [2024-04-15 18:18:25.776128] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:36.953 [2024-04-15 18:18:25.776153] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:36.953 [2024-04-15 18:18:25.776168] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:36.953 [2024-04-15 18:18:25.779715] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:36.953 [2024-04-15 18:18:25.788718] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:36.953 [2024-04-15 18:18:25.789212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.953 [2024-04-15 18:18:25.789563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.953 [2024-04-15 18:18:25.789606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420
00:31:36.953 [2024-04-15 18:18:25.789627] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set
00:31:36.953 [2024-04-15 18:18:25.789877] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor
00:31:36.953 [2024-04-15 18:18:25.790134] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:36.953 [2024-04-15 18:18:25.790160] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:36.953 [2024-04-15 18:18:25.790175] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:36.953 [2024-04-15 18:18:25.793732] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:36.953 [2024-04-15 18:18:25.802536] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:36.953 [2024-04-15 18:18:25.803141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.953 [2024-04-15 18:18:25.803415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.953 [2024-04-15 18:18:25.803446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420
00:31:36.953 [2024-04-15 18:18:25.803465] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set
00:31:36.953 [2024-04-15 18:18:25.803709] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor
00:31:36.953 [2024-04-15 18:18:25.803952] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:36.953 [2024-04-15 18:18:25.803976] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:36.953 [2024-04-15 18:18:25.803992] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:36.953 [2024-04-15 18:18:25.807557] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:36.953 [2024-04-15 18:18:25.816361] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:36.953 [2024-04-15 18:18:25.816964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.953 [2024-04-15 18:18:25.817344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.953 [2024-04-15 18:18:25.817388] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420
00:31:36.953 [2024-04-15 18:18:25.817408] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set
00:31:36.953 [2024-04-15 18:18:25.817652] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor
00:31:36.953 [2024-04-15 18:18:25.817896] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:36.953 [2024-04-15 18:18:25.817919] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:36.953 [2024-04-15 18:18:25.817936] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:36.953 [2024-04-15 18:18:25.821500] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:36.953 [2024-04-15 18:18:25.830312] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:36.953 [2024-04-15 18:18:25.830828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.953 [2024-04-15 18:18:25.831092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.953 [2024-04-15 18:18:25.831122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420
00:31:36.953 [2024-04-15 18:18:25.831140] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set
00:31:36.953 [2024-04-15 18:18:25.831378] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor
00:31:36.953 [2024-04-15 18:18:25.831627] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:36.953 [2024-04-15 18:18:25.831652] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:36.953 [2024-04-15 18:18:25.831667] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:36.953 [2024-04-15 18:18:25.835228] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:36.953 [2024-04-15 18:18:25.844242] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:36.953 [2024-04-15 18:18:25.844740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.953 [2024-04-15 18:18:25.845072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.953 [2024-04-15 18:18:25.845101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420
00:31:36.953 [2024-04-15 18:18:25.845119] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set
00:31:36.953 [2024-04-15 18:18:25.845357] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor
00:31:36.953 [2024-04-15 18:18:25.845599] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:36.953 [2024-04-15 18:18:25.845622] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:36.953 [2024-04-15 18:18:25.845638] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:36.953 [2024-04-15 18:18:25.849197] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:36.953 [2024-04-15 18:18:25.858211] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:36.953 [2024-04-15 18:18:25.858708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.953 [2024-04-15 18:18:25.858956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.954 [2024-04-15 18:18:25.858985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420
00:31:36.954 [2024-04-15 18:18:25.859002] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set
00:31:36.954 [2024-04-15 18:18:25.859253] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor
00:31:36.954 [2024-04-15 18:18:25.859495] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:36.954 [2024-04-15 18:18:25.859519] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:36.954 [2024-04-15 18:18:25.859534] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:36.954 [2024-04-15 18:18:25.863091] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:36.954 [2024-04-15 18:18:25.872107] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:36.954 [2024-04-15 18:18:25.872598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.954 [2024-04-15 18:18:25.872845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.954 [2024-04-15 18:18:25.872874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420
00:31:36.954 [2024-04-15 18:18:25.872892] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set
00:31:36.954 [2024-04-15 18:18:25.873141] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor
00:31:36.954 [2024-04-15 18:18:25.873384] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:36.954 [2024-04-15 18:18:25.873416] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:36.954 [2024-04-15 18:18:25.873433] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:36.954 [2024-04-15 18:18:25.876984] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:36.954 [2024-04-15 18:18:25.885996] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:36.954 [2024-04-15 18:18:25.886505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.954 [2024-04-15 18:18:25.886741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.954 [2024-04-15 18:18:25.886770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420
00:31:36.954 [2024-04-15 18:18:25.886787] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set
00:31:36.954 [2024-04-15 18:18:25.887024] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor
00:31:36.954 [2024-04-15 18:18:25.887276] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:36.954 [2024-04-15 18:18:25.887300] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:36.954 [2024-04-15 18:18:25.887316] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:36.954 [2024-04-15 18:18:25.890865] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:36.954 [2024-04-15 18:18:25.899901] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:36.954 [2024-04-15 18:18:25.900367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.954 [2024-04-15 18:18:25.900531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.954 [2024-04-15 18:18:25.900560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420
00:31:36.954 [2024-04-15 18:18:25.900577] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set
00:31:36.954 [2024-04-15 18:18:25.900815] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor
00:31:36.954 [2024-04-15 18:18:25.901056] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:36.954 [2024-04-15 18:18:25.901102] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:36.954 [2024-04-15 18:18:25.901121] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:37.214 [2024-04-15 18:18:25.904723] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:37.214 [2024-04-15 18:18:25.913769] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:37.214 [2024-04-15 18:18:25.914270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.214 [2024-04-15 18:18:25.914531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.214 [2024-04-15 18:18:25.914560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420
00:31:37.214 [2024-04-15 18:18:25.914577] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set
00:31:37.214 [2024-04-15 18:18:25.914815] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor
00:31:37.214 [2024-04-15 18:18:25.915067] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:37.214 [2024-04-15 18:18:25.915091] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:37.214 [2024-04-15 18:18:25.915113] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:37.214 [2024-04-15 18:18:25.918662] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:37.214 [2024-04-15 18:18:25.927670] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:37.214 [2024-04-15 18:18:25.928188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.214 [2024-04-15 18:18:25.928417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.214 [2024-04-15 18:18:25.928446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:37.214 [2024-04-15 18:18:25.928463] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:37.214 [2024-04-15 18:18:25.928701] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:37.214 [2024-04-15 18:18:25.928942] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:37.214 [2024-04-15 18:18:25.928965] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:37.214 [2024-04-15 18:18:25.928981] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:37.214 [2024-04-15 18:18:25.932534] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:37.214 [2024-04-15 18:18:25.941731] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:37.214 [2024-04-15 18:18:25.942253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.214 [2024-04-15 18:18:25.942451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.214 [2024-04-15 18:18:25.942493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:37.214 [2024-04-15 18:18:25.942511] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:37.214 [2024-04-15 18:18:25.942764] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:37.214 [2024-04-15 18:18:25.943007] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:37.214 [2024-04-15 18:18:25.943031] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:37.214 [2024-04-15 18:18:25.943046] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:37.214 [2024-04-15 18:18:25.946603] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:37.214 [2024-04-15 18:18:25.955617] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:37.214 [2024-04-15 18:18:25.956121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.214 [2024-04-15 18:18:25.956291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.214 [2024-04-15 18:18:25.956320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:37.214 [2024-04-15 18:18:25.956337] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:37.214 [2024-04-15 18:18:25.956575] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:37.214 [2024-04-15 18:18:25.956817] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:37.214 [2024-04-15 18:18:25.956841] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:37.214 [2024-04-15 18:18:25.956858] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:37.214 [2024-04-15 18:18:25.960422] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:37.214 [2024-04-15 18:18:25.969434] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:37.214 [2024-04-15 18:18:25.969929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.214 [2024-04-15 18:18:25.970186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.214 [2024-04-15 18:18:25.970223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:37.214 [2024-04-15 18:18:25.970240] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:37.214 [2024-04-15 18:18:25.970478] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:37.214 [2024-04-15 18:18:25.970720] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:37.214 [2024-04-15 18:18:25.970743] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:37.214 [2024-04-15 18:18:25.970759] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:37.214 [2024-04-15 18:18:25.974327] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:37.214 [2024-04-15 18:18:25.983345] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:37.214 [2024-04-15 18:18:25.983833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.214 [2024-04-15 18:18:25.984045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.214 [2024-04-15 18:18:25.984083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:37.214 [2024-04-15 18:18:25.984102] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:37.214 [2024-04-15 18:18:25.984339] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:37.214 [2024-04-15 18:18:25.984581] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:37.214 [2024-04-15 18:18:25.984605] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:37.214 [2024-04-15 18:18:25.984621] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:37.214 [2024-04-15 18:18:25.988178] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:37.214 [2024-04-15 18:18:25.997278] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:37.214 [2024-04-15 18:18:25.997804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.214 [2024-04-15 18:18:25.998071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.214 [2024-04-15 18:18:25.998101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:37.214 [2024-04-15 18:18:25.998118] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:37.214 [2024-04-15 18:18:25.998355] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:37.214 [2024-04-15 18:18:25.998597] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:37.214 [2024-04-15 18:18:25.998621] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:37.214 [2024-04-15 18:18:25.998637] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:37.214 [2024-04-15 18:18:26.002195] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:37.214 [2024-04-15 18:18:26.011226] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:37.215 [2024-04-15 18:18:26.011735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.215 [2024-04-15 18:18:26.011980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.215 [2024-04-15 18:18:26.012009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:37.215 [2024-04-15 18:18:26.012026] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:37.215 [2024-04-15 18:18:26.012286] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:37.215 [2024-04-15 18:18:26.012529] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:37.215 [2024-04-15 18:18:26.012552] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:37.215 [2024-04-15 18:18:26.012568] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:37.215 [2024-04-15 18:18:26.016125] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:37.215 [2024-04-15 18:18:26.025154] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:37.215 [2024-04-15 18:18:26.025616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.215 [2024-04-15 18:18:26.025838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.215 [2024-04-15 18:18:26.025867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:37.215 [2024-04-15 18:18:26.025884] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:37.215 [2024-04-15 18:18:26.026135] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:37.215 [2024-04-15 18:18:26.026378] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:37.215 [2024-04-15 18:18:26.026402] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:37.215 [2024-04-15 18:18:26.026418] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:37.215 [2024-04-15 18:18:26.029967] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:37.215 [2024-04-15 18:18:26.038984] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:37.215 [2024-04-15 18:18:26.039506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.215 [2024-04-15 18:18:26.039762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.215 [2024-04-15 18:18:26.039790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:37.215 [2024-04-15 18:18:26.039808] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:37.215 [2024-04-15 18:18:26.040045] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:37.215 [2024-04-15 18:18:26.040298] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:37.215 [2024-04-15 18:18:26.040323] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:37.215 [2024-04-15 18:18:26.040338] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:37.215 [2024-04-15 18:18:26.043886] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:37.215 [2024-04-15 18:18:26.052910] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:37.215 [2024-04-15 18:18:26.053525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.215 [2024-04-15 18:18:26.053775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.215 [2024-04-15 18:18:26.053807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:37.215 [2024-04-15 18:18:26.053826] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:37.215 [2024-04-15 18:18:26.054082] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:37.215 [2024-04-15 18:18:26.054327] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:37.215 [2024-04-15 18:18:26.054351] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:37.215 [2024-04-15 18:18:26.054367] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:37.215 [2024-04-15 18:18:26.057922] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:37.215 [2024-04-15 18:18:26.066726] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:37.215 [2024-04-15 18:18:26.067340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.215 [2024-04-15 18:18:26.067723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.215 [2024-04-15 18:18:26.067755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:37.215 [2024-04-15 18:18:26.067773] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:37.215 [2024-04-15 18:18:26.068017] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:37.215 [2024-04-15 18:18:26.068277] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:37.215 [2024-04-15 18:18:26.068303] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:37.215 [2024-04-15 18:18:26.068319] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:37.215 [2024-04-15 18:18:26.071873] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:37.215 [2024-04-15 18:18:26.080680] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:37.215 [2024-04-15 18:18:26.081175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.215 [2024-04-15 18:18:26.081409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.215 [2024-04-15 18:18:26.081438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:37.215 [2024-04-15 18:18:26.081456] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:37.215 [2024-04-15 18:18:26.081693] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:37.215 [2024-04-15 18:18:26.081935] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:37.215 [2024-04-15 18:18:26.081959] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:37.215 [2024-04-15 18:18:26.081975] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:37.215 [2024-04-15 18:18:26.085538] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:37.215 [2024-04-15 18:18:26.094556] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:37.215 [2024-04-15 18:18:26.095088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.215 [2024-04-15 18:18:26.095391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.215 [2024-04-15 18:18:26.095428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:37.215 [2024-04-15 18:18:26.095447] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:37.215 [2024-04-15 18:18:26.095685] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:37.215 [2024-04-15 18:18:26.095927] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:37.215 [2024-04-15 18:18:26.095951] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:37.215 [2024-04-15 18:18:26.095967] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:37.215 [2024-04-15 18:18:26.099533] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:37.215 [2024-04-15 18:18:26.108550] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:37.215 [2024-04-15 18:18:26.108978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.215 [2024-04-15 18:18:26.109210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.215 [2024-04-15 18:18:26.109240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:37.215 [2024-04-15 18:18:26.109258] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:37.215 [2024-04-15 18:18:26.109495] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:37.215 [2024-04-15 18:18:26.109736] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:37.215 [2024-04-15 18:18:26.109760] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:37.215 [2024-04-15 18:18:26.109776] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:37.215 [2024-04-15 18:18:26.113341] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:37.215 [2024-04-15 18:18:26.122564] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:37.215 [2024-04-15 18:18:26.123117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.215 [2024-04-15 18:18:26.123401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.215 [2024-04-15 18:18:26.123430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:37.215 [2024-04-15 18:18:26.123447] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:37.215 [2024-04-15 18:18:26.123685] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:37.215 [2024-04-15 18:18:26.123927] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:37.216 [2024-04-15 18:18:26.123951] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:37.216 [2024-04-15 18:18:26.123966] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:37.216 [2024-04-15 18:18:26.127526] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:37.216 [2024-04-15 18:18:26.136537] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:37.216 [2024-04-15 18:18:26.137067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.216 [2024-04-15 18:18:26.137354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.216 [2024-04-15 18:18:26.137383] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:37.216 [2024-04-15 18:18:26.137407] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:37.216 [2024-04-15 18:18:26.137646] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:37.216 [2024-04-15 18:18:26.137888] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:37.216 [2024-04-15 18:18:26.137912] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:37.216 [2024-04-15 18:18:26.137927] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:37.216 [2024-04-15 18:18:26.141487] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:37.216 [2024-04-15 18:18:26.150506] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:37.216 [2024-04-15 18:18:26.151134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.216 [2024-04-15 18:18:26.151358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.216 [2024-04-15 18:18:26.151390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:37.216 [2024-04-15 18:18:26.151409] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:37.216 [2024-04-15 18:18:26.151653] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:37.216 [2024-04-15 18:18:26.151897] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:37.216 [2024-04-15 18:18:26.151922] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:37.216 [2024-04-15 18:18:26.151938] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:37.216 [2024-04-15 18:18:26.155497] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:37.216 [2024-04-15 18:18:26.164572] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:37.216 [2024-04-15 18:18:26.165073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.216 [2024-04-15 18:18:26.165259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.216 [2024-04-15 18:18:26.165289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:37.216 [2024-04-15 18:18:26.165307] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:37.216 [2024-04-15 18:18:26.165545] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:37.216 [2024-04-15 18:18:26.165804] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:37.216 [2024-04-15 18:18:26.165829] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:37.216 [2024-04-15 18:18:26.165845] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:37.477 [2024-04-15 18:18:26.169454] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:37.477 [2024-04-15 18:18:26.178512] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:37.477 [2024-04-15 18:18:26.178975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.477 [2024-04-15 18:18:26.179193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.477 [2024-04-15 18:18:26.179224] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:37.477 [2024-04-15 18:18:26.179242] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:37.477 [2024-04-15 18:18:26.179487] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:37.477 [2024-04-15 18:18:26.179729] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:37.477 [2024-04-15 18:18:26.179753] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:37.477 [2024-04-15 18:18:26.179769] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:37.477 [2024-04-15 18:18:26.183324] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:37.477 [2024-04-15 18:18:26.192502] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:37.477 [2024-04-15 18:18:26.193008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.477 [2024-04-15 18:18:26.193228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.477 [2024-04-15 18:18:26.193258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:37.477 [2024-04-15 18:18:26.193277] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:37.477 [2024-04-15 18:18:26.193514] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:37.477 [2024-04-15 18:18:26.193757] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:37.477 [2024-04-15 18:18:26.193781] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:37.477 [2024-04-15 18:18:26.193797] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:37.477 [2024-04-15 18:18:26.197352] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:37.477 [2024-04-15 18:18:26.206360] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:37.477 [2024-04-15 18:18:26.206832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.477 [2024-04-15 18:18:26.207077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.477 [2024-04-15 18:18:26.207107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:37.477 [2024-04-15 18:18:26.207125] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:37.477 [2024-04-15 18:18:26.207363] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:37.477 [2024-04-15 18:18:26.207605] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:37.477 [2024-04-15 18:18:26.207629] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:37.477 [2024-04-15 18:18:26.207644] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:37.477 [2024-04-15 18:18:26.211202] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:37.477 [2024-04-15 18:18:26.220209] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:37.477 [2024-04-15 18:18:26.220698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.477 [2024-04-15 18:18:26.220995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.477 [2024-04-15 18:18:26.221023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:37.477 [2024-04-15 18:18:26.221041] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:37.477 [2024-04-15 18:18:26.221289] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:37.477 [2024-04-15 18:18:26.221548] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:37.477 [2024-04-15 18:18:26.221572] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:37.477 [2024-04-15 18:18:26.221587] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:37.477 [2024-04-15 18:18:26.225141] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:37.477 [2024-04-15 18:18:26.234161] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:37.477 [2024-04-15 18:18:26.234625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.477 [2024-04-15 18:18:26.234820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.477 [2024-04-15 18:18:26.234849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:37.477 [2024-04-15 18:18:26.234866] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:37.477 [2024-04-15 18:18:26.235117] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:37.477 [2024-04-15 18:18:26.235360] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:37.477 [2024-04-15 18:18:26.235384] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:37.477 [2024-04-15 18:18:26.235399] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:37.477 [2024-04-15 18:18:26.238951] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:37.477 [2024-04-15 18:18:26.247989] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:37.477 [2024-04-15 18:18:26.248487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.477 [2024-04-15 18:18:26.248664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.477 [2024-04-15 18:18:26.248693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:37.477 [2024-04-15 18:18:26.248711] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:37.477 [2024-04-15 18:18:26.248948] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:37.477 [2024-04-15 18:18:26.249201] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:37.477 [2024-04-15 18:18:26.249226] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:37.477 [2024-04-15 18:18:26.249241] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:37.477 [2024-04-15 18:18:26.252790] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:37.477 [2024-04-15 18:18:26.261818] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:37.477 [2024-04-15 18:18:26.262280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.477 [2024-04-15 18:18:26.262476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.477 [2024-04-15 18:18:26.262506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:37.477 [2024-04-15 18:18:26.262529] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:37.477 [2024-04-15 18:18:26.262766] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:37.477 [2024-04-15 18:18:26.263008] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:37.477 [2024-04-15 18:18:26.263038] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:37.477 [2024-04-15 18:18:26.263054] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:37.477 [2024-04-15 18:18:26.266614] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:37.477 [2024-04-15 18:18:26.275829] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:37.477 [2024-04-15 18:18:26.276242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.477 [2024-04-15 18:18:26.276416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.477 [2024-04-15 18:18:26.276445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:37.477 [2024-04-15 18:18:26.276463] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:37.477 [2024-04-15 18:18:26.276699] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:37.477 [2024-04-15 18:18:26.276941] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:37.477 [2024-04-15 18:18:26.276964] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:37.477 [2024-04-15 18:18:26.276980] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:37.477 [2024-04-15 18:18:26.280536] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:37.477 [2024-04-15 18:18:26.289752] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:37.477 [2024-04-15 18:18:26.290189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.478 [2024-04-15 18:18:26.290359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.478 [2024-04-15 18:18:26.290388] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:37.478 [2024-04-15 18:18:26.290406] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:37.478 [2024-04-15 18:18:26.290643] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:37.478 [2024-04-15 18:18:26.290885] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:37.478 [2024-04-15 18:18:26.290909] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:37.478 [2024-04-15 18:18:26.290925] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:37.478 [2024-04-15 18:18:26.294479] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:37.478 [2024-04-15 18:18:26.303695] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:37.478 [2024-04-15 18:18:26.304162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.478 [2024-04-15 18:18:26.304354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.478 [2024-04-15 18:18:26.304382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:37.478 [2024-04-15 18:18:26.304400] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:37.478 [2024-04-15 18:18:26.304637] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:37.478 [2024-04-15 18:18:26.304878] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:37.478 [2024-04-15 18:18:26.304901] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:37.478 [2024-04-15 18:18:26.304924] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:37.478 [2024-04-15 18:18:26.308488] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:37.478 [2024-04-15 18:18:26.317718] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:37.478 [2024-04-15 18:18:26.318235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.478 [2024-04-15 18:18:26.318494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.478 [2024-04-15 18:18:26.318523] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:37.478 [2024-04-15 18:18:26.318541] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:37.478 [2024-04-15 18:18:26.318778] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:37.478 [2024-04-15 18:18:26.319019] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:37.478 [2024-04-15 18:18:26.319043] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:37.478 [2024-04-15 18:18:26.319068] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:37.478 [2024-04-15 18:18:26.322645] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:37.478 [2024-04-15 18:18:26.331661] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:37.478 [2024-04-15 18:18:26.332131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.478 [2024-04-15 18:18:26.332361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.478 [2024-04-15 18:18:26.332390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:37.478 [2024-04-15 18:18:26.332407] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:37.478 [2024-04-15 18:18:26.332645] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:37.478 [2024-04-15 18:18:26.332886] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:37.478 [2024-04-15 18:18:26.332910] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:37.478 [2024-04-15 18:18:26.332925] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:37.478 [2024-04-15 18:18:26.336488] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:37.478 [2024-04-15 18:18:26.345500] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:37.478 [2024-04-15 18:18:26.346137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.478 [2024-04-15 18:18:26.346336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.478 [2024-04-15 18:18:26.346367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:37.478 [2024-04-15 18:18:26.346386] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:37.478 [2024-04-15 18:18:26.346630] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:37.478 [2024-04-15 18:18:26.346873] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:37.478 [2024-04-15 18:18:26.346897] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:37.478 [2024-04-15 18:18:26.346913] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:37.478 [2024-04-15 18:18:26.350478] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:37.478 [2024-04-15 18:18:26.359495] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:37.478 [2024-04-15 18:18:26.360048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.478 [2024-04-15 18:18:26.360332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.478 [2024-04-15 18:18:26.360361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:37.478 [2024-04-15 18:18:26.360379] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:37.478 [2024-04-15 18:18:26.360616] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:37.478 [2024-04-15 18:18:26.360858] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:37.478 [2024-04-15 18:18:26.360883] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:37.478 [2024-04-15 18:18:26.360898] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:37.478 [2024-04-15 18:18:26.364463] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:37.478 [2024-04-15 18:18:26.373483] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:37.478 [2024-04-15 18:18:26.374108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.478 [2024-04-15 18:18:26.374413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.478 [2024-04-15 18:18:26.374445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:37.478 [2024-04-15 18:18:26.374463] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:37.478 [2024-04-15 18:18:26.374707] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:37.478 [2024-04-15 18:18:26.374951] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:37.478 [2024-04-15 18:18:26.374975] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:37.478 [2024-04-15 18:18:26.374990] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:37.478 [2024-04-15 18:18:26.378554] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:37.478 [2024-04-15 18:18:26.387363] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:37.478 [2024-04-15 18:18:26.387871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.478 [2024-04-15 18:18:26.388092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.478 [2024-04-15 18:18:26.388122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:37.478 [2024-04-15 18:18:26.388140] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:37.478 [2024-04-15 18:18:26.388378] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:37.478 [2024-04-15 18:18:26.388620] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:37.478 [2024-04-15 18:18:26.388644] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:37.478 [2024-04-15 18:18:26.388659] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:37.478 [2024-04-15 18:18:26.392221] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:37.478 [2024-04-15 18:18:26.401245] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:37.478 [2024-04-15 18:18:26.401760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.478 [2024-04-15 18:18:26.402000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.478 [2024-04-15 18:18:26.402029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:37.478 [2024-04-15 18:18:26.402047] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:37.478 [2024-04-15 18:18:26.402295] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:37.478 [2024-04-15 18:18:26.402537] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:37.478 [2024-04-15 18:18:26.402561] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:37.478 [2024-04-15 18:18:26.402576] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:37.478 [2024-04-15 18:18:26.406134] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:37.478 [2024-04-15 18:18:26.415146] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:37.478 [2024-04-15 18:18:26.415642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.478 [2024-04-15 18:18:26.415868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.478 [2024-04-15 18:18:26.415897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:37.478 [2024-04-15 18:18:26.415914] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:37.478 [2024-04-15 18:18:26.416165] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:37.479 [2024-04-15 18:18:26.416407] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:37.479 [2024-04-15 18:18:26.416431] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:37.479 [2024-04-15 18:18:26.416446] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:37.479 [2024-04-15 18:18:26.419996] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:37.479 [2024-04-15 18:18:26.429072] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:37.739 [2024-04-15 18:18:26.429495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.739 [2024-04-15 18:18:26.429692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.739 [2024-04-15 18:18:26.429722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:37.739 [2024-04-15 18:18:26.429740] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:37.739 [2024-04-15 18:18:26.429978] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:37.739 [2024-04-15 18:18:26.430233] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:37.739 [2024-04-15 18:18:26.430258] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:37.739 [2024-04-15 18:18:26.430273] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:37.739 [2024-04-15 18:18:26.433830] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:37.739 [2024-04-15 18:18:26.443159] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:37.739 [2024-04-15 18:18:26.443676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.740 [2024-04-15 18:18:26.443835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.740 [2024-04-15 18:18:26.443865] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420 00:31:37.740 [2024-04-15 18:18:26.443883] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set 00:31:37.740 [2024-04-15 18:18:26.444132] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor 00:31:37.740 [2024-04-15 18:18:26.444400] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:37.740 [2024-04-15 18:18:26.444425] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:37.740 [2024-04-15 18:18:26.444440] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:37.740 [2024-04-15 18:18:26.448024] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:37.740 - 00:31:38.003 [2024-04-15 18:18:26.457037 - 18:18:26.893653] (the identical disconnect -> connect() errno=111 -> "controller reinitialization failed" -> "Resetting controller failed." cycle on tqpair=0x1d1f010 (10.0.0.2:4420) repeats 32 more times here, roughly every 14 ms, with only the timestamps advancing)
00:31:38.003 [2024-04-15 18:18:26.902659] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:38.003 [2024-04-15 18:18:26.903213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:38.003 [2024-04-15 18:18:26.903437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:38.003 [2024-04-15 18:18:26.903466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420
00:31:38.003 [2024-04-15 18:18:26.903484] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set
00:31:38.003 [2024-04-15 18:18:26.903721] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor
00:31:38.003 [2024-04-15 18:18:26.903964] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:38.003 [2024-04-15 18:18:26.903987] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:38.003 [2024-04-15 18:18:26.904003] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:38.003 [2024-04-15 18:18:26.907563] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:38.003 [2024-04-15 18:18:26.916568] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:38.003 [2024-04-15 18:18:26.917083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:38.003 [2024-04-15 18:18:26.917343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:38.003 [2024-04-15 18:18:26.917372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420
00:31:38.003 [2024-04-15 18:18:26.917397] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set
00:31:38.003 [2024-04-15 18:18:26.917635] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor
00:31:38.003 [2024-04-15 18:18:26.917878] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:38.003 [2024-04-15 18:18:26.917902] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:38.003 [2024-04-15 18:18:26.917917] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:38.003 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3458244 Killed "${NVMF_APP[@]}" "$@"
00:31:38.003 18:18:26 -- host/bdevperf.sh@36 -- # tgt_init
00:31:38.003 18:18:26 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:31:38.003 18:18:26 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:31:38.003 18:18:26 -- common/autotest_common.sh@710 -- # xtrace_disable
00:31:38.003 18:18:26 -- common/autotest_common.sh@10 -- # set +x
00:31:38.003 [2024-04-15 18:18:26.921479] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:38.003 18:18:26 -- nvmf/common.sh@470 -- # nvmfpid=3459199
00:31:38.003 18:18:26 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:31:38.003 18:18:26 -- nvmf/common.sh@471 -- # waitforlisten 3459199
00:31:38.003 18:18:26 -- common/autotest_common.sh@817 -- # '[' -z 3459199 ']'
00:31:38.003 18:18:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:31:38.003 18:18:26 -- common/autotest_common.sh@822 -- # local max_retries=100
00:31:38.003 18:18:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:31:38.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:31:38.003 18:18:26 -- common/autotest_common.sh@826 -- # xtrace_disable
00:31:38.003 18:18:26 -- common/autotest_common.sh@10 -- # set +x
00:31:38.003 [2024-04-15 18:18:26.930500] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:38.003 [2024-04-15 18:18:26.930906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:38.003 [2024-04-15 18:18:26.931120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:38.003 [2024-04-15 18:18:26.931150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420
00:31:38.003 [2024-04-15 18:18:26.931168] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set
00:31:38.003 [2024-04-15 18:18:26.931404] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor
00:31:38.003 [2024-04-15 18:18:26.931647] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:38.003 [2024-04-15 18:18:26.931671] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:38.003 [2024-04-15 18:18:26.931686] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:38.003 [2024-04-15 18:18:26.935242] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
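`waitforlisten 3459199` above polls until the freshly started nvmf_tgt accepts connections on its RPC socket (rpc_addr=/var/tmp/spdk.sock, max_retries=100, per the traced locals). A rough Python analogue of that wait loop, offered as an illustrative sketch rather than SPDK's actual helper:

```python
import socket
import time

def wait_for_listen(path="/var/tmp/spdk.sock", retries=100, delay=0.2):
    """Poll a UNIX domain socket until something accepts connections
    on it, mirroring the shape of the waitforlisten wait loop."""
    for _ in range(retries):
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            s.connect(path)
            return True        # target is up and listening for RPCs
        except OSError:
            time.sleep(delay)  # socket missing or refusing: keep polling
        finally:
            s.close()
    return False

if __name__ == "__main__":
    print("listening" if wait_for_listen() else "timed out")
```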
00:31:38.003 [2024-04-15 18:18:26.944622] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:38.003 [2024-04-15 18:18:26.945065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:38.003 [2024-04-15 18:18:26.945253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:38.003 [2024-04-15 18:18:26.945283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420
00:31:38.003 [2024-04-15 18:18:26.945301] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set
00:31:38.003 [2024-04-15 18:18:26.945544] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor
00:31:38.003 [2024-04-15 18:18:26.945787] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:38.003 [2024-04-15 18:18:26.945811] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:38.003 [2024-04-15 18:18:26.945826] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:38.003 [2024-04-15 18:18:26.949385] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:38.263 [2024-04-15 18:18:26.958483] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:38.263 [2024-04-15 18:18:26.958922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:38.263 [2024-04-15 18:18:26.959087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:38.263 [2024-04-15 18:18:26.959118] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420
00:31:38.263 [2024-04-15 18:18:26.959136] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set
00:31:38.263 [2024-04-15 18:18:26.959374] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor
00:31:38.263 [2024-04-15 18:18:26.959626] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:38.263 [2024-04-15 18:18:26.959651] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:38.263 [2024-04-15 18:18:26.959666] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:38.263 [2024-04-15 18:18:26.963232] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:38.263 [2024-04-15 18:18:26.972455] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:38.263 [2024-04-15 18:18:26.972908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:38.263 [2024-04-15 18:18:26.973099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:38.264 [2024-04-15 18:18:26.973129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420
00:31:38.264 [2024-04-15 18:18:26.973147] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set
00:31:38.264 [2024-04-15 18:18:26.973384] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor
00:31:38.264 [2024-04-15 18:18:26.973625] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:38.264 [2024-04-15 18:18:26.973649] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:38.264 [2024-04-15 18:18:26.973664] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:38.264 [2024-04-15 18:18:26.977222] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:38.264 [2024-04-15 18:18:26.979253] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization...
00:31:38.264 [2024-04-15 18:18:26.979348] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:31:38.264 [2024-04-15 18:18:26.986444] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:38.264 [2024-04-15 18:18:26.986894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:38.264 [2024-04-15 18:18:26.987125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:38.264 [2024-04-15 18:18:26.987156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420
00:31:38.264 [2024-04-15 18:18:26.987180] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set
00:31:38.264 [2024-04-15 18:18:26.987419] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor
00:31:38.264 [2024-04-15 18:18:26.987662] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:38.264 [2024-04-15 18:18:26.987686] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:38.264 [2024-04-15 18:18:26.987702] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:38.264 [2024-04-15 18:18:26.991258] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:38.264 [2024-04-15 18:18:27.000276] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:38.264 [2024-04-15 18:18:27.000746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:38.264 [2024-04-15 18:18:27.000926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:38.264 [2024-04-15 18:18:27.000955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420
00:31:38.264 [2024-04-15 18:18:27.000973] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set
00:31:38.264 [2024-04-15 18:18:27.001221] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor
00:31:38.264 [2024-04-15 18:18:27.001463] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:38.264 [2024-04-15 18:18:27.001487] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:38.264 [2024-04-15 18:18:27.001503] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:38.264 [2024-04-15 18:18:27.005051] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:38.264 [2024-04-15 18:18:27.014280] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:38.264 [2024-04-15 18:18:27.014708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:38.264 [2024-04-15 18:18:27.014902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:38.264 [2024-04-15 18:18:27.014931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420
00:31:38.264 [2024-04-15 18:18:27.014949] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set
00:31:38.264 [2024-04-15 18:18:27.015197] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor
00:31:38.264 [2024-04-15 18:18:27.015440] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:38.264 [2024-04-15 18:18:27.015464] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:38.264 [2024-04-15 18:18:27.015480] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:38.264 [2024-04-15 18:18:27.019030] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:38.264 [2024-04-15 18:18:27.028251] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:38.264 EAL: No free 2048 kB hugepages reported on node 1
00:31:38.264 [2024-04-15 18:18:27.028706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:38.264 [2024-04-15 18:18:27.028903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:38.264 [2024-04-15 18:18:27.028932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420
00:31:38.264 [2024-04-15 18:18:27.028955] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set
00:31:38.264 [2024-04-15 18:18:27.029203] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor
00:31:38.264 [2024-04-15 18:18:27.029445] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:38.264 [2024-04-15 18:18:27.029469] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:38.264 [2024-04-15 18:18:27.029484] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:38.264 [2024-04-15 18:18:27.033255] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:38.264 [2024-04-15 18:18:27.042064] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:38.264 [2024-04-15 18:18:27.042479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:38.264 [2024-04-15 18:18:27.042683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:38.264 [2024-04-15 18:18:27.042712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420
00:31:38.264 [2024-04-15 18:18:27.042729] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set
00:31:38.264 [2024-04-15 18:18:27.042967] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor
00:31:38.264 [2024-04-15 18:18:27.043221] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:38.264 [2024-04-15 18:18:27.043246] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:38.264 [2024-04-15 18:18:27.043262] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:38.264 [2024-04-15 18:18:27.046809] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:38.264 [2024-04-15 18:18:27.056033] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:38.264 [2024-04-15 18:18:27.056482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:38.264 [2024-04-15 18:18:27.056670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:38.264 [2024-04-15 18:18:27.056699] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420
00:31:38.264 [2024-04-15 18:18:27.056716] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set
00:31:38.264 [2024-04-15 18:18:27.056953] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor
00:31:38.264 [2024-04-15 18:18:27.057205] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:38.264 [2024-04-15 18:18:27.057229] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:38.264 [2024-04-15 18:18:27.057245] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:38.264 [2024-04-15 18:18:27.060795] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:38.264 [2024-04-15 18:18:27.068770] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3
00:31:38.264 [2024-04-15 18:18:27.070027] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:38.264 [2024-04-15 18:18:27.070441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:38.264 [2024-04-15 18:18:27.070616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:38.264 [2024-04-15 18:18:27.070645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420
00:31:38.264 [2024-04-15 18:18:27.070662] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set
00:31:38.264 [2024-04-15 18:18:27.070906] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor
00:31:38.264 [2024-04-15 18:18:27.071160] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:38.264 [2024-04-15 18:18:27.071184] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:38.264 [2024-04-15 18:18:27.071199] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:38.264 [2024-04-15 18:18:27.074758] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
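The restarted nvmf app was launched with core mask 0xE (visible in the EAL parameters above), which is why spdk_app_start reports three available cores: bit i of the mask selects core i, so 0xE maps to cores 1-3. A quick illustrative shell check of that mapping (an aside, not part of the test run):

    mask=0xE
    for i in 0 1 2 3; do (( (mask >> i) & 1 )) && echo "core $i selected"; done
    # => core 1 selected, core 2 selected, core 3 selected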
00:31:38.264 [2024-04-15 18:18:27.084013] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:38.264 [2024-04-15 18:18:27.084557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:38.264 [2024-04-15 18:18:27.084740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:38.264 [2024-04-15 18:18:27.084769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420
00:31:38.264 [2024-04-15 18:18:27.084792] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set
00:31:38.264 [2024-04-15 18:18:27.085040] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor
00:31:38.264 [2024-04-15 18:18:27.085299] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:38.264 [2024-04-15 18:18:27.085324] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:38.264 [2024-04-15 18:18:27.085345] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:38.264 [2024-04-15 18:18:27.088893] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:38.264 [2024-04-15 18:18:27.097912] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:38.264 [2024-04-15 18:18:27.098332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:38.264 [2024-04-15 18:18:27.098506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:38.264 [2024-04-15 18:18:27.098535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420
00:31:38.265 [2024-04-15 18:18:27.098553] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set
00:31:38.265 [2024-04-15 18:18:27.098791] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor
00:31:38.265 [2024-04-15 18:18:27.099034] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:38.265 [2024-04-15 18:18:27.099066] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:38.265 [2024-04-15 18:18:27.099086] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:38.265 [2024-04-15 18:18:27.102634] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:38.265 [2024-04-15 18:18:27.111852] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:38.265 [2024-04-15 18:18:27.112276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:38.265 [2024-04-15 18:18:27.112483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:38.265 [2024-04-15 18:18:27.112512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420
00:31:38.265 [2024-04-15 18:18:27.112530] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set
00:31:38.265 [2024-04-15 18:18:27.112768] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor
00:31:38.265 [2024-04-15 18:18:27.113023] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:38.265 [2024-04-15 18:18:27.113047] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:38.265 [2024-04-15 18:18:27.113074] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:38.265 [2024-04-15 18:18:27.116635] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:38.265 [2024-04-15 18:18:27.125887] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:38.265 [2024-04-15 18:18:27.126398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:38.265 [2024-04-15 18:18:27.126605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:38.265 [2024-04-15 18:18:27.126635] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420
00:31:38.265 [2024-04-15 18:18:27.126657] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set
00:31:38.265 [2024-04-15 18:18:27.126905] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor
00:31:38.265 [2024-04-15 18:18:27.127165] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:38.265 [2024-04-15 18:18:27.127191] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:38.265 [2024-04-15 18:18:27.127210] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:38.265 [2024-04-15 18:18:27.130766] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:38.265 [2024-04-15 18:18:27.139781] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:38.265 [2024-04-15 18:18:27.140214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:38.265 [2024-04-15 18:18:27.140416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:38.265 [2024-04-15 18:18:27.140445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420
00:31:38.265 [2024-04-15 18:18:27.140464] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set
00:31:38.265 [2024-04-15 18:18:27.140702] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor
00:31:38.265 [2024-04-15 18:18:27.140945] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:38.265 [2024-04-15 18:18:27.140969] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:38.265 [2024-04-15 18:18:27.140985] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:38.265 [2024-04-15 18:18:27.144546] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:38.265 [2024-04-15 18:18:27.153771] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:38.265 [2024-04-15 18:18:27.154201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:38.265 [2024-04-15 18:18:27.154368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:38.265 [2024-04-15 18:18:27.154397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420
00:31:38.265 [2024-04-15 18:18:27.154415] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set
00:31:38.265 [2024-04-15 18:18:27.154654] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor
00:31:38.265 [2024-04-15 18:18:27.154896] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:38.265 [2024-04-15 18:18:27.154936] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:38.265 [2024-04-15 18:18:27.154953] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:38.265 [2024-04-15 18:18:27.158513] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:38.265 [2024-04-15 18:18:27.161551] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:31:38.265 [2024-04-15 18:18:27.161595] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:31:38.265 [2024-04-15 18:18:27.161613] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:31:38.265 [2024-04-15 18:18:27.161628] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running.
00:31:38.265 [2024-04-15 18:18:27.161640] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:31:38.265 [2024-04-15 18:18:27.161711] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:31:38.265 [2024-04-15 18:18:27.161904] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:31:38.265 [2024-04-15 18:18:27.161908] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:31:38.265 [2024-04-15 18:18:27.167736] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:38.265 [2024-04-15 18:18:27.168182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:38.265 [2024-04-15 18:18:27.168381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:38.265 [2024-04-15 18:18:27.168410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420
00:31:38.265 [2024-04-15 18:18:27.168431] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set
00:31:38.265 [2024-04-15 18:18:27.168676] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor
00:31:38.265 [2024-04-15 18:18:27.168922] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:38.265 [2024-04-15 18:18:27.168947] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:38.265 [2024-04-15 18:18:27.168966] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:38.265 [2024-04-15 18:18:27.172540] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:38.265 [2024-04-15 18:18:27.181783] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:38.265 [2024-04-15 18:18:27.182299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:38.265 [2024-04-15 18:18:27.182462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:38.265 [2024-04-15 18:18:27.182492] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420
00:31:38.265 [2024-04-15 18:18:27.182514] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set
00:31:38.265 [2024-04-15 18:18:27.182762] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor
00:31:38.265 [2024-04-15 18:18:27.183012] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:38.265 [2024-04-15 18:18:27.183037] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:38.265 [2024-04-15 18:18:27.183056] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:38.265 [2024-04-15 18:18:27.186624] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:38.265 [2024-04-15 18:18:27.195836] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:38.265 [2024-04-15 18:18:27.196396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:38.265 [2024-04-15 18:18:27.196622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:38.265 [2024-04-15 18:18:27.196652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420
00:31:38.265 [2024-04-15 18:18:27.196675] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set
00:31:38.265 [2024-04-15 18:18:27.196923] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor
00:31:38.265 [2024-04-15 18:18:27.197182] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:38.265 [2024-04-15 18:18:27.197208] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:38.265 [2024-04-15 18:18:27.197227] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:38.265 [2024-04-15 18:18:27.200784] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:38.265 [2024-04-15 18:18:27.209827] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:38.265 [2024-04-15 18:18:27.210402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:38.265 [2024-04-15 18:18:27.210640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:38.265 [2024-04-15 18:18:27.210670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420
00:31:38.265 [2024-04-15 18:18:27.210693] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set
00:31:38.265 [2024-04-15 18:18:27.210948] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor
00:31:38.265 [2024-04-15 18:18:27.211210] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:38.265 [2024-04-15 18:18:27.211235] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:38.265 [2024-04-15 18:18:27.211255] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:38.265 [2024-04-15 18:18:27.214839] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:38.525 [2024-04-15 18:18:27.223709] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:38.525 [2024-04-15 18:18:27.224204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:38.525 [2024-04-15 18:18:27.224409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:38.525 [2024-04-15 18:18:27.224438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420
00:31:38.525 [2024-04-15 18:18:27.224459] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set
00:31:38.525 [2024-04-15 18:18:27.224703] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor
00:31:38.525 [2024-04-15 18:18:27.224950] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:38.525 [2024-04-15 18:18:27.224974] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:38.525 [2024-04-15 18:18:27.224994] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:38.525 [2024-04-15 18:18:27.228561] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:38.525 [2024-04-15 18:18:27.237595] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:38.525 [2024-04-15 18:18:27.238149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:38.525 [2024-04-15 18:18:27.238388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:38.525 [2024-04-15 18:18:27.238417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420
00:31:38.525 [2024-04-15 18:18:27.238440] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set
00:31:38.525 [2024-04-15 18:18:27.238688] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor
00:31:38.525 [2024-04-15 18:18:27.238937] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:38.525 [2024-04-15 18:18:27.238962] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:38.525 [2024-04-15 18:18:27.238981] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:38.525 [2024-04-15 18:18:27.242544] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:38.525 [2024-04-15 18:18:27.251563] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:38.525 [2024-04-15 18:18:27.251990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:38.525 [2024-04-15 18:18:27.252180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:38.525 [2024-04-15 18:18:27.252212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420
00:31:38.525 [2024-04-15 18:18:27.252231] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set
00:31:38.525 [2024-04-15 18:18:27.252470] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor
00:31:38.525 [2024-04-15 18:18:27.252712] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:38.525 [2024-04-15 18:18:27.252736] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:38.525 [2024-04-15 18:18:27.252752] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:38.525 [2024-04-15 18:18:27.256312] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:38.525 [2024-04-15 18:18:27.265534] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:38.525 [2024-04-15 18:18:27.265956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:38.525 [2024-04-15 18:18:27.266173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:38.525 [2024-04-15 18:18:27.266204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420
00:31:38.525 [2024-04-15 18:18:27.266222] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set
00:31:38.525 [2024-04-15 18:18:27.266461] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor
00:31:38.525 [2024-04-15 18:18:27.266703] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:38.525 [2024-04-15 18:18:27.266727] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:38.525 [2024-04-15 18:18:27.266743] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:38.525 [2024-04-15 18:18:27.270308] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:38.525 18:18:27 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:31:38.525 18:18:27 -- common/autotest_common.sh@850 -- # return 0
00:31:38.525 18:18:27 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt
00:31:38.525 18:18:27 -- common/autotest_common.sh@716 -- # xtrace_disable
00:31:38.525 18:18:27 -- common/autotest_common.sh@10 -- # set +x
00:31:38.525 [2024-04-15 18:18:27.279533] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:38.525 [2024-04-15 18:18:27.279983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:38.525 [2024-04-15 18:18:27.280195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:38.525 [2024-04-15 18:18:27.280225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420
00:31:38.525 [2024-04-15 18:18:27.280243] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set
00:31:38.526 [2024-04-15 18:18:27.280480] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor
00:31:38.526 [2024-04-15 18:18:27.280723] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:38.526 [2024-04-15 18:18:27.280747] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:38.526 [2024-04-15 18:18:27.280763] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:38.526 [2024-04-15 18:18:27.284326] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:38.526 [2024-04-15 18:18:27.293554] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:38.526 [2024-04-15 18:18:27.293996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:38.526 [2024-04-15 18:18:27.294171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:38.526 [2024-04-15 18:18:27.294201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420
00:31:38.526 [2024-04-15 18:18:27.294220] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set
00:31:38.526 [2024-04-15 18:18:27.294458] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor
00:31:38.526 [2024-04-15 18:18:27.294700] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:38.526 [2024-04-15 18:18:27.294724] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:38.526 [2024-04-15 18:18:27.294739] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:38.526 [2024-04-15 18:18:27.298298] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:38.526 18:18:27 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:31:38.526 18:18:27 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:31:38.526 18:18:27 -- common/autotest_common.sh@549 -- # xtrace_disable
00:31:38.526 18:18:27 -- common/autotest_common.sh@10 -- # set +x
00:31:38.526 [2024-04-15 18:18:27.303228] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:31:38.526 [2024-04-15 18:18:27.307563] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:38.526 [2024-04-15 18:18:27.308019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:38.526 [2024-04-15 18:18:27.308204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:38.526 [2024-04-15 18:18:27.308233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420
00:31:38.526 [2024-04-15 18:18:27.308251] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set
00:31:38.526 [2024-04-15 18:18:27.308489] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor
00:31:38.526 [2024-04-15 18:18:27.308731] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:38.526 [2024-04-15 18:18:27.308755] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:38.526 [2024-04-15 18:18:27.308770] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:38.526 18:18:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:31:38.526 18:18:27 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:31:38.526 18:18:27 -- common/autotest_common.sh@549 -- # xtrace_disable
00:31:38.526 18:18:27 -- common/autotest_common.sh@10 -- # set +x
00:31:38.526 [2024-04-15 18:18:27.312336] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:38.526 [2024-04-15 18:18:27.321554] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:38.526 [2024-04-15 18:18:27.322006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:38.526 [2024-04-15 18:18:27.322151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:38.526 [2024-04-15 18:18:27.322181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420
00:31:38.526 [2024-04-15 18:18:27.322198] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set
00:31:38.526 [2024-04-15 18:18:27.322435] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor
00:31:38.526 [2024-04-15 18:18:27.322677] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:38.526 [2024-04-15 18:18:27.322700] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:38.526 [2024-04-15 18:18:27.322716] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:38.526 [2024-04-15 18:18:27.326271] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:38.526 [2024-04-15 18:18:27.335496] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:38.526 [2024-04-15 18:18:27.335939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:38.526 [2024-04-15 18:18:27.336149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:38.526 [2024-04-15 18:18:27.336179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420
00:31:38.526 [2024-04-15 18:18:27.336199] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set
00:31:38.526 [2024-04-15 18:18:27.336439] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor
00:31:38.526 [2024-04-15 18:18:27.336683] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:38.526 [2024-04-15 18:18:27.336707] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:38.526 [2024-04-15 18:18:27.336724] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:38.526 [2024-04-15 18:18:27.340286] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:38.526 [2024-04-15 18:18:27.349528] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:38.526 [2024-04-15 18:18:27.350073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:38.526 [2024-04-15 18:18:27.350271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:38.526 [2024-04-15 18:18:27.350301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420
00:31:38.526 [2024-04-15 18:18:27.350323] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set
00:31:38.526 [2024-04-15 18:18:27.350571] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor
00:31:38.526 Malloc0
00:31:38.526 [2024-04-15 18:18:27.350819] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
[2024-04-15 18:18:27.350844] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
[2024-04-15 18:18:27.350871] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:38.526 18:18:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:31:38.526 18:18:27 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:31:38.526 18:18:27 -- common/autotest_common.sh@549 -- # xtrace_disable
00:31:38.526 18:18:27 -- common/autotest_common.sh@10 -- # set +x
00:31:38.526 [2024-04-15 18:18:27.354429] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:38.526 18:18:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:31:38.526 18:18:27 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:31:38.526 18:18:27 -- common/autotest_common.sh@549 -- # xtrace_disable
00:31:38.526 18:18:27 -- common/autotest_common.sh@10 -- # set +x
00:31:38.526 [2024-04-15 18:18:27.363436] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:38.526 [2024-04-15 18:18:27.363843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:38.526 [2024-04-15 18:18:27.364045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:38.526 [2024-04-15 18:18:27.364083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f010 with addr=10.0.0.2, port=4420
00:31:38.526 [2024-04-15 18:18:27.364102] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f010 is same with the state(5) to be set
00:31:38.526 [2024-04-15 18:18:27.364339] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f010 (9): Bad file descriptor
00:31:38.526 [2024-04-15 18:18:27.364580] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:38.526 [2024-04-15 18:18:27.364605] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:38.526 [2024-04-15 18:18:27.364620] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:38.526 18:18:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:31:38.526 18:18:27 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:31:38.526 18:18:27 -- common/autotest_common.sh@549 -- # xtrace_disable
00:31:38.526 18:18:27 -- common/autotest_common.sh@10 -- # set +x
00:31:38.526 [2024-04-15 18:18:27.368175] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:38.526 [2024-04-15 18:18:27.370590] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:31:38.526 18:18:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:31:38.526 18:18:27 -- host/bdevperf.sh@38 -- # wait 3458464
00:31:38.526 [2024-04-15 18:18:27.377407] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:38.526 [2024-04-15 18:18:27.448822] bdev_nvme.c:2050:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
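The rpc_cmd calls traced above are what bring the target back: the TCP transport, the Malloc0 bdev, the cnode1 subsystem, its namespace, and finally the listener are recreated, after which the host's next reset succeeds. The same configuration can be driven by hand against a running nvmf_tgt with SPDK's rpc.py client; a minimal sketch, assuming the default /var/tmp/spdk.sock RPC socket and the SPDK repo root as working directory:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420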
00:31:48.529
00:31:48.529 Latency(us)
00:31:48.529 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:48.529 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:31:48.529 Verification LBA range: start 0x0 length 0x4000
00:31:48.529 Nvme1n1 : 15.01 6592.17 25.75 8467.76 0.00 8474.84 922.36 21165.70
00:31:48.529 ===================================================================================================================
00:31:48.529 Total : 6592.17 25.75 8467.76 0.00 8474.84 922.36 21165.70
00:31:48.529 18:18:36 -- host/bdevperf.sh@39 -- # sync
00:31:48.529 18:18:36 -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:31:48.529 18:18:36 -- common/autotest_common.sh@549 -- # xtrace_disable
00:31:48.529 18:18:36 -- common/autotest_common.sh@10 -- # set +x
00:31:48.529 18:18:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:31:48.529 18:18:36 -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:31:48.529 18:18:36 -- host/bdevperf.sh@44 -- # nvmftestfini
00:31:48.529 18:18:36 -- nvmf/common.sh@477 -- # nvmfcleanup
00:31:48.529 18:18:36 -- nvmf/common.sh@117 -- # sync
00:31:48.529 18:18:36 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:31:48.529 18:18:36 -- nvmf/common.sh@120 -- # set +e
00:31:48.529 18:18:36 -- nvmf/common.sh@121 -- # for i in {1..20}
00:31:48.529 18:18:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:31:48.529 18:18:36 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:31:48.529 18:18:36 -- nvmf/common.sh@124 -- # set -e
00:31:48.529 18:18:36 -- nvmf/common.sh@125 -- # return 0
00:31:48.529 18:18:36 -- nvmf/common.sh@478 -- # '[' -n 3459199 ']'
00:31:48.529 18:18:36 -- nvmf/common.sh@479 -- # killprocess 3459199
00:31:48.529 18:18:36 -- common/autotest_common.sh@936 -- # '[' -z 3459199 ']'
00:31:48.529 18:18:36 -- common/autotest_common.sh@940 -- # kill -0 3459199
00:31:48.529 18:18:36 -- common/autotest_common.sh@941 -- # uname
00:31:48.529 18:18:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:31:48.529 18:18:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3459199
00:31:48.529 18:18:36 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:31:48.529 18:18:36 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:31:48.529 18:18:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3459199'
killing process with pid 3459199
00:31:48.529 18:18:36 -- common/autotest_common.sh@955 -- # kill 3459199
00:31:48.529 18:18:36 -- common/autotest_common.sh@960 -- # wait 3459199
00:31:48.529 18:18:36 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:31:48.529 18:18:36 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
00:31:48.529 18:18:36 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
00:31:48.529 18:18:36 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:31:48.529 18:18:36 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:31:48.529 18:18:36 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:31:48.529 18:18:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:31:48.529 18:18:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:31:50.438 18:18:38 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:31:50.438
00:31:50.438 real 0m22.607s
00:31:50.438 user 0m59.540s
00:31:50.438 sys 0m4.792s
00:31:50.438 18:18:38 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:31:50.438 18:18:38 -- common/autotest_common.sh@10 -- # set +x
00:31:50.438 ************************************
00:31:50.438 END TEST nvmf_bdevperf
00:31:50.438 ************************************
00:31:50.439 18:18:39 -- nvmf/nvmf.sh@120 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp
00:31:50.439 18:18:39 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:31:50.439 18:18:39 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:31:50.439 18:18:39 -- common/autotest_common.sh@10 -- # set +x
00:31:50.439 ************************************
00:31:50.439 START TEST nvmf_target_disconnect
00:31:50.439 ************************************
00:31:50.439 18:18:39 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp
00:31:50.439 * Looking for test storage...
00:31:50.439 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:31:50.439 18:18:39 -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:31:50.439 18:18:39 -- nvmf/common.sh@7 -- # uname -s
00:31:50.439 18:18:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:31:50.439 18:18:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:31:50.439 18:18:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:31:50.439 18:18:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:31:50.439 18:18:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:31:50.439 18:18:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:31:50.439 18:18:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:31:50.439 18:18:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:31:50.439 18:18:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:31:50.439 18:18:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:31:50.439 18:18:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
00:31:50.439 18:18:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02
00:31:50.439 18:18:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:31:50.439 18:18:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:31:50.439 18:18:39 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:31:50.439 18:18:39 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:31:50.439 18:18:39 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:31:50.439 18:18:39 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]]
00:31:50.439 18:18:39 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:31:50.439 18:18:39 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:31:50.439 18:18:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:50.439 18:18:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:50.439 18:18:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:50.439 18:18:39 -- paths/export.sh@5 -- # export PATH
00:31:50.439 18:18:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:50.439 18:18:39 -- nvmf/common.sh@47 -- # : 0
00:31:50.439 18:18:39 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:31:50.439 18:18:39 -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:31:50.439 18:18:39 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:31:50.439 18:18:39 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:31:50.439 18:18:39 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:31:50.439 18:18:39 -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:31:50.439 18:18:39 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:31:50.439 18:18:39 -- nvmf/common.sh@51 -- # have_pci_nics=0
00:31:50.439 18:18:39 -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme
00:31:50.439 18:18:39 -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64
00:31:50.439 18:18:39 -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512
00:31:50.439 18:18:39 -- host/target_disconnect.sh@77 -- # nvmftestinit
00:31:50.439 18:18:39 -- nvmf/common.sh@430 -- # '[' -z tcp ']'
00:31:50.439 18:18:39 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:31:50.439 18:18:39 -- nvmf/common.sh@437 -- # prepare_net_devs
00:31:50.439 18:18:39 -- nvmf/common.sh@399 -- # local -g is_hw=no
00:31:50.439 18:18:39 -- nvmf/common.sh@401 -- # remove_spdk_ns
00:31:50.439 18:18:39 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:31:50.439 18:18:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:31:50.439 18:18:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:31:50.439 18:18:39 -- nvmf/common.sh@403 -- # [[ phy != virt ]]
00:31:50.439 18:18:39 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs
00:31:50.439 18:18:39 -- nvmf/common.sh@285 -- # xtrace_disable
00:31:50.439 18:18:39 -- common/autotest_common.sh@10 -- # set +x
00:31:52.343 18:18:41 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci
00:31:52.343 18:18:41 -- nvmf/common.sh@291 -- # pci_devs=()
00:31:52.343 18:18:41 -- nvmf/common.sh@291 -- # local -a pci_devs
00:31:52.343 18:18:41 -- nvmf/common.sh@292 -- # pci_net_devs=()
00:31:52.343 18:18:41 -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:31:52.343 18:18:41 -- nvmf/common.sh@293 -- # pci_drivers=()
00:31:52.343 18:18:41 -- nvmf/common.sh@293 -- # local -A pci_drivers
00:31:52.343 18:18:41 -- nvmf/common.sh@295 -- # net_devs=()
00:31:52.343 18:18:41 -- nvmf/common.sh@295 -- # local -ga net_devs
00:31:52.343 18:18:41 -- nvmf/common.sh@296 -- # e810=()
00:31:52.343 18:18:41 -- nvmf/common.sh@296 -- # local -ga e810
00:31:52.343 18:18:41 -- nvmf/common.sh@297 -- # x722=()
00:31:52.343 18:18:41 -- nvmf/common.sh@297 -- # local -ga x722
00:31:52.343 18:18:41 -- nvmf/common.sh@298 -- # mlx=()
00:31:52.343 18:18:41 -- nvmf/common.sh@298 -- # local -ga mlx
00:31:52.343 18:18:41 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:31:52.343 18:18:41 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:31:52.344 18:18:41 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:31:52.344 18:18:41 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:31:52.344 18:18:41 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:31:52.344 18:18:41 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:31:52.344 18:18:41 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:31:52.344 18:18:41 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:31:52.344 18:18:41 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:31:52.344 18:18:41 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:31:52.344 18:18:41 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:31:52.344 18:18:41 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:31:52.344 18:18:41 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:31:52.344 18:18:41 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:31:52.344 18:18:41 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:31:52.344 18:18:41 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:31:52.344 18:18:41 -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:31:52.344 18:18:41 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:31:52.344 18:18:41 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)'
Found 0000:84:00.0 (0x8086 - 0x159b)
00:31:52.344 18:18:41 -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:31:52.344 18:18:41 -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:31:52.344 18:18:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:31:52.344 18:18:41 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:31:52.344 18:18:41 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:31:52.344 18:18:41 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:31:52.344 18:18:41 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)'
Found 0000:84:00.1 (0x8086 - 0x159b)
00:31:52.344 18:18:41 -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:31:52.344 18:18:41 -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:31:52.344 18:18:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:31:52.344 18:18:41 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:31:52.344 18:18:41 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:31:52.344 18:18:41 -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:31:52.344 18:18:41 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:31:52.344 18:18:41 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:31:52.344 18:18:41 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:31:52.344 18:18:41 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:31:52.344 18:18:41 -- nvmf/common.sh@384 -- # (( 1 == 0 ))
00:31:52.344 18:18:41 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:31:52.344 18:18:41 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0'
Found net devices under 0000:84:00.0: cvl_0_0
00:31:52.344 18:18:41 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}")
00:31:52.344 18:18:41 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:31:52.344 18:18:41 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:31:52.344 18:18:41 -- nvmf/common.sh@384 -- # (( 1 == 0 ))
00:31:52.344 18:18:41 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:31:52.344 18:18:41 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1'
Found net devices under 0000:84:00.1: cvl_0_1
00:31:52.344 18:18:41 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}")
00:31:52.344 18:18:41 -- nvmf/common.sh@393 -- # (( 2 == 0 ))
00:31:52.344 18:18:41 -- nvmf/common.sh@403 -- # is_hw=yes
00:31:52.344 18:18:41 -- nvmf/common.sh@405 -- # [[ yes == yes ]]
00:31:52.344 18:18:41 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]]
00:31:52.344 18:18:41 -- nvmf/common.sh@407 -- # nvmf_tcp_init
00:31:52.344 18:18:41 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:31:52.344 18:18:41 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:31:52.344 18:18:41 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:31:52.344 18:18:41 -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:31:52.344 18:18:41 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:31:52.344 18:18:41 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:31:52.344 18:18:41 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:31:52.344 18:18:41 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:31:52.344 18:18:41 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:31:52.344 18:18:41 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:31:52.603 18:18:41 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:31:52.603 18:18:41 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:31:52.603 18:18:41 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:31:52.603 18:18:41 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:31:52.603 18:18:41 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:31:52.603 18:18:41 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:31:52.603 18:18:41 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:31:52.603 18:18:41 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:31:52.603 18:18:41 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:31:52.603 18:18:41 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:31:52.603 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:31:52.603 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.147 ms 00:31:52.603 00:31:52.603 --- 10.0.0.2 ping statistics --- 00:31:52.603 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:52.603 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:31:52.603 18:18:41 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:52.603 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:52.603 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:31:52.603 00:31:52.603 --- 10.0.0.1 ping statistics --- 00:31:52.603 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:52.603 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:31:52.603 18:18:41 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:52.603 18:18:41 -- nvmf/common.sh@411 -- # return 0 00:31:52.603 18:18:41 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:31:52.603 18:18:41 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:52.604 18:18:41 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:31:52.604 18:18:41 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:31:52.604 18:18:41 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:52.604 18:18:41 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:31:52.604 18:18:41 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:31:52.604 18:18:41 -- host/target_disconnect.sh@78 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:31:52.604 18:18:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:31:52.604 18:18:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:52.604 18:18:41 -- common/autotest_common.sh@10 -- # set +x 00:31:52.862 ************************************ 00:31:52.862 START TEST nvmf_target_disconnect_tc1 00:31:52.862 ************************************ 00:31:52.862 18:18:41 -- common/autotest_common.sh@1111 -- # nvmf_target_disconnect_tc1 00:31:52.862 18:18:41 -- host/target_disconnect.sh@32 -- # set +e 00:31:52.862 18:18:41 -- host/target_disconnect.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:52.862 EAL: No free 2048 kB hugepages reported on node 1 00:31:52.862 [2024-04-15 18:18:41.681939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.862 [2024-04-15 18:18:41.682255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.862 [2024-04-15 18:18:41.682304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d4cdd0 with addr=10.0.0.2, port=4420 00:31:52.862 [2024-04-15 18:18:41.682344] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:52.862 [2024-04-15 18:18:41.682368] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:52.862 [2024-04-15 18:18:41.682384] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:31:52.862 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:31:52.862 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:31:52.862 Initializing NVMe Controllers 00:31:52.862 18:18:41 -- host/target_disconnect.sh@33 -- # trap - ERR 00:31:52.862 18:18:41 -- host/target_disconnect.sh@33 -- # print_backtrace 00:31:52.862 18:18:41 -- common/autotest_common.sh@1139 -- # [[ hxBET =~ e ]] 00:31:52.862 18:18:41 -- common/autotest_common.sh@1139 -- # return 0 00:31:52.862 
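For readers replaying this fixture by hand: the setup above moved one port of the dual-port ice NIC (cvl_0_0, 10.0.0.2) into the cvl_0_0_ns_spdk namespace and left its sibling (cvl_0_1, 10.0.0.1) in the default namespace, so target and initiator exchange NVMe/TCP over real hardware on a single host. A minimal standalone replay, using the same device and namespace names as this log (assumes root and both ports bound to the kernel ice driver):

# wire one ice port into a private namespace; names match the log above
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side stays in default ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
ping -c 1 10.0.0.2                                             # should answer, as it does above

The connect() failures just above are deliberate: tc1 runs the reconnect example before any target listens on 10.0.0.2:4420, so errno = 111 (ECONNREFUSED) and the resulting spdk_nvme_probe() error are the pass condition for this test.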
18:18:41 -- host/target_disconnect.sh@37 -- # '[' 1 '!=' 1 ']' 00:31:52.862 18:18:41 -- host/target_disconnect.sh@41 -- # set -e 00:31:52.862 00:31:52.862 real 0m0.107s 00:31:52.862 user 0m0.039s 00:31:52.862 sys 0m0.064s 00:31:52.862 18:18:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:31:52.862 18:18:41 -- common/autotest_common.sh@10 -- # set +x 00:31:52.862 ************************************ 00:31:52.862 END TEST nvmf_target_disconnect_tc1 00:31:52.862 ************************************ 00:31:52.862 18:18:41 -- host/target_disconnect.sh@79 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:31:52.862 18:18:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:31:52.862 18:18:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:52.862 18:18:41 -- common/autotest_common.sh@10 -- # set +x 00:31:53.120 ************************************ 00:31:53.120 START TEST nvmf_target_disconnect_tc2 00:31:53.120 ************************************ 00:31:53.120 18:18:41 -- common/autotest_common.sh@1111 -- # nvmf_target_disconnect_tc2 00:31:53.120 18:18:41 -- host/target_disconnect.sh@45 -- # disconnect_init 10.0.0.2 00:31:53.120 18:18:41 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:31:53.120 18:18:41 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:31:53.120 18:18:41 -- common/autotest_common.sh@710 -- # xtrace_disable 00:31:53.120 18:18:41 -- common/autotest_common.sh@10 -- # set +x 00:31:53.120 18:18:41 -- nvmf/common.sh@470 -- # nvmfpid=3462365 00:31:53.120 18:18:41 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:31:53.120 18:18:41 -- nvmf/common.sh@471 -- # waitforlisten 3462365 00:31:53.120 18:18:41 -- common/autotest_common.sh@817 -- # '[' -z 3462365 ']' 00:31:53.120 18:18:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:53.120 18:18:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:31:53.120 18:18:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:53.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:53.120 18:18:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:31:53.120 18:18:41 -- common/autotest_common.sh@10 -- # set +x 00:31:53.120 [2024-04-15 18:18:41.886739] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:31:53.121 [2024-04-15 18:18:41.886828] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:53.121 EAL: No free 2048 kB hugepages reported on node 1 00:31:53.121 [2024-04-15 18:18:41.963857] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:53.121 [2024-04-15 18:18:42.064066] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:53.121 [2024-04-15 18:18:42.064133] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:53.121 [2024-04-15 18:18:42.064150] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:53.121 [2024-04-15 18:18:42.064163] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:31:53.121 [2024-04-15 18:18:42.064176] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:53.121 [2024-04-15 18:18:42.064260] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:31:53.121 [2024-04-15 18:18:42.064317] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:31:53.121 [2024-04-15 18:18:42.064369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:31:53.121 [2024-04-15 18:18:42.064372] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:31:53.686 18:18:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:31:53.686 18:18:42 -- common/autotest_common.sh@850 -- # return 0 00:31:53.686 18:18:42 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:31:53.686 18:18:42 -- common/autotest_common.sh@716 -- # xtrace_disable 00:31:53.686 18:18:42 -- common/autotest_common.sh@10 -- # set +x 00:31:53.686 18:18:42 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:53.686 18:18:42 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:53.686 18:18:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:53.686 18:18:42 -- common/autotest_common.sh@10 -- # set +x 00:31:53.686 Malloc0 00:31:53.686 18:18:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:53.686 18:18:42 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:31:53.686 18:18:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:53.686 18:18:42 -- common/autotest_common.sh@10 -- # set +x 00:31:53.686 [2024-04-15 18:18:42.465437] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:53.686 18:18:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:53.686 18:18:42 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:53.686 18:18:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:53.686 18:18:42 -- common/autotest_common.sh@10 -- # set +x 00:31:53.686 18:18:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:53.686 18:18:42 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:53.686 18:18:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:53.686 18:18:42 -- common/autotest_common.sh@10 -- # set +x 00:31:53.686 18:18:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:53.686 18:18:42 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:53.686 18:18:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:53.686 18:18:42 -- common/autotest_common.sh@10 -- # set +x 00:31:53.686 [2024-04-15 18:18:42.493694] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:53.686 18:18:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:53.686 18:18:42 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:53.686 18:18:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:53.686 18:18:42 -- common/autotest_common.sh@10 -- # set +x 00:31:53.686 18:18:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:53.686 18:18:42 -- host/target_disconnect.sh@50 -- # reconnectpid=3462407 00:31:53.686 18:18:42 -- host/target_disconnect.sh@52 -- # sleep 2 00:31:53.686 18:18:42 -- host/target_disconnect.sh@48 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:53.686 EAL: No free 2048 kB hugepages reported on node 1 00:31:55.589 18:18:44 -- host/target_disconnect.sh@53 -- # kill -9 3462365 00:31:55.589 18:18:44 -- host/target_disconnect.sh@55 -- # sleep 2 00:31:55.589 Read completed with error (sct=0, sc=8) 00:31:55.589 starting I/O failed 00:31:55.589 Read completed with error (sct=0, sc=8) 00:31:55.589 starting I/O failed 00:31:55.589 Read completed with error (sct=0, sc=8) 00:31:55.589 starting I/O failed 00:31:55.589 Read completed with error (sct=0, sc=8) 00:31:55.589 starting I/O failed 00:31:55.589 Read completed with error (sct=0, sc=8) 00:31:55.589 starting I/O failed 00:31:55.589 Read completed with error (sct=0, sc=8) 00:31:55.589 starting I/O failed 00:31:55.589 Read completed with error (sct=0, sc=8) 00:31:55.589 starting I/O failed 00:31:55.589 Read completed with error (sct=0, sc=8) 00:31:55.589 starting I/O failed 00:31:55.589 Read completed with error (sct=0, sc=8) 00:31:55.589 starting I/O failed 00:31:55.589 Read completed with error (sct=0, sc=8) 00:31:55.589 starting I/O failed 00:31:55.589 Read completed with error (sct=0, sc=8) 00:31:55.589 starting I/O failed 00:31:55.589 Write completed with error (sct=0, sc=8) 00:31:55.589 starting I/O failed 00:31:55.589 Read completed with error (sct=0, sc=8) 00:31:55.589 starting I/O failed 00:31:55.589 Read completed with error (sct=0, sc=8) 00:31:55.589 starting I/O failed 00:31:55.589 Read completed with error (sct=0, sc=8) 00:31:55.589 starting I/O failed 00:31:55.589 Read completed with error (sct=0, sc=8) 00:31:55.589 starting I/O failed 00:31:55.589 Write completed with error (sct=0, sc=8) 00:31:55.589 starting I/O failed 00:31:55.589 Write completed with error (sct=0, sc=8) 00:31:55.589 starting I/O failed 00:31:55.589 Write completed with error (sct=0, sc=8) 00:31:55.589 starting I/O failed 00:31:55.589 Write completed with error (sct=0, sc=8) 00:31:55.589 starting I/O failed 00:31:55.589 Read completed with error (sct=0, sc=8) 00:31:55.589 starting I/O failed 00:31:55.589 Write completed with error (sct=0, sc=8) 00:31:55.589 starting I/O failed 00:31:55.589 Read completed with error (sct=0, sc=8) 00:31:55.589 starting I/O failed 00:31:55.589 Write completed with error (sct=0, sc=8) 00:31:55.589 starting I/O failed 00:31:55.589 Read completed with error (sct=0, sc=8) 00:31:55.589 starting I/O failed 00:31:55.589 Write completed with error (sct=0, sc=8) 00:31:55.589 starting I/O failed 00:31:55.589 Read completed with error (sct=0, sc=8) 00:31:55.589 starting I/O failed 00:31:55.589 Read completed with error (sct=0, sc=8) 00:31:55.589 starting I/O failed 00:31:55.589 Write completed with error (sct=0, sc=8) 00:31:55.589 starting I/O failed 00:31:55.589 Write completed with error (sct=0, sc=8) 00:31:55.589 starting I/O failed 00:31:55.589 Read completed with error (sct=0, sc=8) 00:31:55.589 starting I/O failed 00:31:55.589 Read completed with error (sct=0, sc=8) 00:31:55.589 starting I/O failed 00:31:55.589 Read completed with error (sct=0, sc=8) 00:31:55.589 starting I/O failed 00:31:55.589 Read completed with error (sct=0, sc=8) 00:31:55.589 starting I/O failed 00:31:55.589 [2024-04-15 18:18:44.519469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:55.589 Read completed with error (sct=0, sc=8) 00:31:55.589 
starting I/O failed 00:31:55.589 Read completed with error (sct=0, sc=8) 00:31:55.589 starting I/O failed 00:31:55.589 Read completed with error (sct=0, sc=8) 00:31:55.589 starting I/O failed 00:31:55.589 Read completed with error (sct=0, sc=8) 00:31:55.589 starting I/O failed 00:31:55.589 Read completed with error (sct=0, sc=8) 00:31:55.589 starting I/O failed 00:31:55.589 Read completed with error (sct=0, sc=8) 00:31:55.589 starting I/O failed 00:31:55.589 Read completed with error (sct=0, sc=8) 00:31:55.589 starting I/O failed 00:31:55.589 Read completed with error (sct=0, sc=8) 00:31:55.589 starting I/O failed 00:31:55.589 Read completed with error (sct=0, sc=8) 00:31:55.589 starting I/O failed 00:31:55.589 Read completed with error (sct=0, sc=8) 00:31:55.589 starting I/O failed 00:31:55.589 Read completed with error (sct=0, sc=8) 00:31:55.589 starting I/O failed 00:31:55.589 Read completed with error (sct=0, sc=8) 00:31:55.589 starting I/O failed 00:31:55.589 Read completed with error (sct=0, sc=8) 00:31:55.589 starting I/O failed 00:31:55.589 Read completed with error (sct=0, sc=8) 00:31:55.589 starting I/O failed 00:31:55.590 Write completed with error (sct=0, sc=8) 00:31:55.590 starting I/O failed 00:31:55.590 Read completed with error (sct=0, sc=8) 00:31:55.590 starting I/O failed 00:31:55.590 Read completed with error (sct=0, sc=8) 00:31:55.590 starting I/O failed 00:31:55.590 Write completed with error (sct=0, sc=8) 00:31:55.590 starting I/O failed 00:31:55.590 Read completed with error (sct=0, sc=8) 00:31:55.590 starting I/O failed 00:31:55.590 Read completed with error (sct=0, sc=8) 00:31:55.590 starting I/O failed 00:31:55.590 Write completed with error (sct=0, sc=8) 00:31:55.590 starting I/O failed 00:31:55.590 Write completed with error (sct=0, sc=8) 00:31:55.590 starting I/O failed 00:31:55.590 Write completed with error (sct=0, sc=8) 00:31:55.590 starting I/O failed 00:31:55.590 Read completed with error (sct=0, sc=8) 00:31:55.590 starting I/O failed 00:31:55.590 Read completed with error (sct=0, sc=8) 00:31:55.590 starting I/O failed 00:31:55.590 Read completed with error (sct=0, sc=8) 00:31:55.590 starting I/O failed 00:31:55.590 Write completed with error (sct=0, sc=8) 00:31:55.590 starting I/O failed 00:31:55.590 Write completed with error (sct=0, sc=8) 00:31:55.590 starting I/O failed 00:31:55.590 Write completed with error (sct=0, sc=8) 00:31:55.590 starting I/O failed 00:31:55.590 Read completed with error (sct=0, sc=8) 00:31:55.590 starting I/O failed 00:31:55.590 [2024-04-15 18:18:44.519814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:55.590 [2024-04-15 18:18:44.520133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.590 [2024-04-15 18:18:44.520280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.590 [2024-04-15 18:18:44.520306] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.590 qpair failed and we were unable to recover it. 
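What happened between tc2's setup and this point: the harness slept two seconds, issued kill -9 against the target pid (3462365), and slept again while the reconnect example still had I/O in flight. The two bursts of 'Read completed with error (sct=0, sc=8)' above are those in-flight commands being failed back as qpairs 4 and 3 hit CQ transport error -6 (sct=0/sc=8 is the generic NVMe status 'command aborted due to SQ deletion', which fits a torn-down queue), and every subsequent reconnect attempt is refused because nothing listens on port 4420 any more. A quick way to decode the errno in these records (assumes python3 is on the box):

# decode errno 111 from the posix_sock_create records
python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
# prints: ECONNREFUSED - Connection refused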
00:31:55.590 [2024-04-15 18:18:44.520531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.590 [2024-04-15 18:18:44.520764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.590 [2024-04-15 18:18:44.520787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.590 qpair failed and we were unable to recover it. 00:31:55.590 [2024-04-15 18:18:44.521028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.590 [2024-04-15 18:18:44.521239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.590 [2024-04-15 18:18:44.521266] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.590 qpair failed and we were unable to recover it. 00:31:55.590 [2024-04-15 18:18:44.521478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.590 [2024-04-15 18:18:44.521780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.590 [2024-04-15 18:18:44.521828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.590 qpair failed and we were unable to recover it. 00:31:55.590 [2024-04-15 18:18:44.522089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.590 [2024-04-15 18:18:44.522264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.590 [2024-04-15 18:18:44.522290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.590 qpair failed and we were unable to recover it. 00:31:55.590 [2024-04-15 18:18:44.522509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.590 [2024-04-15 18:18:44.522698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.590 [2024-04-15 18:18:44.522740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.590 qpair failed and we were unable to recover it. 00:31:55.590 [2024-04-15 18:18:44.522963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.590 [2024-04-15 18:18:44.523205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.590 [2024-04-15 18:18:44.523232] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.590 qpair failed and we were unable to recover it. 00:31:55.590 [2024-04-15 18:18:44.523381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.590 [2024-04-15 18:18:44.523630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.590 [2024-04-15 18:18:44.523680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.590 qpair failed and we were unable to recover it. 
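The target that was just killed was assembled a couple of seconds earlier (the rpc_cmd calls at 18:18:42). In the SPDK test harness, rpc_cmd is effectively a wrapper around scripts/rpc.py, so the same bring-up can be sketched outside the harness as follows; the RPC names and arguments are copied from the log, while the rpc.py invocation style and the sleep stand-in for waitforlisten are assumptions here (paths assume an SPDK checkout, RPC socket at the default /var/tmp/spdk.sock):

# start the target inside the namespace, then configure it over RPC
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
sleep 2   # crude stand-in for the harness's waitforlisten on /var/tmp/spdk.sock
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_transport -t tcp -o
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420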
00:31:55.590 [2024-04-15 18:18:44.523897 through 18:18:44.570588] the same three-record retry pattern repeats roughly ninety more times with only the timestamps advancing (console clock 00:31:55.590 through 00:31:55.862): two posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 entries, one nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420, then qpair failed and we were unable to recover it.
00:31:55.862 [2024-04-15 18:18:44.570779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.862 [2024-04-15 18:18:44.570977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.862 [2024-04-15 18:18:44.571000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.862 qpair failed and we were unable to recover it. 00:31:55.862 [2024-04-15 18:18:44.571178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.862 [2024-04-15 18:18:44.571423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.862 [2024-04-15 18:18:44.571475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.862 qpair failed and we were unable to recover it. 00:31:55.862 [2024-04-15 18:18:44.571702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.862 [2024-04-15 18:18:44.571900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.862 [2024-04-15 18:18:44.571953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.862 qpair failed and we were unable to recover it. 00:31:55.862 [2024-04-15 18:18:44.572205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.862 [2024-04-15 18:18:44.572408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.862 [2024-04-15 18:18:44.572459] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.862 qpair failed and we were unable to recover it. 00:31:55.862 [2024-04-15 18:18:44.572709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.862 [2024-04-15 18:18:44.573024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.862 [2024-04-15 18:18:44.573049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.862 qpair failed and we were unable to recover it. 00:31:55.862 [2024-04-15 18:18:44.573317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.862 [2024-04-15 18:18:44.573556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.862 [2024-04-15 18:18:44.573606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.862 qpair failed and we were unable to recover it. 00:31:55.862 [2024-04-15 18:18:44.573881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.862 [2024-04-15 18:18:44.574181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.862 [2024-04-15 18:18:44.574207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.862 qpair failed and we were unable to recover it. 
00:31:55.862 [2024-04-15 18:18:44.574525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.862 [2024-04-15 18:18:44.574743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.862 [2024-04-15 18:18:44.574793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.862 qpair failed and we were unable to recover it. 00:31:55.862 [2024-04-15 18:18:44.575050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.862 [2024-04-15 18:18:44.575284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.862 [2024-04-15 18:18:44.575307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.862 qpair failed and we were unable to recover it. 00:31:55.862 [2024-04-15 18:18:44.575492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.862 [2024-04-15 18:18:44.575689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.862 [2024-04-15 18:18:44.575755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.862 qpair failed and we were unable to recover it. 00:31:55.862 [2024-04-15 18:18:44.576037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.862 [2024-04-15 18:18:44.576303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.862 [2024-04-15 18:18:44.576327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.862 qpair failed and we were unable to recover it. 00:31:55.862 [2024-04-15 18:18:44.576563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.862 [2024-04-15 18:18:44.576751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.862 [2024-04-15 18:18:44.576811] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.862 qpair failed and we were unable to recover it. 00:31:55.862 [2024-04-15 18:18:44.577043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.862 [2024-04-15 18:18:44.577220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.862 [2024-04-15 18:18:44.577244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.862 qpair failed and we were unable to recover it. 00:31:55.862 [2024-04-15 18:18:44.577573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.862 [2024-04-15 18:18:44.577875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.862 [2024-04-15 18:18:44.577927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.862 qpair failed and we were unable to recover it. 
00:31:55.862 [2024-04-15 18:18:44.578182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.862 [2024-04-15 18:18:44.578399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.862 [2024-04-15 18:18:44.578441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.862 qpair failed and we were unable to recover it. 00:31:55.862 [2024-04-15 18:18:44.578641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.862 [2024-04-15 18:18:44.578891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.862 [2024-04-15 18:18:44.578943] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.862 qpair failed and we were unable to recover it. 00:31:55.862 [2024-04-15 18:18:44.579198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.862 [2024-04-15 18:18:44.579380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.862 [2024-04-15 18:18:44.579445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.862 qpair failed and we were unable to recover it. 00:31:55.862 [2024-04-15 18:18:44.579704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.862 [2024-04-15 18:18:44.579871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.863 [2024-04-15 18:18:44.579893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.863 qpair failed and we were unable to recover it. 00:31:55.863 [2024-04-15 18:18:44.580080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.863 [2024-04-15 18:18:44.580227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.863 [2024-04-15 18:18:44.580251] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.863 qpair failed and we were unable to recover it. 00:31:55.863 [2024-04-15 18:18:44.580483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.863 [2024-04-15 18:18:44.580759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.863 [2024-04-15 18:18:44.580810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.863 qpair failed and we were unable to recover it. 00:31:55.863 [2024-04-15 18:18:44.581121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.863 [2024-04-15 18:18:44.581343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.863 [2024-04-15 18:18:44.581385] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.863 qpair failed and we were unable to recover it. 
00:31:55.863 [2024-04-15 18:18:44.581653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.863 [2024-04-15 18:18:44.581888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.863 [2024-04-15 18:18:44.581940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.863 qpair failed and we were unable to recover it. 00:31:55.863 [2024-04-15 18:18:44.582176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.863 [2024-04-15 18:18:44.582387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.863 [2024-04-15 18:18:44.582431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.863 qpair failed and we were unable to recover it. 00:31:55.863 [2024-04-15 18:18:44.582718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.863 [2024-04-15 18:18:44.583097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.863 [2024-04-15 18:18:44.583122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.863 qpair failed and we were unable to recover it. 00:31:55.863 [2024-04-15 18:18:44.583359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.863 [2024-04-15 18:18:44.583521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.863 [2024-04-15 18:18:44.583576] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.863 qpair failed and we were unable to recover it. 00:31:55.863 [2024-04-15 18:18:44.583848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.863 [2024-04-15 18:18:44.584101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.863 [2024-04-15 18:18:44.584125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.863 qpair failed and we were unable to recover it. 00:31:55.863 [2024-04-15 18:18:44.584318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.863 [2024-04-15 18:18:44.584524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.863 [2024-04-15 18:18:44.584582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.863 qpair failed and we were unable to recover it. 00:31:55.863 [2024-04-15 18:18:44.584782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.863 [2024-04-15 18:18:44.584973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.863 [2024-04-15 18:18:44.584996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.863 qpair failed and we were unable to recover it. 
00:31:55.863 [2024-04-15 18:18:44.585292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.863 [2024-04-15 18:18:44.585561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.863 [2024-04-15 18:18:44.585609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.863 qpair failed and we were unable to recover it. 00:31:55.863 [2024-04-15 18:18:44.585837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.863 [2024-04-15 18:18:44.586082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.863 [2024-04-15 18:18:44.586106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.863 qpair failed and we were unable to recover it. 00:31:55.863 [2024-04-15 18:18:44.586299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.863 [2024-04-15 18:18:44.586633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.863 [2024-04-15 18:18:44.586681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.863 qpair failed and we were unable to recover it. 00:31:55.863 [2024-04-15 18:18:44.587012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.863 [2024-04-15 18:18:44.587267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.863 [2024-04-15 18:18:44.587291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.863 qpair failed and we were unable to recover it. 00:31:55.863 [2024-04-15 18:18:44.587547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.863 [2024-04-15 18:18:44.587795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.863 [2024-04-15 18:18:44.587839] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.863 qpair failed and we were unable to recover it. 00:31:55.863 [2024-04-15 18:18:44.588114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.863 [2024-04-15 18:18:44.588247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.863 [2024-04-15 18:18:44.588270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.863 qpair failed and we were unable to recover it. 00:31:55.863 [2024-04-15 18:18:44.588470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.863 [2024-04-15 18:18:44.588658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.863 [2024-04-15 18:18:44.588716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.863 qpair failed and we were unable to recover it. 
00:31:55.863 [2024-04-15 18:18:44.588986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.863 [2024-04-15 18:18:44.589157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.863 [2024-04-15 18:18:44.589183] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.863 qpair failed and we were unable to recover it. 00:31:55.863 [2024-04-15 18:18:44.589382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.863 [2024-04-15 18:18:44.589540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.863 [2024-04-15 18:18:44.589591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.863 qpair failed and we were unable to recover it. 00:31:55.863 [2024-04-15 18:18:44.589763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.863 [2024-04-15 18:18:44.589946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.863 [2024-04-15 18:18:44.589968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.863 qpair failed and we were unable to recover it. 00:31:55.863 [2024-04-15 18:18:44.590220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.863 [2024-04-15 18:18:44.590414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.863 [2024-04-15 18:18:44.590467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.863 qpair failed and we were unable to recover it. 00:31:55.863 [2024-04-15 18:18:44.590695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.863 [2024-04-15 18:18:44.590881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.863 [2024-04-15 18:18:44.590903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.863 qpair failed and we were unable to recover it. 00:31:55.863 [2024-04-15 18:18:44.591227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.863 [2024-04-15 18:18:44.591530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.863 [2024-04-15 18:18:44.591582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.863 qpair failed and we were unable to recover it. 00:31:55.863 [2024-04-15 18:18:44.591905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.863 [2024-04-15 18:18:44.592162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.863 [2024-04-15 18:18:44.592192] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.863 qpair failed and we were unable to recover it. 
00:31:55.863 [2024-04-15 18:18:44.592449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.863 [2024-04-15 18:18:44.592700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.863 [2024-04-15 18:18:44.592751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.863 qpair failed and we were unable to recover it. 00:31:55.863 [2024-04-15 18:18:44.592915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.863 [2024-04-15 18:18:44.593191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.864 [2024-04-15 18:18:44.593236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.864 qpair failed and we were unable to recover it. 00:31:55.864 [2024-04-15 18:18:44.593496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.864 [2024-04-15 18:18:44.593740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.864 [2024-04-15 18:18:44.593791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.864 qpair failed and we were unable to recover it. 00:31:55.864 [2024-04-15 18:18:44.594008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.864 [2024-04-15 18:18:44.594247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.864 [2024-04-15 18:18:44.594273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.864 qpair failed and we were unable to recover it. 00:31:55.864 [2024-04-15 18:18:44.594543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.864 [2024-04-15 18:18:44.594758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.864 [2024-04-15 18:18:44.594805] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.864 qpair failed and we were unable to recover it. 00:31:55.864 [2024-04-15 18:18:44.594991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.864 [2024-04-15 18:18:44.595197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.864 [2024-04-15 18:18:44.595222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.864 qpair failed and we were unable to recover it. 00:31:55.864 [2024-04-15 18:18:44.595402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.864 [2024-04-15 18:18:44.595625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.864 [2024-04-15 18:18:44.595676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.864 qpair failed and we were unable to recover it. 
00:31:55.864 [2024-04-15 18:18:44.595899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.864 [2024-04-15 18:18:44.596144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.864 [2024-04-15 18:18:44.596168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.864 qpair failed and we were unable to recover it. 00:31:55.864 [2024-04-15 18:18:44.596343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.864 [2024-04-15 18:18:44.596552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.864 [2024-04-15 18:18:44.596602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.864 qpair failed and we were unable to recover it. 00:31:55.864 [2024-04-15 18:18:44.596811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.864 [2024-04-15 18:18:44.596965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.864 [2024-04-15 18:18:44.596996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.864 qpair failed and we were unable to recover it. 00:31:55.864 [2024-04-15 18:18:44.597303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.864 [2024-04-15 18:18:44.597624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.864 [2024-04-15 18:18:44.597675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.864 qpair failed and we were unable to recover it. 00:31:55.864 [2024-04-15 18:18:44.597928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.864 [2024-04-15 18:18:44.598120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.864 [2024-04-15 18:18:44.598145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.864 qpair failed and we were unable to recover it. 00:31:55.864 [2024-04-15 18:18:44.598381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.864 [2024-04-15 18:18:44.598652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.864 [2024-04-15 18:18:44.598704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.864 qpair failed and we were unable to recover it. 00:31:55.864 [2024-04-15 18:18:44.598902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.864 [2024-04-15 18:18:44.599092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.864 [2024-04-15 18:18:44.599131] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.864 qpair failed and we were unable to recover it. 
00:31:55.864 [2024-04-15 18:18:44.599321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.864 [2024-04-15 18:18:44.599525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.864 [2024-04-15 18:18:44.599587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.864 qpair failed and we were unable to recover it. 00:31:55.864 [2024-04-15 18:18:44.599815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.864 [2024-04-15 18:18:44.600021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.864 [2024-04-15 18:18:44.600065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.864 qpair failed and we were unable to recover it. 00:31:55.864 [2024-04-15 18:18:44.600264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.864 [2024-04-15 18:18:44.600493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.864 [2024-04-15 18:18:44.600537] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.864 qpair failed and we were unable to recover it. 00:31:55.864 [2024-04-15 18:18:44.600760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.864 [2024-04-15 18:18:44.600947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.864 [2024-04-15 18:18:44.600970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.864 qpair failed and we were unable to recover it. 00:31:55.864 [2024-04-15 18:18:44.601151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.864 [2024-04-15 18:18:44.601312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.864 [2024-04-15 18:18:44.601355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.864 qpair failed and we were unable to recover it. 00:31:55.864 [2024-04-15 18:18:44.601564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.864 [2024-04-15 18:18:44.601743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.864 [2024-04-15 18:18:44.601772] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.864 qpair failed and we were unable to recover it. 00:31:55.864 [2024-04-15 18:18:44.601959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.864 [2024-04-15 18:18:44.602198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.864 [2024-04-15 18:18:44.602242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.864 qpair failed and we were unable to recover it. 
00:31:55.864 [2024-04-15 18:18:44.602446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.864 [2024-04-15 18:18:44.602682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.864 [2024-04-15 18:18:44.602733] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.864 qpair failed and we were unable to recover it. 00:31:55.864 [2024-04-15 18:18:44.603001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.864 [2024-04-15 18:18:44.603192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.864 [2024-04-15 18:18:44.603218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.864 qpair failed and we were unable to recover it. 00:31:55.864 [2024-04-15 18:18:44.603392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.864 [2024-04-15 18:18:44.603634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.864 [2024-04-15 18:18:44.603684] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.864 qpair failed and we were unable to recover it. 00:31:55.864 [2024-04-15 18:18:44.603897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.864 [2024-04-15 18:18:44.604146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.864 [2024-04-15 18:18:44.604172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.864 qpair failed and we were unable to recover it. 00:31:55.864 [2024-04-15 18:18:44.604332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.864 [2024-04-15 18:18:44.604528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.865 [2024-04-15 18:18:44.604580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.865 qpair failed and we were unable to recover it. 00:31:55.865 [2024-04-15 18:18:44.604724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.865 [2024-04-15 18:18:44.604877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.865 [2024-04-15 18:18:44.604900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.865 qpair failed and we were unable to recover it. 00:31:55.865 [2024-04-15 18:18:44.605130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.865 [2024-04-15 18:18:44.605331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.865 [2024-04-15 18:18:44.605369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.865 qpair failed and we were unable to recover it. 
00:31:55.865 [2024-04-15 18:18:44.605551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.865 [2024-04-15 18:18:44.605726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.865 [2024-04-15 18:18:44.605749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.865 qpair failed and we were unable to recover it. 00:31:55.865 [2024-04-15 18:18:44.605960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.865 [2024-04-15 18:18:44.606120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.865 [2024-04-15 18:18:44.606146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.865 qpair failed and we were unable to recover it. 00:31:55.865 [2024-04-15 18:18:44.606317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.865 [2024-04-15 18:18:44.606465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.865 [2024-04-15 18:18:44.606508] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.865 qpair failed and we were unable to recover it. 00:31:55.865 [2024-04-15 18:18:44.606722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.865 [2024-04-15 18:18:44.606917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.865 [2024-04-15 18:18:44.606941] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.865 qpair failed and we were unable to recover it. 00:31:55.865 [2024-04-15 18:18:44.607148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.865 [2024-04-15 18:18:44.607378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.865 [2024-04-15 18:18:44.607420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.865 qpair failed and we were unable to recover it. 00:31:55.865 [2024-04-15 18:18:44.607636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.865 [2024-04-15 18:18:44.607861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.865 [2024-04-15 18:18:44.607884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.865 qpair failed and we were unable to recover it. 00:31:55.865 [2024-04-15 18:18:44.608070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.865 [2024-04-15 18:18:44.608285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.865 [2024-04-15 18:18:44.608311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.865 qpair failed and we were unable to recover it. 
00:31:55.865 [2024-04-15 18:18:44.608536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.865 [2024-04-15 18:18:44.608752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.865 [2024-04-15 18:18:44.608808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.865 qpair failed and we were unable to recover it. 00:31:55.865 [2024-04-15 18:18:44.609070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.865 [2024-04-15 18:18:44.609281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.865 [2024-04-15 18:18:44.609306] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.865 qpair failed and we were unable to recover it. 00:31:55.865 [2024-04-15 18:18:44.609481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.865 [2024-04-15 18:18:44.609747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.865 [2024-04-15 18:18:44.609797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.865 qpair failed and we were unable to recover it. 00:31:55.865 [2024-04-15 18:18:44.610031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.865 [2024-04-15 18:18:44.610230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.865 [2024-04-15 18:18:44.610256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.865 qpair failed and we were unable to recover it. 00:31:55.865 [2024-04-15 18:18:44.610499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.865 [2024-04-15 18:18:44.610754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.865 [2024-04-15 18:18:44.610805] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.865 qpair failed and we were unable to recover it. 00:31:55.865 [2024-04-15 18:18:44.611137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.865 [2024-04-15 18:18:44.611308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.865 [2024-04-15 18:18:44.611348] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.865 qpair failed and we were unable to recover it. 00:31:55.865 [2024-04-15 18:18:44.611553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.865 [2024-04-15 18:18:44.611763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.865 [2024-04-15 18:18:44.611814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.865 qpair failed and we were unable to recover it. 
00:31:55.865 [2024-04-15 18:18:44.611981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.865 [2024-04-15 18:18:44.612142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.865 [2024-04-15 18:18:44.612182] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.865 qpair failed and we were unable to recover it. 00:31:55.865 [2024-04-15 18:18:44.612386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.865 [2024-04-15 18:18:44.612602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.865 [2024-04-15 18:18:44.612652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.865 qpair failed and we were unable to recover it. 00:31:55.865 [2024-04-15 18:18:44.612886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.865 [2024-04-15 18:18:44.613152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.866 [2024-04-15 18:18:44.613177] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.866 qpair failed and we were unable to recover it. 00:31:55.866 [2024-04-15 18:18:44.613465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.866 [2024-04-15 18:18:44.613768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.866 [2024-04-15 18:18:44.613819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.866 qpair failed and we were unable to recover it. 00:31:55.866 [2024-04-15 18:18:44.614016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.866 [2024-04-15 18:18:44.614158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.866 [2024-04-15 18:18:44.614192] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.866 qpair failed and we were unable to recover it. 00:31:55.866 [2024-04-15 18:18:44.614468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.866 [2024-04-15 18:18:44.614772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.866 [2024-04-15 18:18:44.614823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.866 qpair failed and we were unable to recover it. 00:31:55.866 [2024-04-15 18:18:44.615066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.866 [2024-04-15 18:18:44.615277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.866 [2024-04-15 18:18:44.615308] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.866 qpair failed and we were unable to recover it. 
00:31:55.866 [2024-04-15 18:18:44.615543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.866 [2024-04-15 18:18:44.615789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.866 [2024-04-15 18:18:44.615830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.866 qpair failed and we were unable to recover it. 00:31:55.866 [2024-04-15 18:18:44.616117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.866 [2024-04-15 18:18:44.616320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.866 [2024-04-15 18:18:44.616358] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.866 qpair failed and we were unable to recover it. 00:31:55.866 [2024-04-15 18:18:44.616611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.866 [2024-04-15 18:18:44.616908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.866 [2024-04-15 18:18:44.616960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.866 qpair failed and we were unable to recover it. 00:31:55.866 [2024-04-15 18:18:44.617178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.866 [2024-04-15 18:18:44.617361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.866 [2024-04-15 18:18:44.617401] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.866 qpair failed and we were unable to recover it. 00:31:55.866 [2024-04-15 18:18:44.617609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.866 [2024-04-15 18:18:44.617804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.866 [2024-04-15 18:18:44.617857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.866 qpair failed and we were unable to recover it. 00:31:55.866 [2024-04-15 18:18:44.618081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.866 [2024-04-15 18:18:44.618294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.866 [2024-04-15 18:18:44.618318] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.866 qpair failed and we were unable to recover it. 00:31:55.866 [2024-04-15 18:18:44.618593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.866 [2024-04-15 18:18:44.618892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.866 [2024-04-15 18:18:44.618941] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.866 qpair failed and we were unable to recover it. 
00:31:55.866 [2024-04-15 18:18:44.619095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.866 [2024-04-15 18:18:44.619310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.866 [2024-04-15 18:18:44.619354] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.866 qpair failed and we were unable to recover it.
00:31:55.866 [2024-04-15 18:18:44.619510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.866 [2024-04-15 18:18:44.619811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.866 [2024-04-15 18:18:44.619859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.866 qpair failed and we were unable to recover it.
00:31:55.866 [2024-04-15 18:18:44.620088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.866 [2024-04-15 18:18:44.620245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.866 [2024-04-15 18:18:44.620286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.866 qpair failed and we were unable to recover it.
00:31:55.866 [2024-04-15 18:18:44.620470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.866 [2024-04-15 18:18:44.620700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.866 [2024-04-15 18:18:44.620752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.866 qpair failed and we were unable to recover it.
00:31:55.866 [2024-04-15 18:18:44.621004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.866 [2024-04-15 18:18:44.621210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.866 [2024-04-15 18:18:44.621234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.866 qpair failed and we were unable to recover it.
00:31:55.866 [2024-04-15 18:18:44.621455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.866 [2024-04-15 18:18:44.621679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.866 [2024-04-15 18:18:44.621730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.866 qpair failed and we were unable to recover it.
00:31:55.866 [2024-04-15 18:18:44.621951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.866 [2024-04-15 18:18:44.622123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.866 [2024-04-15 18:18:44.622148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.866 qpair failed and we were unable to recover it.
00:31:55.866 [2024-04-15 18:18:44.622386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.866 [2024-04-15 18:18:44.622617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.866 [2024-04-15 18:18:44.622668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.866 qpair failed and we were unable to recover it.
00:31:55.866 [2024-04-15 18:18:44.622866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.866 [2024-04-15 18:18:44.623033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.866 [2024-04-15 18:18:44.623089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.866 qpair failed and we were unable to recover it.
00:31:55.866 [2024-04-15 18:18:44.623273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.866 [2024-04-15 18:18:44.623494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.866 [2024-04-15 18:18:44.623549] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.866 qpair failed and we were unable to recover it.
00:31:55.866 [2024-04-15 18:18:44.623709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.866 [2024-04-15 18:18:44.623995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.866 [2024-04-15 18:18:44.624019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.866 qpair failed and we were unable to recover it.
00:31:55.866 [2024-04-15 18:18:44.624211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.866 [2024-04-15 18:18:44.624436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.866 [2024-04-15 18:18:44.624489] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.866 qpair failed and we were unable to recover it.
00:31:55.866 [2024-04-15 18:18:44.624723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.866 [2024-04-15 18:18:44.624882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.866 [2024-04-15 18:18:44.624904] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.866 qpair failed and we were unable to recover it.
00:31:55.866 [2024-04-15 18:18:44.625039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.866 [2024-04-15 18:18:44.625253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.866 [2024-04-15 18:18:44.625300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.866 qpair failed and we were unable to recover it.
00:31:55.866 [2024-04-15 18:18:44.625599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.867 [2024-04-15 18:18:44.625907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.867 [2024-04-15 18:18:44.625958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.867 qpair failed and we were unable to recover it.
00:31:55.867 [2024-04-15 18:18:44.626211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.867 [2024-04-15 18:18:44.626500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.867 [2024-04-15 18:18:44.626553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.867 qpair failed and we were unable to recover it.
00:31:55.867 [2024-04-15 18:18:44.626803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.867 [2024-04-15 18:18:44.627022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.867 [2024-04-15 18:18:44.627076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.867 qpair failed and we were unable to recover it.
00:31:55.867 [2024-04-15 18:18:44.627304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.867 [2024-04-15 18:18:44.627473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.867 [2024-04-15 18:18:44.627527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.867 qpair failed and we were unable to recover it.
00:31:55.867 [2024-04-15 18:18:44.627769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.867 [2024-04-15 18:18:44.627982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.867 [2024-04-15 18:18:44.628004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.867 qpair failed and we were unable to recover it.
00:31:55.867 [2024-04-15 18:18:44.628195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.867 [2024-04-15 18:18:44.628350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.867 [2024-04-15 18:18:44.628400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.867 qpair failed and we were unable to recover it.
00:31:55.867 [2024-04-15 18:18:44.628551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.867 [2024-04-15 18:18:44.628756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.867 [2024-04-15 18:18:44.628809] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.867 qpair failed and we were unable to recover it.
00:31:55.867 [2024-04-15 18:18:44.629062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.867 [2024-04-15 18:18:44.629240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.867 [2024-04-15 18:18:44.629263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.867 qpair failed and we were unable to recover it.
00:31:55.867 [2024-04-15 18:18:44.629503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.867 [2024-04-15 18:18:44.629688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.867 [2024-04-15 18:18:44.629737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.867 qpair failed and we were unable to recover it.
00:31:55.867 [2024-04-15 18:18:44.629991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.867 [2024-04-15 18:18:44.630203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.867 [2024-04-15 18:18:44.630228] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.867 qpair failed and we were unable to recover it.
00:31:55.867 [2024-04-15 18:18:44.630522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.867 [2024-04-15 18:18:44.630794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.867 [2024-04-15 18:18:44.630845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.867 qpair failed and we were unable to recover it.
00:31:55.867 [2024-04-15 18:18:44.631047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.867 [2024-04-15 18:18:44.631247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.867 [2024-04-15 18:18:44.631271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.867 qpair failed and we were unable to recover it.
00:31:55.867 [2024-04-15 18:18:44.631524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.867 [2024-04-15 18:18:44.631739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.867 [2024-04-15 18:18:44.631794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.867 qpair failed and we were unable to recover it.
00:31:55.867 [2024-04-15 18:18:44.632071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.867 [2024-04-15 18:18:44.632253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.867 [2024-04-15 18:18:44.632277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.867 qpair failed and we were unable to recover it.
00:31:55.867 [2024-04-15 18:18:44.632443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.867 [2024-04-15 18:18:44.632780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.867 [2024-04-15 18:18:44.632832] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.867 qpair failed and we were unable to recover it.
00:31:55.867 [2024-04-15 18:18:44.633097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.867 [2024-04-15 18:18:44.633269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.867 [2024-04-15 18:18:44.633296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.867 qpair failed and we were unable to recover it.
00:31:55.867 [2024-04-15 18:18:44.633548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.867 [2024-04-15 18:18:44.633904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.867 [2024-04-15 18:18:44.633956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.867 qpair failed and we were unable to recover it.
00:31:55.867 [2024-04-15 18:18:44.634170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.867 [2024-04-15 18:18:44.634395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.867 [2024-04-15 18:18:44.634438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.867 qpair failed and we were unable to recover it.
00:31:55.867 [2024-04-15 18:18:44.634642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.867 [2024-04-15 18:18:44.634908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.867 [2024-04-15 18:18:44.634959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.867 qpair failed and we were unable to recover it.
00:31:55.867 [2024-04-15 18:18:44.635148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.867 [2024-04-15 18:18:44.635379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.867 [2024-04-15 18:18:44.635430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.867 qpair failed and we were unable to recover it.
00:31:55.867 [2024-04-15 18:18:44.635750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.867 [2024-04-15 18:18:44.636034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.867 [2024-04-15 18:18:44.636063] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.867 qpair failed and we were unable to recover it.
00:31:55.867 [2024-04-15 18:18:44.636277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.867 [2024-04-15 18:18:44.636466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.867 [2024-04-15 18:18:44.636524] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.867 qpair failed and we were unable to recover it.
00:31:55.867 [2024-04-15 18:18:44.636712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.867 [2024-04-15 18:18:44.636923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.867 [2024-04-15 18:18:44.636982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.867 qpair failed and we were unable to recover it.
00:31:55.867 [2024-04-15 18:18:44.637307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.867 [2024-04-15 18:18:44.637567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.867 [2024-04-15 18:18:44.637617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.867 qpair failed and we were unable to recover it.
00:31:55.867 [2024-04-15 18:18:44.637876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.867 [2024-04-15 18:18:44.638192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.867 [2024-04-15 18:18:44.638218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.867 qpair failed and we were unable to recover it.
00:31:55.867 [2024-04-15 18:18:44.638453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.867 [2024-04-15 18:18:44.638629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.867 [2024-04-15 18:18:44.638694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.867 qpair failed and we were unable to recover it.
00:31:55.867 [2024-04-15 18:18:44.638953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.867 [2024-04-15 18:18:44.639134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.867 [2024-04-15 18:18:44.639158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.867 qpair failed and we were unable to recover it.
00:31:55.867 [2024-04-15 18:18:44.639328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.867 [2024-04-15 18:18:44.639576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.867 [2024-04-15 18:18:44.639620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.867 qpair failed and we were unable to recover it.
00:31:55.867 [2024-04-15 18:18:44.639905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.867 [2024-04-15 18:18:44.640121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.868 [2024-04-15 18:18:44.640154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.868 qpair failed and we were unable to recover it.
00:31:55.868 [2024-04-15 18:18:44.640429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.868 [2024-04-15 18:18:44.640639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.868 [2024-04-15 18:18:44.640689] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.868 qpair failed and we were unable to recover it.
00:31:55.868 [2024-04-15 18:18:44.640873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.868 [2024-04-15 18:18:44.641054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.868 [2024-04-15 18:18:44.641082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.868 qpair failed and we were unable to recover it.
00:31:55.868 [2024-04-15 18:18:44.641283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.868 [2024-04-15 18:18:44.641487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.868 [2024-04-15 18:18:44.641538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.868 qpair failed and we were unable to recover it.
00:31:55.868 [2024-04-15 18:18:44.641780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.868 [2024-04-15 18:18:44.641983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.868 [2024-04-15 18:18:44.642005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.868 qpair failed and we were unable to recover it.
00:31:55.868 [2024-04-15 18:18:44.642197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.868 [2024-04-15 18:18:44.642358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.868 [2024-04-15 18:18:44.642387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.868 qpair failed and we were unable to recover it.
00:31:55.868 [2024-04-15 18:18:44.642614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.868 [2024-04-15 18:18:44.642892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.868 [2024-04-15 18:18:44.642943] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.868 qpair failed and we were unable to recover it.
00:31:55.868 [2024-04-15 18:18:44.643266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.868 [2024-04-15 18:18:44.643570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.868 [2024-04-15 18:18:44.643624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.868 qpair failed and we were unable to recover it.
00:31:55.868 [2024-04-15 18:18:44.643881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.868 [2024-04-15 18:18:44.644115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.868 [2024-04-15 18:18:44.644139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.868 qpair failed and we were unable to recover it.
00:31:55.868 [2024-04-15 18:18:44.644301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.868 [2024-04-15 18:18:44.644530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.868 [2024-04-15 18:18:44.644579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.868 qpair failed and we were unable to recover it.
00:31:55.868 [2024-04-15 18:18:44.644788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.868 [2024-04-15 18:18:44.645096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.868 [2024-04-15 18:18:44.645121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.868 qpair failed and we were unable to recover it.
00:31:55.868 [2024-04-15 18:18:44.645300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.868 [2024-04-15 18:18:44.645584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.868 [2024-04-15 18:18:44.645635] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.868 qpair failed and we were unable to recover it.
00:31:55.868 [2024-04-15 18:18:44.645959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.868 [2024-04-15 18:18:44.646245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.868 [2024-04-15 18:18:44.646270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.868 qpair failed and we were unable to recover it.
00:31:55.868 [2024-04-15 18:18:44.646485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.868 [2024-04-15 18:18:44.646679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.868 [2024-04-15 18:18:44.646740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.868 qpair failed and we were unable to recover it.
00:31:55.868 [2024-04-15 18:18:44.646974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.868 [2024-04-15 18:18:44.647211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.868 [2024-04-15 18:18:44.647235] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.868 qpair failed and we were unable to recover it.
00:31:55.868 [2024-04-15 18:18:44.647490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.868 [2024-04-15 18:18:44.647742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.868 [2024-04-15 18:18:44.647795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.868 qpair failed and we were unable to recover it.
00:31:55.868 [2024-04-15 18:18:44.648115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.868 [2024-04-15 18:18:44.648414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.868 [2024-04-15 18:18:44.648466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.868 qpair failed and we were unable to recover it.
00:31:55.868 [2024-04-15 18:18:44.648781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.868 [2024-04-15 18:18:44.649090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.868 [2024-04-15 18:18:44.649114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.868 qpair failed and we were unable to recover it.
00:31:55.868 [2024-04-15 18:18:44.649319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.868 [2024-04-15 18:18:44.649485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.868 [2024-04-15 18:18:44.649541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.868 qpair failed and we were unable to recover it.
00:31:55.868 [2024-04-15 18:18:44.649698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.868 [2024-04-15 18:18:44.649963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.868 [2024-04-15 18:18:44.650014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.868 qpair failed and we were unable to recover it.
00:31:55.868 [2024-04-15 18:18:44.650229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.868 [2024-04-15 18:18:44.650507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.868 [2024-04-15 18:18:44.650561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.868 qpair failed and we were unable to recover it.
00:31:55.868 [2024-04-15 18:18:44.650817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.868 [2024-04-15 18:18:44.651011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.868 [2024-04-15 18:18:44.651034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.868 qpair failed and we were unable to recover it.
00:31:55.868 [2024-04-15 18:18:44.651367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.868 [2024-04-15 18:18:44.651594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.868 [2024-04-15 18:18:44.651645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.868 qpair failed and we were unable to recover it.
00:31:55.868 [2024-04-15 18:18:44.651843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.868 [2024-04-15 18:18:44.652054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.868 [2024-04-15 18:18:44.652109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.868 qpair failed and we were unable to recover it.
00:31:55.868 [2024-04-15 18:18:44.652383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.868 [2024-04-15 18:18:44.652680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.868 [2024-04-15 18:18:44.652733] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.868 qpair failed and we were unable to recover it.
00:31:55.868 [2024-04-15 18:18:44.653019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.868 [2024-04-15 18:18:44.653343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.868 [2024-04-15 18:18:44.653386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.868 qpair failed and we were unable to recover it.
00:31:55.868 [2024-04-15 18:18:44.653668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.869 [2024-04-15 18:18:44.653883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.869 [2024-04-15 18:18:44.653935] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.869 qpair failed and we were unable to recover it.
00:31:55.869 [2024-04-15 18:18:44.654096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.869 [2024-04-15 18:18:44.654274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.869 [2024-04-15 18:18:44.654298] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.869 qpair failed and we were unable to recover it.
00:31:55.869 [2024-04-15 18:18:44.654571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.869 [2024-04-15 18:18:44.654719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.869 [2024-04-15 18:18:44.654773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.869 qpair failed and we were unable to recover it.
00:31:55.869 [2024-04-15 18:18:44.655009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.869 [2024-04-15 18:18:44.655205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.869 [2024-04-15 18:18:44.655230] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.869 qpair failed and we were unable to recover it.
00:31:55.869 [2024-04-15 18:18:44.655370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.869 [2024-04-15 18:18:44.655575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.869 [2024-04-15 18:18:44.655624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.869 qpair failed and we were unable to recover it.
00:31:55.869 [2024-04-15 18:18:44.655815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.869 [2024-04-15 18:18:44.656017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.869 [2024-04-15 18:18:44.656040] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.869 qpair failed and we were unable to recover it.
00:31:55.869 [2024-04-15 18:18:44.656198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.869 [2024-04-15 18:18:44.656370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.869 [2024-04-15 18:18:44.656411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.869 qpair failed and we were unable to recover it.
00:31:55.869 [2024-04-15 18:18:44.656715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.869 [2024-04-15 18:18:44.656989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.869 [2024-04-15 18:18:44.657011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.869 qpair failed and we were unable to recover it.
00:31:55.869 [2024-04-15 18:18:44.657186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.869 [2024-04-15 18:18:44.657370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.869 [2024-04-15 18:18:44.657398] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.869 qpair failed and we were unable to recover it.
00:31:55.869 [2024-04-15 18:18:44.657576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.869 [2024-04-15 18:18:44.657819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.869 [2024-04-15 18:18:44.657871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.869 qpair failed and we were unable to recover it.
00:31:55.869 [2024-04-15 18:18:44.658151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.869 [2024-04-15 18:18:44.658406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.869 [2024-04-15 18:18:44.658458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.869 qpair failed and we were unable to recover it.
00:31:55.869 [2024-04-15 18:18:44.658721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.869 [2024-04-15 18:18:44.658991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.869 [2024-04-15 18:18:44.659040] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.869 qpair failed and we were unable to recover it.
00:31:55.869 [2024-04-15 18:18:44.659362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.869 [2024-04-15 18:18:44.659577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.869 [2024-04-15 18:18:44.659625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.869 qpair failed and we were unable to recover it.
00:31:55.869 [2024-04-15 18:18:44.659813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.869 [2024-04-15 18:18:44.659987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.869 [2024-04-15 18:18:44.660010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.869 qpair failed and we were unable to recover it.
00:31:55.869 [2024-04-15 18:18:44.660362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.869 [2024-04-15 18:18:44.660568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.869 [2024-04-15 18:18:44.660617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.869 qpair failed and we were unable to recover it.
00:31:55.869 [2024-04-15 18:18:44.660913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.869 [2024-04-15 18:18:44.661196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.869 [2024-04-15 18:18:44.661221] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.869 qpair failed and we were unable to recover it.
00:31:55.869 [2024-04-15 18:18:44.661405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.869 [2024-04-15 18:18:44.661680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.869 [2024-04-15 18:18:44.661728] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.869 qpair failed and we were unable to recover it.
00:31:55.869 [2024-04-15 18:18:44.661980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.869 [2024-04-15 18:18:44.662145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.869 [2024-04-15 18:18:44.662171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.869 qpair failed and we were unable to recover it.
00:31:55.869 [2024-04-15 18:18:44.662398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.869 [2024-04-15 18:18:44.662733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.869 [2024-04-15 18:18:44.662782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.869 qpair failed and we were unable to recover it.
00:31:55.869 [2024-04-15 18:18:44.663013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.869 [2024-04-15 18:18:44.663168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.869 [2024-04-15 18:18:44.663192] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.869 qpair failed and we were unable to recover it.
00:31:55.869 [2024-04-15 18:18:44.663363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.869 [2024-04-15 18:18:44.663567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.869 [2024-04-15 18:18:44.663620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.869 qpair failed and we were unable to recover it.
00:31:55.869 [2024-04-15 18:18:44.663833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.869 [2024-04-15 18:18:44.664121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.869 [2024-04-15 18:18:44.664146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.869 qpair failed and we were unable to recover it.
00:31:55.869 [2024-04-15 18:18:44.664364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.869 [2024-04-15 18:18:44.664629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.869 [2024-04-15 18:18:44.664679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.869 qpair failed and we were unable to recover it.
00:31:55.869 [2024-04-15 18:18:44.664954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.869 [2024-04-15 18:18:44.665134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.869 [2024-04-15 18:18:44.665158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.869 qpair failed and we were unable to recover it.
00:31:55.869 [2024-04-15 18:18:44.665346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.869 [2024-04-15 18:18:44.665591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.869 [2024-04-15 18:18:44.665640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.869 qpair failed and we were unable to recover it.
00:31:55.869 [2024-04-15 18:18:44.665845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.869 [2024-04-15 18:18:44.666036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.869 [2024-04-15 18:18:44.666080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.869 qpair failed and we were unable to recover it.
00:31:55.869 [2024-04-15 18:18:44.666305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.869 [2024-04-15 18:18:44.666463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.869 [2024-04-15 18:18:44.666515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.869 qpair failed and we were unable to recover it.
00:31:55.869 [2024-04-15 18:18:44.666808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.869 [2024-04-15 18:18:44.666981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.869 [2024-04-15 18:18:44.667004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.869 qpair failed and we were unable to recover it.
00:31:55.869 [2024-04-15 18:18:44.667309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.869 [2024-04-15 18:18:44.667512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.869 [2024-04-15 18:18:44.667572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.870 qpair failed and we were unable to recover it.
00:31:55.870 [2024-04-15 18:18:44.667816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.870 [2024-04-15 18:18:44.667965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.870 [2024-04-15 18:18:44.667988] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.870 qpair failed and we were unable to recover it.
00:31:55.870 [2024-04-15 18:18:44.668273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.870 [2024-04-15 18:18:44.668537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.870 [2024-04-15 18:18:44.668589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.870 qpair failed and we were unable to recover it.
00:31:55.870 [2024-04-15 18:18:44.668852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.870 [2024-04-15 18:18:44.669099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.870 [2024-04-15 18:18:44.669123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.870 qpair failed and we were unable to recover it.
00:31:55.870 [2024-04-15 18:18:44.669332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.870 [2024-04-15 18:18:44.669614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.870 [2024-04-15 18:18:44.669666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.870 qpair failed and we were unable to recover it.
00:31:55.870 [2024-04-15 18:18:44.669951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.870 [2024-04-15 18:18:44.670153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.870 [2024-04-15 18:18:44.670178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.870 qpair failed and we were unable to recover it.
00:31:55.870 [2024-04-15 18:18:44.670351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.870 [2024-04-15 18:18:44.670664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.870 [2024-04-15 18:18:44.670712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.870 qpair failed and we were unable to recover it.
00:31:55.870 [2024-04-15 18:18:44.670931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.870 [2024-04-15 18:18:44.671154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.870 [2024-04-15 18:18:44.671179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.870 qpair failed and we were unable to recover it.
00:31:55.870 [2024-04-15 18:18:44.671378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.870 [2024-04-15 18:18:44.671566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.870 [2024-04-15 18:18:44.671619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.870 qpair failed and we were unable to recover it.
00:31:55.870 [2024-04-15 18:18:44.671852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.870 [2024-04-15 18:18:44.672114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.870 [2024-04-15 18:18:44.672140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.870 qpair failed and we were unable to recover it.
00:31:55.870 [2024-04-15 18:18:44.672426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.870 [2024-04-15 18:18:44.672642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.870 [2024-04-15 18:18:44.672699] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.870 qpair failed and we were unable to recover it.
00:31:55.870 [2024-04-15 18:18:44.672983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.870 [2024-04-15 18:18:44.673272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.870 [2024-04-15 18:18:44.673297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.870 qpair failed and we were unable to recover it.
00:31:55.870 [2024-04-15 18:18:44.673538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.870 [2024-04-15 18:18:44.673756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.870 [2024-04-15 18:18:44.673810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.870 qpair failed and we were unable to recover it.
00:31:55.870 [2024-04-15 18:18:44.674017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.870 [2024-04-15 18:18:44.674239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.870 [2024-04-15 18:18:44.674275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.870 qpair failed and we were unable to recover it.
00:31:55.870 [2024-04-15 18:18:44.674558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.870 [2024-04-15 18:18:44.674822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.870 [2024-04-15 18:18:44.674871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.870 qpair failed and we were unable to recover it.
00:31:55.870 [2024-04-15 18:18:44.675140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.870 [2024-04-15 18:18:44.675312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.870 [2024-04-15 18:18:44.675335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.870 qpair failed and we were unable to recover it.
00:31:55.870 [2024-04-15 18:18:44.675543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.870 [2024-04-15 18:18:44.675805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.870 [2024-04-15 18:18:44.675855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.870 qpair failed and we were unable to recover it.
00:31:55.870 [2024-04-15 18:18:44.676098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.870 [2024-04-15 18:18:44.676266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.870 [2024-04-15 18:18:44.676289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.870 qpair failed and we were unable to recover it.
00:31:55.870 [2024-04-15 18:18:44.676441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.870 [2024-04-15 18:18:44.676740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.870 [2024-04-15 18:18:44.676789] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.870 qpair failed and we were unable to recover it.
00:31:55.870 [2024-04-15 18:18:44.676976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.870 [2024-04-15 18:18:44.677129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.870 [2024-04-15 18:18:44.677154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.870 qpair failed and we were unable to recover it.
00:31:55.870 [2024-04-15 18:18:44.677448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.870 [2024-04-15 18:18:44.677631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.870 [2024-04-15 18:18:44.677686] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.870 qpair failed and we were unable to recover it.
00:31:55.870 [2024-04-15 18:18:44.677919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.870 [2024-04-15 18:18:44.678179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.870 [2024-04-15 18:18:44.678210] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.870 qpair failed and we were unable to recover it.
00:31:55.870 [2024-04-15 18:18:44.678479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.870 [2024-04-15 18:18:44.678727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.870 [2024-04-15 18:18:44.678771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.870 qpair failed and we were unable to recover it.
00:31:55.870 [2024-04-15 18:18:44.679090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.870 [2024-04-15 18:18:44.679369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.870 [2024-04-15 18:18:44.679431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.870 qpair failed and we were unable to recover it.
00:31:55.870 [2024-04-15 18:18:44.679740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.870 [2024-04-15 18:18:44.679911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.871 [2024-04-15 18:18:44.679934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.871 qpair failed and we were unable to recover it.
00:31:55.871 [2024-04-15 18:18:44.680131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.871 [2024-04-15 18:18:44.680376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.871 [2024-04-15 18:18:44.680416] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.871 qpair failed and we were unable to recover it.
00:31:55.871 [2024-04-15 18:18:44.680669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.871 [2024-04-15 18:18:44.680930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.871 [2024-04-15 18:18:44.680980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.871 qpair failed and we were unable to recover it.
00:31:55.871 [2024-04-15 18:18:44.681259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.871 [2024-04-15 18:18:44.681509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.871 [2024-04-15 18:18:44.681560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.871 qpair failed and we were unable to recover it.
00:31:55.871 [2024-04-15 18:18:44.681807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.871 [2024-04-15 18:18:44.682065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.871 [2024-04-15 18:18:44.682089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.871 qpair failed and we were unable to recover it.
00:31:55.871 [2024-04-15 18:18:44.682271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.871 [2024-04-15 18:18:44.682464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.871 [2024-04-15 18:18:44.682506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.871 qpair failed and we were unable to recover it.
00:31:55.871 [2024-04-15 18:18:44.682820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.871 [2024-04-15 18:18:44.683094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.871 [2024-04-15 18:18:44.683118] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.871 qpair failed and we were unable to recover it.
00:31:55.871 [2024-04-15 18:18:44.683318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.871 [2024-04-15 18:18:44.683511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.871 [2024-04-15 18:18:44.683560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.871 qpair failed and we were unable to recover it.
00:31:55.871 [2024-04-15 18:18:44.683761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.871 [2024-04-15 18:18:44.683988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.871 [2024-04-15 18:18:44.684010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.871 qpair failed and we were unable to recover it.
00:31:55.871 [2024-04-15 18:18:44.684277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.871 [2024-04-15 18:18:44.684554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.871 [2024-04-15 18:18:44.684602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.871 qpair failed and we were unable to recover it.
00:31:55.871 [2024-04-15 18:18:44.684801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.871 [2024-04-15 18:18:44.684958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.871 [2024-04-15 18:18:44.684982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.871 qpair failed and we were unable to recover it.
00:31:55.871 [2024-04-15 18:18:44.685186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.871 [2024-04-15 18:18:44.685455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.871 [2024-04-15 18:18:44.685506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.871 qpair failed and we were unable to recover it.
00:31:55.871 [2024-04-15 18:18:44.685724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.871 [2024-04-15 18:18:44.685890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.871 [2024-04-15 18:18:44.685913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.871 qpair failed and we were unable to recover it.
00:31:55.871 [2024-04-15 18:18:44.686150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.871 [2024-04-15 18:18:44.686403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.871 [2024-04-15 18:18:44.686456] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.871 qpair failed and we were unable to recover it.
00:31:55.871 [2024-04-15 18:18:44.686717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.871 [2024-04-15 18:18:44.686902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.871 [2024-04-15 18:18:44.686924] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.871 qpair failed and we were unable to recover it.
00:31:55.871 [2024-04-15 18:18:44.687078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.871 [2024-04-15 18:18:44.687298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.871 [2024-04-15 18:18:44.687344] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.871 qpair failed and we were unable to recover it.
00:31:55.871 [2024-04-15 18:18:44.687553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.871 [2024-04-15 18:18:44.687841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.871 [2024-04-15 18:18:44.687892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.871 qpair failed and we were unable to recover it.
00:31:55.871 [2024-04-15 18:18:44.688091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.871 [2024-04-15 18:18:44.688260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.871 [2024-04-15 18:18:44.688300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.871 qpair failed and we were unable to recover it.
00:31:55.871 [2024-04-15 18:18:44.688483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.871 [2024-04-15 18:18:44.688680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.871 [2024-04-15 18:18:44.688735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.871 qpair failed and we were unable to recover it.
00:31:55.871 [2024-04-15 18:18:44.688952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.871 [2024-04-15 18:18:44.689192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.871 [2024-04-15 18:18:44.689237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.871 qpair failed and we were unable to recover it.
00:31:55.871 [2024-04-15 18:18:44.689552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.871 [2024-04-15 18:18:44.689830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.871 [2024-04-15 18:18:44.689878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.871 qpair failed and we were unable to recover it.
00:31:55.871 [2024-04-15 18:18:44.690144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.871 [2024-04-15 18:18:44.690313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.871 [2024-04-15 18:18:44.690355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.871 qpair failed and we were unable to recover it.
00:31:55.871 [2024-04-15 18:18:44.690610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.871 [2024-04-15 18:18:44.690905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.871 [2024-04-15 18:18:44.690952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.871 qpair failed and we were unable to recover it.
00:31:55.871 [2024-04-15 18:18:44.691229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.871 [2024-04-15 18:18:44.691489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.871 [2024-04-15 18:18:44.691536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.871 qpair failed and we were unable to recover it.
00:31:55.871 [2024-04-15 18:18:44.691750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.871 [2024-04-15 18:18:44.691915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.871 [2024-04-15 18:18:44.691937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.871 qpair failed and we were unable to recover it.
00:31:55.871 [2024-04-15 18:18:44.692184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.871 [2024-04-15 18:18:44.692384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.871 [2024-04-15 18:18:44.692427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.871 qpair failed and we were unable to recover it.
00:31:55.871 [2024-04-15 18:18:44.692657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.871 [2024-04-15 18:18:44.692877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.871 [2024-04-15 18:18:44.692928] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.871 qpair failed and we were unable to recover it.
00:31:55.871 [2024-04-15 18:18:44.693191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.871 [2024-04-15 18:18:44.693470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.871 [2024-04-15 18:18:44.693522] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.871 qpair failed and we were unable to recover it.
00:31:55.871 [2024-04-15 18:18:44.693725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.871 [2024-04-15 18:18:44.693963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.871 [2024-04-15 18:18:44.693987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.871 qpair failed and we were unable to recover it.
00:31:55.871 [2024-04-15 18:18:44.694283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.871 [2024-04-15 18:18:44.694593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.872 [2024-04-15 18:18:44.694644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.872 qpair failed and we were unable to recover it.
00:31:55.872 [2024-04-15 18:18:44.694967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.872 [2024-04-15 18:18:44.695259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.872 [2024-04-15 18:18:44.695284] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.872 qpair failed and we were unable to recover it.
00:31:55.872 [2024-04-15 18:18:44.695492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.872 [2024-04-15 18:18:44.695693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.872 [2024-04-15 18:18:44.695740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.872 qpair failed and we were unable to recover it.
00:31:55.872 [2024-04-15 18:18:44.695989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.872 [2024-04-15 18:18:44.696225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.872 [2024-04-15 18:18:44.696250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.872 qpair failed and we were unable to recover it.
00:31:55.872 [2024-04-15 18:18:44.696483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.872 [2024-04-15 18:18:44.696703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.872 [2024-04-15 18:18:44.696761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.872 qpair failed and we were unable to recover it.
00:31:55.872 [2024-04-15 18:18:44.696939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.872 [2024-04-15 18:18:44.697097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.872 [2024-04-15 18:18:44.697122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.872 qpair failed and we were unable to recover it. 00:31:55.872 [2024-04-15 18:18:44.697347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.872 [2024-04-15 18:18:44.697597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.872 [2024-04-15 18:18:44.697641] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.872 qpair failed and we were unable to recover it. 00:31:55.872 [2024-04-15 18:18:44.697871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.872 [2024-04-15 18:18:44.698082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.872 [2024-04-15 18:18:44.698133] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.872 qpair failed and we were unable to recover it. 00:31:55.872 [2024-04-15 18:18:44.698378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.872 [2024-04-15 18:18:44.698654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.872 [2024-04-15 18:18:44.698703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.872 qpair failed and we were unable to recover it. 00:31:55.872 [2024-04-15 18:18:44.698977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.872 [2024-04-15 18:18:44.699250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.872 [2024-04-15 18:18:44.699275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.872 qpair failed and we were unable to recover it. 00:31:55.872 [2024-04-15 18:18:44.699528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.872 [2024-04-15 18:18:44.699746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.872 [2024-04-15 18:18:44.699795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.872 qpair failed and we were unable to recover it. 00:31:55.872 [2024-04-15 18:18:44.700056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.872 [2024-04-15 18:18:44.700317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.872 [2024-04-15 18:18:44.700340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.872 qpair failed and we were unable to recover it. 
00:31:55.872 [2024-04-15 18:18:44.700539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.872 [2024-04-15 18:18:44.700714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.872 [2024-04-15 18:18:44.700743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.872 qpair failed and we were unable to recover it. 00:31:55.872 [2024-04-15 18:18:44.700916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.872 [2024-04-15 18:18:44.701100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.872 [2024-04-15 18:18:44.701143] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.872 qpair failed and we were unable to recover it. 00:31:55.872 [2024-04-15 18:18:44.701308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.872 [2024-04-15 18:18:44.701490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.872 [2024-04-15 18:18:44.701549] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.872 qpair failed and we were unable to recover it. 00:31:55.872 [2024-04-15 18:18:44.701773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.872 [2024-04-15 18:18:44.701946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.872 [2024-04-15 18:18:44.701969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.872 qpair failed and we were unable to recover it. 00:31:55.872 [2024-04-15 18:18:44.702179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.872 [2024-04-15 18:18:44.702473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.872 [2024-04-15 18:18:44.702524] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.872 qpair failed and we were unable to recover it. 00:31:55.872 [2024-04-15 18:18:44.702726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.872 [2024-04-15 18:18:44.702898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.872 [2024-04-15 18:18:44.702921] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.872 qpair failed and we were unable to recover it. 00:31:55.872 [2024-04-15 18:18:44.703119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.872 [2024-04-15 18:18:44.703330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.872 [2024-04-15 18:18:44.703370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.872 qpair failed and we were unable to recover it. 
00:31:55.872 [2024-04-15 18:18:44.703706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.872 [2024-04-15 18:18:44.704013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.872 [2024-04-15 18:18:44.704035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.872 qpair failed and we were unable to recover it. 00:31:55.872 [2024-04-15 18:18:44.704216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.872 [2024-04-15 18:18:44.704369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.872 [2024-04-15 18:18:44.704410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.872 qpair failed and we were unable to recover it. 00:31:55.872 [2024-04-15 18:18:44.704670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.872 [2024-04-15 18:18:44.704914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.872 [2024-04-15 18:18:44.704957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.872 qpair failed and we were unable to recover it. 00:31:55.872 [2024-04-15 18:18:44.705176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.872 [2024-04-15 18:18:44.705414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.872 [2024-04-15 18:18:44.705465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.872 qpair failed and we were unable to recover it. 00:31:55.872 [2024-04-15 18:18:44.705722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.872 [2024-04-15 18:18:44.705931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.872 [2024-04-15 18:18:44.705962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.872 qpair failed and we were unable to recover it. 00:31:55.872 [2024-04-15 18:18:44.706254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.872 [2024-04-15 18:18:44.706458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.872 [2024-04-15 18:18:44.706508] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.872 qpair failed and we were unable to recover it. 00:31:55.872 [2024-04-15 18:18:44.706679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.872 [2024-04-15 18:18:44.706865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.872 [2024-04-15 18:18:44.706888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.872 qpair failed and we were unable to recover it. 
00:31:55.872 [2024-04-15 18:18:44.707100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.872 [2024-04-15 18:18:44.707311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.872 [2024-04-15 18:18:44.707354] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.872 qpair failed and we were unable to recover it. 00:31:55.872 [2024-04-15 18:18:44.707551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.872 [2024-04-15 18:18:44.707755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.872 [2024-04-15 18:18:44.707811] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.872 qpair failed and we were unable to recover it. 00:31:55.872 [2024-04-15 18:18:44.708028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.872 [2024-04-15 18:18:44.708228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.872 [2024-04-15 18:18:44.708253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.872 qpair failed and we were unable to recover it. 00:31:55.872 [2024-04-15 18:18:44.708463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.872 [2024-04-15 18:18:44.708702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.873 [2024-04-15 18:18:44.708752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.873 qpair failed and we were unable to recover it. 00:31:55.873 [2024-04-15 18:18:44.708997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.873 [2024-04-15 18:18:44.709226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.873 [2024-04-15 18:18:44.709250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.873 qpair failed and we were unable to recover it. 00:31:55.873 [2024-04-15 18:18:44.709448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.873 [2024-04-15 18:18:44.709659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.873 [2024-04-15 18:18:44.709713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.873 qpair failed and we were unable to recover it. 00:31:55.873 [2024-04-15 18:18:44.709948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.873 [2024-04-15 18:18:44.710209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.873 [2024-04-15 18:18:44.710234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.873 qpair failed and we were unable to recover it. 
00:31:55.873 [2024-04-15 18:18:44.710443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.873 [2024-04-15 18:18:44.710642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.873 [2024-04-15 18:18:44.710690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.873 qpair failed and we were unable to recover it. 00:31:55.873 [2024-04-15 18:18:44.710895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.873 [2024-04-15 18:18:44.711169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.873 [2024-04-15 18:18:44.711213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.873 qpair failed and we were unable to recover it. 00:31:55.873 [2024-04-15 18:18:44.711463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.873 [2024-04-15 18:18:44.711709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.873 [2024-04-15 18:18:44.711761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.873 qpair failed and we were unable to recover it. 00:31:55.873 [2024-04-15 18:18:44.711992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.873 [2024-04-15 18:18:44.712148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.873 [2024-04-15 18:18:44.712172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.873 qpair failed and we were unable to recover it. 00:31:55.873 [2024-04-15 18:18:44.712438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.873 [2024-04-15 18:18:44.712650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.873 [2024-04-15 18:18:44.712707] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.873 qpair failed and we were unable to recover it. 00:31:55.873 [2024-04-15 18:18:44.712960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.873 [2024-04-15 18:18:44.713153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.873 [2024-04-15 18:18:44.713178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.873 qpair failed and we were unable to recover it. 00:31:55.873 [2024-04-15 18:18:44.713393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.873 [2024-04-15 18:18:44.713596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.873 [2024-04-15 18:18:44.713646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.873 qpair failed and we were unable to recover it. 
00:31:55.873 [2024-04-15 18:18:44.713961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.873 [2024-04-15 18:18:44.714286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.873 [2024-04-15 18:18:44.714330] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.873 qpair failed and we were unable to recover it. 00:31:55.873 [2024-04-15 18:18:44.714596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.873 [2024-04-15 18:18:44.714862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.873 [2024-04-15 18:18:44.714916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.873 qpair failed and we were unable to recover it. 00:31:55.873 [2024-04-15 18:18:44.715112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.873 [2024-04-15 18:18:44.715324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.873 [2024-04-15 18:18:44.715375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.873 qpair failed and we were unable to recover it. 00:31:55.873 [2024-04-15 18:18:44.715605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.873 [2024-04-15 18:18:44.715782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.873 [2024-04-15 18:18:44.715833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.873 qpair failed and we were unable to recover it. 00:31:55.873 [2024-04-15 18:18:44.716036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.873 [2024-04-15 18:18:44.716227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.873 [2024-04-15 18:18:44.716250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.873 qpair failed and we were unable to recover it. 00:31:55.873 [2024-04-15 18:18:44.716418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.873 [2024-04-15 18:18:44.716705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.873 [2024-04-15 18:18:44.716756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.873 qpair failed and we were unable to recover it. 00:31:55.873 [2024-04-15 18:18:44.716986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.873 [2024-04-15 18:18:44.717189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.873 [2024-04-15 18:18:44.717213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.873 qpair failed and we were unable to recover it. 
00:31:55.873 [2024-04-15 18:18:44.717507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.873 [2024-04-15 18:18:44.717781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.873 [2024-04-15 18:18:44.717831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.873 qpair failed and we were unable to recover it. 00:31:55.873 [2024-04-15 18:18:44.718065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.873 [2024-04-15 18:18:44.718227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.873 [2024-04-15 18:18:44.718252] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.873 qpair failed and we were unable to recover it. 00:31:55.873 [2024-04-15 18:18:44.718456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.873 [2024-04-15 18:18:44.718646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.873 [2024-04-15 18:18:44.718702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.873 qpair failed and we were unable to recover it. 00:31:55.873 [2024-04-15 18:18:44.718945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.873 [2024-04-15 18:18:44.719236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.873 [2024-04-15 18:18:44.719262] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.873 qpair failed and we were unable to recover it. 00:31:55.873 [2024-04-15 18:18:44.719599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.873 [2024-04-15 18:18:44.719887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.873 [2024-04-15 18:18:44.719941] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.873 qpair failed and we were unable to recover it. 00:31:55.873 [2024-04-15 18:18:44.720259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.873 [2024-04-15 18:18:44.720606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.873 [2024-04-15 18:18:44.720656] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.873 qpair failed and we were unable to recover it. 00:31:55.873 [2024-04-15 18:18:44.720920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.873 [2024-04-15 18:18:44.721240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.873 [2024-04-15 18:18:44.721265] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.873 qpair failed and we were unable to recover it. 
00:31:55.873 [2024-04-15 18:18:44.721521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.873 [2024-04-15 18:18:44.721762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.873 [2024-04-15 18:18:44.721814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.873 qpair failed and we were unable to recover it. 00:31:55.873 [2024-04-15 18:18:44.721975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.873 [2024-04-15 18:18:44.722159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.873 [2024-04-15 18:18:44.722199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.873 qpair failed and we were unable to recover it. 00:31:55.873 [2024-04-15 18:18:44.722506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.873 [2024-04-15 18:18:44.722774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.873 [2024-04-15 18:18:44.722827] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.873 qpair failed and we were unable to recover it. 00:31:55.873 [2024-04-15 18:18:44.723045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.873 [2024-04-15 18:18:44.723266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.873 [2024-04-15 18:18:44.723290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.873 qpair failed and we were unable to recover it. 00:31:55.873 [2024-04-15 18:18:44.723475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.873 [2024-04-15 18:18:44.723687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.874 [2024-04-15 18:18:44.723737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.874 qpair failed and we were unable to recover it. 00:31:55.874 [2024-04-15 18:18:44.723931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.874 [2024-04-15 18:18:44.724066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.874 [2024-04-15 18:18:44.724090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.874 qpair failed and we were unable to recover it. 00:31:55.874 [2024-04-15 18:18:44.724287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.874 [2024-04-15 18:18:44.724519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.874 [2024-04-15 18:18:44.724568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.874 qpair failed and we were unable to recover it. 
00:31:55.874 [2024-04-15 18:18:44.724812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.874 [2024-04-15 18:18:44.725072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.874 [2024-04-15 18:18:44.725125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.874 qpair failed and we were unable to recover it. 00:31:55.874 [2024-04-15 18:18:44.725469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.874 [2024-04-15 18:18:44.725814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.874 [2024-04-15 18:18:44.725866] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.874 qpair failed and we were unable to recover it. 00:31:55.874 [2024-04-15 18:18:44.726120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.874 [2024-04-15 18:18:44.726366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.874 [2024-04-15 18:18:44.726390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.874 qpair failed and we were unable to recover it. 00:31:55.874 [2024-04-15 18:18:44.726635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.874 [2024-04-15 18:18:44.726891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.874 [2024-04-15 18:18:44.726942] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.874 qpair failed and we were unable to recover it. 00:31:55.874 [2024-04-15 18:18:44.727222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.874 [2024-04-15 18:18:44.727419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.874 [2024-04-15 18:18:44.727469] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.874 qpair failed and we were unable to recover it. 00:31:55.874 [2024-04-15 18:18:44.727665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.874 [2024-04-15 18:18:44.727887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.874 [2024-04-15 18:18:44.727938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.874 qpair failed and we were unable to recover it. 00:31:55.874 [2024-04-15 18:18:44.728156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.874 [2024-04-15 18:18:44.728409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.874 [2024-04-15 18:18:44.728460] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.874 qpair failed and we were unable to recover it. 
00:31:55.874 [2024-04-15 18:18:44.728781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.874 [2024-04-15 18:18:44.729089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.874 [2024-04-15 18:18:44.729113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.874 qpair failed and we were unable to recover it. 00:31:55.874 [2024-04-15 18:18:44.729311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.874 [2024-04-15 18:18:44.729507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.874 [2024-04-15 18:18:44.729560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.874 qpair failed and we were unable to recover it. 00:31:55.874 [2024-04-15 18:18:44.729775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.874 [2024-04-15 18:18:44.729990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.874 [2024-04-15 18:18:44.730022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.874 qpair failed and we were unable to recover it. 00:31:55.874 [2024-04-15 18:18:44.730311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.874 [2024-04-15 18:18:44.730518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.874 [2024-04-15 18:18:44.730568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.874 qpair failed and we were unable to recover it. 00:31:55.874 [2024-04-15 18:18:44.730842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.874 [2024-04-15 18:18:44.731095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.874 [2024-04-15 18:18:44.731119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.874 qpair failed and we were unable to recover it. 00:31:55.874 [2024-04-15 18:18:44.731320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.874 [2024-04-15 18:18:44.731525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.874 [2024-04-15 18:18:44.731577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.874 qpair failed and we were unable to recover it. 00:31:55.874 [2024-04-15 18:18:44.731823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.874 [2024-04-15 18:18:44.732091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.874 [2024-04-15 18:18:44.732134] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.874 qpair failed and we were unable to recover it. 
00:31:55.874 [2024-04-15 18:18:44.732332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.874 [2024-04-15 18:18:44.732579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.874 [2024-04-15 18:18:44.732630] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.874 qpair failed and we were unable to recover it. 00:31:55.874 [2024-04-15 18:18:44.732855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.874 [2024-04-15 18:18:44.733104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.874 [2024-04-15 18:18:44.733128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.874 qpair failed and we were unable to recover it. 00:31:55.874 [2024-04-15 18:18:44.733416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.874 [2024-04-15 18:18:44.733637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.874 [2024-04-15 18:18:44.733687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.874 qpair failed and we were unable to recover it. 00:31:55.874 [2024-04-15 18:18:44.733871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.874 [2024-04-15 18:18:44.734120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.874 [2024-04-15 18:18:44.734144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.874 qpair failed and we were unable to recover it. 00:31:55.874 [2024-04-15 18:18:44.734465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.874 [2024-04-15 18:18:44.734708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.874 [2024-04-15 18:18:44.734756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.874 qpair failed and we were unable to recover it. 00:31:55.874 [2024-04-15 18:18:44.734941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.874 [2024-04-15 18:18:44.735171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.874 [2024-04-15 18:18:44.735195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.874 qpair failed and we were unable to recover it. 00:31:55.874 [2024-04-15 18:18:44.735457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.874 [2024-04-15 18:18:44.735797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.874 [2024-04-15 18:18:44.735847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.874 qpair failed and we were unable to recover it. 
00:31:55.874 [2024-04-15 18:18:44.736089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.874 [2024-04-15 18:18:44.736263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.874 [2024-04-15 18:18:44.736286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.874 qpair failed and we were unable to recover it. 00:31:55.875 [2024-04-15 18:18:44.736542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.875 [2024-04-15 18:18:44.736754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.875 [2024-04-15 18:18:44.736807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.875 qpair failed and we were unable to recover it. 00:31:55.875 [2024-04-15 18:18:44.737015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.875 [2024-04-15 18:18:44.737183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.875 [2024-04-15 18:18:44.737208] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.875 qpair failed and we were unable to recover it. 00:31:55.875 [2024-04-15 18:18:44.737391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.875 [2024-04-15 18:18:44.737600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.875 [2024-04-15 18:18:44.737650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.875 qpair failed and we were unable to recover it. 00:31:55.875 [2024-04-15 18:18:44.737943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.875 [2024-04-15 18:18:44.738233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.875 [2024-04-15 18:18:44.738258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.875 qpair failed and we were unable to recover it. 00:31:55.875 [2024-04-15 18:18:44.738458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.875 [2024-04-15 18:18:44.738703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.875 [2024-04-15 18:18:44.738751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.875 qpair failed and we were unable to recover it. 00:31:55.875 [2024-04-15 18:18:44.738935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.875 [2024-04-15 18:18:44.739139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.875 [2024-04-15 18:18:44.739176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.875 qpair failed and we were unable to recover it. 
00:31:55.875 [2024-04-15 18:18:44.739366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.875 [2024-04-15 18:18:44.739552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.875 [2024-04-15 18:18:44.739607] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.875 qpair failed and we were unable to recover it. 00:31:55.875 [2024-04-15 18:18:44.739806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.875 [2024-04-15 18:18:44.740099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.875 [2024-04-15 18:18:44.740124] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.875 qpair failed and we were unable to recover it. 00:31:55.875 [2024-04-15 18:18:44.740371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.875 [2024-04-15 18:18:44.740580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.875 [2024-04-15 18:18:44.740638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.875 qpair failed and we were unable to recover it. 00:31:55.875 [2024-04-15 18:18:44.740887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.875 [2024-04-15 18:18:44.741065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.875 [2024-04-15 18:18:44.741103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.875 qpair failed and we were unable to recover it. 00:31:55.875 [2024-04-15 18:18:44.741290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.875 [2024-04-15 18:18:44.741472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.875 [2024-04-15 18:18:44.741537] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.875 qpair failed and we were unable to recover it. 00:31:55.875 [2024-04-15 18:18:44.741814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.875 [2024-04-15 18:18:44.742016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.875 [2024-04-15 18:18:44.742053] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.875 qpair failed and we were unable to recover it. 00:31:55.875 [2024-04-15 18:18:44.742298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.875 [2024-04-15 18:18:44.742529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.875 [2024-04-15 18:18:44.742578] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.875 qpair failed and we were unable to recover it. 
00:31:55.875 [2024-04-15 18:18:44.742841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.875 [2024-04-15 18:18:44.743052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.875 [2024-04-15 18:18:44.743101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.875 qpair failed and we were unable to recover it. 00:31:55.875 [2024-04-15 18:18:44.743308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.875 [2024-04-15 18:18:44.743529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.875 [2024-04-15 18:18:44.743586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.875 qpair failed and we were unable to recover it. 00:31:55.875 [2024-04-15 18:18:44.743905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.875 [2024-04-15 18:18:44.744227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.875 [2024-04-15 18:18:44.744252] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.875 qpair failed and we were unable to recover it. 00:31:55.875 [2024-04-15 18:18:44.744550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.875 [2024-04-15 18:18:44.744851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.875 [2024-04-15 18:18:44.744900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.875 qpair failed and we were unable to recover it. 00:31:55.875 [2024-04-15 18:18:44.745128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.875 [2024-04-15 18:18:44.745303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.875 [2024-04-15 18:18:44.745346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.875 qpair failed and we were unable to recover it. 00:31:55.875 [2024-04-15 18:18:44.745608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.875 [2024-04-15 18:18:44.745806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.875 [2024-04-15 18:18:44.745858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.875 qpair failed and we were unable to recover it. 00:31:55.875 [2024-04-15 18:18:44.746140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.875 [2024-04-15 18:18:44.746383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.875 [2024-04-15 18:18:44.746436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:55.875 qpair failed and we were unable to recover it. 
00:31:55.875 [2024-04-15 18:18:44.746732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.875 [2024-04-15 18:18:44.746951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.875 [2024-04-15 18:18:44.746974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:55.875 qpair failed and we were unable to recover it.
[... the same three-record sequence (two posix_sock_create connect() failures with errno = 111, one nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420", then "qpair failed and we were unable to recover it.") repeats for every reconnect attempt from 18:18:44.747 through 18:18:44.814; the duplicate records are elided here ...]
00:31:56.151 [2024-04-15 18:18:44.814491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:56.151 [2024-04-15 18:18:44.814681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:56.151 [2024-04-15 18:18:44.814725] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:56.151 qpair failed and we were unable to recover it.
00:31:56.151 [2024-04-15 18:18:44.814872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.151 [2024-04-15 18:18:44.815075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.151 [2024-04-15 18:18:44.815101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.151 qpair failed and we were unable to recover it. 00:31:56.151 [2024-04-15 18:18:44.815258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.151 [2024-04-15 18:18:44.815436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.151 [2024-04-15 18:18:44.815478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.151 qpair failed and we were unable to recover it. 00:31:56.151 [2024-04-15 18:18:44.815674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.151 [2024-04-15 18:18:44.815817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.151 [2024-04-15 18:18:44.815855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.151 qpair failed and we were unable to recover it. 00:31:56.151 [2024-04-15 18:18:44.816078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.151 [2024-04-15 18:18:44.816256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.151 [2024-04-15 18:18:44.816280] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.151 qpair failed and we were unable to recover it. 00:31:56.151 [2024-04-15 18:18:44.816451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.151 [2024-04-15 18:18:44.816635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.151 [2024-04-15 18:18:44.816679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.151 qpair failed and we were unable to recover it. 00:31:56.151 [2024-04-15 18:18:44.816876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.151 [2024-04-15 18:18:44.817070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.151 [2024-04-15 18:18:44.817094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.151 qpair failed and we were unable to recover it. 00:31:56.151 [2024-04-15 18:18:44.817323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.152 [2024-04-15 18:18:44.817523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.152 [2024-04-15 18:18:44.817568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.152 qpair failed and we were unable to recover it. 
00:31:56.152 [2024-04-15 18:18:44.817776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.152 [2024-04-15 18:18:44.817960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.152 [2024-04-15 18:18:44.817982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.152 qpair failed and we were unable to recover it. 00:31:56.152 [2024-04-15 18:18:44.818216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.152 [2024-04-15 18:18:44.818362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.152 [2024-04-15 18:18:44.818405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.152 qpair failed and we were unable to recover it. 00:31:56.152 [2024-04-15 18:18:44.818570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.152 [2024-04-15 18:18:44.818749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.152 [2024-04-15 18:18:44.818792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.152 qpair failed and we were unable to recover it. 00:31:56.152 [2024-04-15 18:18:44.819010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.152 [2024-04-15 18:18:44.819173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.152 [2024-04-15 18:18:44.819197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.152 qpair failed and we were unable to recover it. 00:31:56.152 [2024-04-15 18:18:44.819403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.152 [2024-04-15 18:18:44.819605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.152 [2024-04-15 18:18:44.819649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.152 qpair failed and we were unable to recover it. 00:31:56.152 [2024-04-15 18:18:44.819826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.152 [2024-04-15 18:18:44.820042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.152 [2024-04-15 18:18:44.820085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.152 qpair failed and we were unable to recover it. 00:31:56.152 [2024-04-15 18:18:44.820238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.152 [2024-04-15 18:18:44.820385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.152 [2024-04-15 18:18:44.820428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.152 qpair failed and we were unable to recover it. 
00:31:56.152 [2024-04-15 18:18:44.820628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.152 [2024-04-15 18:18:44.820869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.152 [2024-04-15 18:18:44.820911] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.152 qpair failed and we were unable to recover it. 00:31:56.152 [2024-04-15 18:18:44.821122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.152 [2024-04-15 18:18:44.821273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.152 [2024-04-15 18:18:44.821316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.152 qpair failed and we were unable to recover it. 00:31:56.152 [2024-04-15 18:18:44.821509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.152 [2024-04-15 18:18:44.821700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.152 [2024-04-15 18:18:44.821742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.152 qpair failed and we were unable to recover it. 00:31:56.152 [2024-04-15 18:18:44.821893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.152 [2024-04-15 18:18:44.822067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.152 [2024-04-15 18:18:44.822093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.152 qpair failed and we were unable to recover it. 00:31:56.152 [2024-04-15 18:18:44.822323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.152 [2024-04-15 18:18:44.822491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.152 [2024-04-15 18:18:44.822534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.152 qpair failed and we were unable to recover it. 00:31:56.152 [2024-04-15 18:18:44.822718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.152 [2024-04-15 18:18:44.822955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.152 [2024-04-15 18:18:44.822997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.152 qpair failed and we were unable to recover it. 00:31:56.152 [2024-04-15 18:18:44.823179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.152 [2024-04-15 18:18:44.823353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.152 [2024-04-15 18:18:44.823383] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.152 qpair failed and we were unable to recover it. 
00:31:56.152 [2024-04-15 18:18:44.823597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.152 [2024-04-15 18:18:44.823769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.152 [2024-04-15 18:18:44.823812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.152 qpair failed and we were unable to recover it. 00:31:56.152 [2024-04-15 18:18:44.824008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.152 [2024-04-15 18:18:44.824208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.152 [2024-04-15 18:18:44.824232] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.152 qpair failed and we were unable to recover it. 00:31:56.152 [2024-04-15 18:18:44.824431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.152 [2024-04-15 18:18:44.824607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.152 [2024-04-15 18:18:44.824648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.152 qpair failed and we were unable to recover it. 00:31:56.152 [2024-04-15 18:18:44.824857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.152 [2024-04-15 18:18:44.825076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.152 [2024-04-15 18:18:44.825101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.152 qpair failed and we were unable to recover it. 00:31:56.152 [2024-04-15 18:18:44.825309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.152 [2024-04-15 18:18:44.825532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.152 [2024-04-15 18:18:44.825575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.152 qpair failed and we were unable to recover it. 00:31:56.152 [2024-04-15 18:18:44.825740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.152 [2024-04-15 18:18:44.825946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.152 [2024-04-15 18:18:44.825970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.152 qpair failed and we were unable to recover it. 00:31:56.152 [2024-04-15 18:18:44.826156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.152 [2024-04-15 18:18:44.826348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.152 [2024-04-15 18:18:44.826391] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.152 qpair failed and we were unable to recover it. 
00:31:56.152 [2024-04-15 18:18:44.826566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.152 [2024-04-15 18:18:44.826767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.152 [2024-04-15 18:18:44.826811] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.152 qpair failed and we were unable to recover it. 00:31:56.152 [2024-04-15 18:18:44.826946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.152 [2024-04-15 18:18:44.827137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.152 [2024-04-15 18:18:44.827185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.152 qpair failed and we were unable to recover it. 00:31:56.152 [2024-04-15 18:18:44.827386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.152 [2024-04-15 18:18:44.827583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.152 [2024-04-15 18:18:44.827626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.152 qpair failed and we were unable to recover it. 00:31:56.152 [2024-04-15 18:18:44.827781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.152 [2024-04-15 18:18:44.827924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.152 [2024-04-15 18:18:44.827947] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.152 qpair failed and we were unable to recover it. 00:31:56.152 [2024-04-15 18:18:44.828106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.152 [2024-04-15 18:18:44.828265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.152 [2024-04-15 18:18:44.828302] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.152 qpair failed and we were unable to recover it. 00:31:56.152 [2024-04-15 18:18:44.828462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.152 [2024-04-15 18:18:44.828642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.152 [2024-04-15 18:18:44.828685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.152 qpair failed and we were unable to recover it. 00:31:56.152 [2024-04-15 18:18:44.828887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.152 [2024-04-15 18:18:44.829051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.152 [2024-04-15 18:18:44.829080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.152 qpair failed and we were unable to recover it. 
00:31:56.152 [2024-04-15 18:18:44.829256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.153 [2024-04-15 18:18:44.829415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.153 [2024-04-15 18:18:44.829460] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.153 qpair failed and we were unable to recover it. 00:31:56.153 [2024-04-15 18:18:44.829677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.153 [2024-04-15 18:18:44.829839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.153 [2024-04-15 18:18:44.829863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.153 qpair failed and we were unable to recover it. 00:31:56.153 [2024-04-15 18:18:44.830022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.153 [2024-04-15 18:18:44.830196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.153 [2024-04-15 18:18:44.830240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.153 qpair failed and we were unable to recover it. 00:31:56.153 [2024-04-15 18:18:44.830431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.153 [2024-04-15 18:18:44.830655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.153 [2024-04-15 18:18:44.830698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.153 qpair failed and we were unable to recover it. 00:31:56.153 [2024-04-15 18:18:44.830896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.153 [2024-04-15 18:18:44.831090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.153 [2024-04-15 18:18:44.831124] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.153 qpair failed and we were unable to recover it. 00:31:56.153 [2024-04-15 18:18:44.831286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.153 [2024-04-15 18:18:44.831492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.153 [2024-04-15 18:18:44.831536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.153 qpair failed and we were unable to recover it. 00:31:56.153 [2024-04-15 18:18:44.831724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.153 [2024-04-15 18:18:44.831910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.153 [2024-04-15 18:18:44.831933] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.153 qpair failed and we were unable to recover it. 
00:31:56.153 [2024-04-15 18:18:44.832114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.153 [2024-04-15 18:18:44.832268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.153 [2024-04-15 18:18:44.832309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.153 qpair failed and we were unable to recover it. 00:31:56.153 [2024-04-15 18:18:44.832462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.153 [2024-04-15 18:18:44.832648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.153 [2024-04-15 18:18:44.832692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.153 qpair failed and we were unable to recover it. 00:31:56.153 [2024-04-15 18:18:44.832876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.153 [2024-04-15 18:18:44.833086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.153 [2024-04-15 18:18:44.833111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.153 qpair failed and we were unable to recover it. 00:31:56.153 [2024-04-15 18:18:44.833275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.153 [2024-04-15 18:18:44.833476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.153 [2024-04-15 18:18:44.833519] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.153 qpair failed and we were unable to recover it. 00:31:56.153 [2024-04-15 18:18:44.833708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.153 [2024-04-15 18:18:44.833852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.153 [2024-04-15 18:18:44.833891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.153 qpair failed and we were unable to recover it. 00:31:56.153 [2024-04-15 18:18:44.834042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.153 [2024-04-15 18:18:44.834190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.153 [2024-04-15 18:18:44.834233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.153 qpair failed and we were unable to recover it. 00:31:56.153 [2024-04-15 18:18:44.834418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.153 [2024-04-15 18:18:44.834606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.153 [2024-04-15 18:18:44.834635] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.153 qpair failed and we were unable to recover it. 
00:31:56.153 [2024-04-15 18:18:44.834841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.153 [2024-04-15 18:18:44.835010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.153 [2024-04-15 18:18:44.835034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.153 qpair failed and we were unable to recover it. 00:31:56.153 [2024-04-15 18:18:44.835272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.153 [2024-04-15 18:18:44.835516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.153 [2024-04-15 18:18:44.835563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.153 qpair failed and we were unable to recover it. 00:31:56.153 [2024-04-15 18:18:44.835755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.153 [2024-04-15 18:18:44.835940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.153 [2024-04-15 18:18:44.835962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.153 qpair failed and we were unable to recover it. 00:31:56.153 [2024-04-15 18:18:44.836151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.153 [2024-04-15 18:18:44.836312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.153 [2024-04-15 18:18:44.836355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.153 qpair failed and we were unable to recover it. 00:31:56.153 [2024-04-15 18:18:44.836502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.153 [2024-04-15 18:18:44.836648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.153 [2024-04-15 18:18:44.836693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.153 qpair failed and we were unable to recover it. 00:31:56.153 [2024-04-15 18:18:44.836898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.153 [2024-04-15 18:18:44.837094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.153 [2024-04-15 18:18:44.837119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.153 qpair failed and we were unable to recover it. 00:31:56.153 [2024-04-15 18:18:44.837289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.153 [2024-04-15 18:18:44.837447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.153 [2024-04-15 18:18:44.837471] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.153 qpair failed and we were unable to recover it. 
00:31:56.153 [2024-04-15 18:18:44.837652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.153 [2024-04-15 18:18:44.837822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.153 [2024-04-15 18:18:44.837860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.153 qpair failed and we were unable to recover it. 00:31:56.153 [2024-04-15 18:18:44.838030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.153 [2024-04-15 18:18:44.838203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.153 [2024-04-15 18:18:44.838246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.153 qpair failed and we were unable to recover it. 00:31:56.153 [2024-04-15 18:18:44.838451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.153 [2024-04-15 18:18:44.838601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.153 [2024-04-15 18:18:44.838643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.153 qpair failed and we were unable to recover it. 00:31:56.153 [2024-04-15 18:18:44.838822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.153 [2024-04-15 18:18:44.839010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.153 [2024-04-15 18:18:44.839034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.153 qpair failed and we were unable to recover it. 00:31:56.153 [2024-04-15 18:18:44.839225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.153 [2024-04-15 18:18:44.839409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.153 [2024-04-15 18:18:44.839451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.153 qpair failed and we were unable to recover it. 00:31:56.153 [2024-04-15 18:18:44.839623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.153 [2024-04-15 18:18:44.839811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.153 [2024-04-15 18:18:44.839856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.153 qpair failed and we were unable to recover it. 00:31:56.153 [2024-04-15 18:18:44.840031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.153 [2024-04-15 18:18:44.840240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.153 [2024-04-15 18:18:44.840283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.153 qpair failed and we were unable to recover it. 
00:31:56.153 [2024-04-15 18:18:44.840488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.153 [2024-04-15 18:18:44.840676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.153 [2024-04-15 18:18:44.840705] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.154 qpair failed and we were unable to recover it. 00:31:56.154 [2024-04-15 18:18:44.840868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.154 [2024-04-15 18:18:44.841034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.154 [2024-04-15 18:18:44.841080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.154 qpair failed and we were unable to recover it. 00:31:56.154 [2024-04-15 18:18:44.841244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.154 [2024-04-15 18:18:44.841460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.154 [2024-04-15 18:18:44.841503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.154 qpair failed and we were unable to recover it. 00:31:56.154 [2024-04-15 18:18:44.841715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.154 [2024-04-15 18:18:44.841916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.154 [2024-04-15 18:18:44.841940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.154 qpair failed and we were unable to recover it. 00:31:56.154 [2024-04-15 18:18:44.842128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.154 [2024-04-15 18:18:44.842282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.154 [2024-04-15 18:18:44.842323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.154 qpair failed and we were unable to recover it. 00:31:56.154 [2024-04-15 18:18:44.842522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.154 [2024-04-15 18:18:44.842695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.154 [2024-04-15 18:18:44.842740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.154 qpair failed and we were unable to recover it. 00:31:56.154 [2024-04-15 18:18:44.842898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.154 [2024-04-15 18:18:44.843030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.154 [2024-04-15 18:18:44.843054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.154 qpair failed and we were unable to recover it. 
00:31:56.154 [2024-04-15 18:18:44.843245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.154 [2024-04-15 18:18:44.843433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.154 [2024-04-15 18:18:44.843476] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.154 qpair failed and we were unable to recover it. 00:31:56.154 [2024-04-15 18:18:44.843614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.154 [2024-04-15 18:18:44.843809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.154 [2024-04-15 18:18:44.843855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.154 qpair failed and we were unable to recover it. 00:31:56.154 [2024-04-15 18:18:44.844018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.154 [2024-04-15 18:18:44.844248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.154 [2024-04-15 18:18:44.844292] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.154 qpair failed and we were unable to recover it. 00:31:56.154 [2024-04-15 18:18:44.844507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.154 [2024-04-15 18:18:44.844701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.154 [2024-04-15 18:18:44.844753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.154 qpair failed and we were unable to recover it. 00:31:56.154 [2024-04-15 18:18:44.844980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.154 [2024-04-15 18:18:44.845162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.154 [2024-04-15 18:18:44.845186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.154 qpair failed and we were unable to recover it. 00:31:56.154 [2024-04-15 18:18:44.845390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.154 [2024-04-15 18:18:44.845553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.154 [2024-04-15 18:18:44.845595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.154 qpair failed and we were unable to recover it. 00:31:56.154 [2024-04-15 18:18:44.845786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.154 [2024-04-15 18:18:44.846007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.154 [2024-04-15 18:18:44.846046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.154 qpair failed and we were unable to recover it. 
00:31:56.154 [2024-04-15 18:18:44.846203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.154 [2024-04-15 18:18:44.846450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.154 [2024-04-15 18:18:44.846491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.154 qpair failed and we were unable to recover it. 00:31:56.154 [2024-04-15 18:18:44.846689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.154 [2024-04-15 18:18:44.846851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.154 [2024-04-15 18:18:44.846895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.154 qpair failed and we were unable to recover it. 00:31:56.154 [2024-04-15 18:18:44.847070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.154 [2024-04-15 18:18:44.847248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.154 [2024-04-15 18:18:44.847291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.154 qpair failed and we were unable to recover it. 00:31:56.154 [2024-04-15 18:18:44.847453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.154 [2024-04-15 18:18:44.847645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.154 [2024-04-15 18:18:44.847690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.154 qpair failed and we were unable to recover it. 00:31:56.154 [2024-04-15 18:18:44.847918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.154 [2024-04-15 18:18:44.848095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.154 [2024-04-15 18:18:44.848138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.154 qpair failed and we were unable to recover it. 00:31:56.154 [2024-04-15 18:18:44.848294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.154 [2024-04-15 18:18:44.848488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.154 [2024-04-15 18:18:44.848532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.154 qpair failed and we were unable to recover it. 00:31:56.154 [2024-04-15 18:18:44.848674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.154 [2024-04-15 18:18:44.848821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.154 [2024-04-15 18:18:44.848846] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.154 qpair failed and we were unable to recover it. 
00:31:56.154 [2024-04-15 18:18:44.849017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.154 [2024-04-15 18:18:44.849213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.154 [2024-04-15 18:18:44.849243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.154 qpair failed and we were unable to recover it. 00:31:56.154 [2024-04-15 18:18:44.849454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.154 [2024-04-15 18:18:44.849640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.154 [2024-04-15 18:18:44.849682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.154 qpair failed and we were unable to recover it. 00:31:56.154 [2024-04-15 18:18:44.849906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.154 [2024-04-15 18:18:44.850069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.154 [2024-04-15 18:18:44.850098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.154 qpair failed and we were unable to recover it. 00:31:56.154 [2024-04-15 18:18:44.850319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.154 [2024-04-15 18:18:44.850508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.154 [2024-04-15 18:18:44.850537] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.154 qpair failed and we were unable to recover it. 00:31:56.154 [2024-04-15 18:18:44.850757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.154 [2024-04-15 18:18:44.850940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.154 [2024-04-15 18:18:44.850964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.154 qpair failed and we were unable to recover it. 00:31:56.154 [2024-04-15 18:18:44.851111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.154 [2024-04-15 18:18:44.851307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.154 [2024-04-15 18:18:44.851351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.154 qpair failed and we were unable to recover it. 00:31:56.154 [2024-04-15 18:18:44.851525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.154 [2024-04-15 18:18:44.851712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.154 [2024-04-15 18:18:44.851756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.154 qpair failed and we were unable to recover it. 
00:31:56.154 [2024-04-15 18:18:44.851964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.154 [2024-04-15 18:18:44.852146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.154 [2024-04-15 18:18:44.852192] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.154 qpair failed and we were unable to recover it. 00:31:56.154 [2024-04-15 18:18:44.852387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.154 [2024-04-15 18:18:44.852602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.155 [2024-04-15 18:18:44.852645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.155 qpair failed and we were unable to recover it. 00:31:56.155 [2024-04-15 18:18:44.852867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.155 [2024-04-15 18:18:44.853056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.155 [2024-04-15 18:18:44.853094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.155 qpair failed and we were unable to recover it. 00:31:56.155 [2024-04-15 18:18:44.853277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.155 [2024-04-15 18:18:44.853431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.155 [2024-04-15 18:18:44.853473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.155 qpair failed and we were unable to recover it. 00:31:56.155 [2024-04-15 18:18:44.853612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.155 [2024-04-15 18:18:44.853826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.155 [2024-04-15 18:18:44.853866] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.155 qpair failed and we were unable to recover it. 00:31:56.155 [2024-04-15 18:18:44.854039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.155 [2024-04-15 18:18:44.854180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.155 [2024-04-15 18:18:44.854223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.155 qpair failed and we were unable to recover it. 00:31:56.155 [2024-04-15 18:18:44.854401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.155 [2024-04-15 18:18:44.854579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.155 [2024-04-15 18:18:44.854621] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.155 qpair failed and we were unable to recover it. 
00:31:56.155 [2024-04-15 18:18:44.854759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.155 [2024-04-15 18:18:44.854998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.155 [2024-04-15 18:18:44.855022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.155 qpair failed and we were unable to recover it.
[... the same three-message error sequence (two posix.c:1037:posix_sock_create connect() failures with errno = 111, then an nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock sock connection error for tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it.") repeats continuously with only the timestamps advancing, from [2024-04-15 18:18:44.854759] through [2024-04-15 18:18:44.926000] (elapsed markers 00:31:56.155 to 00:31:56.161) ...]
00:31:56.161 [2024-04-15 18:18:44.925789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.161 [2024-04-15 18:18:44.925977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.161 [2024-04-15 18:18:44.926000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.161 qpair failed and we were unable to recover it.
00:31:56.161 [2024-04-15 18:18:44.926241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.161 [2024-04-15 18:18:44.926406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.161 [2024-04-15 18:18:44.926449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.161 qpair failed and we were unable to recover it. 00:31:56.161 [2024-04-15 18:18:44.926644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.161 [2024-04-15 18:18:44.926849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.161 [2024-04-15 18:18:44.926890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.161 qpair failed and we were unable to recover it. 00:31:56.161 [2024-04-15 18:18:44.927118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.161 [2024-04-15 18:18:44.927345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.161 [2024-04-15 18:18:44.927382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.161 qpair failed and we were unable to recover it. 00:31:56.161 [2024-04-15 18:18:44.927537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.161 [2024-04-15 18:18:44.927726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.161 [2024-04-15 18:18:44.927767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.161 qpair failed and we were unable to recover it. 00:31:56.161 [2024-04-15 18:18:44.927932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.161 [2024-04-15 18:18:44.928096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.161 [2024-04-15 18:18:44.928129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.161 qpair failed and we were unable to recover it. 00:31:56.161 [2024-04-15 18:18:44.928395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.161 [2024-04-15 18:18:44.928692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.161 [2024-04-15 18:18:44.928739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.161 qpair failed and we were unable to recover it. 00:31:56.161 [2024-04-15 18:18:44.928974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.161 [2024-04-15 18:18:44.929130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.161 [2024-04-15 18:18:44.929169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.161 qpair failed and we were unable to recover it. 
00:31:56.161 [2024-04-15 18:18:44.929440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.161 [2024-04-15 18:18:44.929689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.161 [2024-04-15 18:18:44.929731] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.161 qpair failed and we were unable to recover it. 00:31:56.161 [2024-04-15 18:18:44.929924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.161 [2024-04-15 18:18:44.930129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.161 [2024-04-15 18:18:44.930154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.161 qpair failed and we were unable to recover it. 00:31:56.161 [2024-04-15 18:18:44.930305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.161 [2024-04-15 18:18:44.930501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.161 [2024-04-15 18:18:44.930543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.161 qpair failed and we were unable to recover it. 00:31:56.161 [2024-04-15 18:18:44.930780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.161 [2024-04-15 18:18:44.931002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.161 [2024-04-15 18:18:44.931025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.161 qpair failed and we were unable to recover it. 00:31:56.161 [2024-04-15 18:18:44.931210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.161 [2024-04-15 18:18:44.931483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.161 [2024-04-15 18:18:44.931524] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.161 qpair failed and we were unable to recover it. 00:31:56.161 [2024-04-15 18:18:44.931749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.161 [2024-04-15 18:18:44.932004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.161 [2024-04-15 18:18:44.932027] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.161 qpair failed and we were unable to recover it. 00:31:56.161 [2024-04-15 18:18:44.932244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.161 [2024-04-15 18:18:44.932405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.161 [2024-04-15 18:18:44.932446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.161 qpair failed and we were unable to recover it. 
00:31:56.161 [2024-04-15 18:18:44.932651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.161 [2024-04-15 18:18:44.932910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.161 [2024-04-15 18:18:44.932952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.161 qpair failed and we were unable to recover it. 00:31:56.161 [2024-04-15 18:18:44.933282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.161 [2024-04-15 18:18:44.933525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.161 [2024-04-15 18:18:44.933571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.161 qpair failed and we were unable to recover it. 00:31:56.161 [2024-04-15 18:18:44.933716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.161 [2024-04-15 18:18:44.933944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.161 [2024-04-15 18:18:44.933967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.161 qpair failed and we were unable to recover it. 00:31:56.161 [2024-04-15 18:18:44.934145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.161 [2024-04-15 18:18:44.934391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.161 [2024-04-15 18:18:44.934432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.161 qpair failed and we were unable to recover it. 00:31:56.161 [2024-04-15 18:18:44.934622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.161 [2024-04-15 18:18:44.934827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.161 [2024-04-15 18:18:44.934880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.161 qpair failed and we were unable to recover it. 00:31:56.161 [2024-04-15 18:18:44.935161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.161 [2024-04-15 18:18:44.935442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.161 [2024-04-15 18:18:44.935485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.161 qpair failed and we were unable to recover it. 00:31:56.161 [2024-04-15 18:18:44.935692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.161 [2024-04-15 18:18:44.935910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.161 [2024-04-15 18:18:44.935951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.161 qpair failed and we were unable to recover it. 
00:31:56.161 [2024-04-15 18:18:44.936153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.161 [2024-04-15 18:18:44.936340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.161 [2024-04-15 18:18:44.936369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.161 qpair failed and we were unable to recover it. 00:31:56.161 [2024-04-15 18:18:44.936559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.161 [2024-04-15 18:18:44.936839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.161 [2024-04-15 18:18:44.936881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.161 qpair failed and we were unable to recover it. 00:31:56.161 [2024-04-15 18:18:44.937111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.161 [2024-04-15 18:18:44.937303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.161 [2024-04-15 18:18:44.937346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.161 qpair failed and we were unable to recover it. 00:31:56.161 [2024-04-15 18:18:44.937524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.161 [2024-04-15 18:18:44.937719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.161 [2024-04-15 18:18:44.937761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.161 qpair failed and we were unable to recover it. 00:31:56.161 [2024-04-15 18:18:44.937988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.161 [2024-04-15 18:18:44.938143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.161 [2024-04-15 18:18:44.938173] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.161 qpair failed and we were unable to recover it. 00:31:56.161 [2024-04-15 18:18:44.938386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.162 [2024-04-15 18:18:44.938612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.162 [2024-04-15 18:18:44.938654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.162 qpair failed and we were unable to recover it. 00:31:56.162 [2024-04-15 18:18:44.938930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.162 [2024-04-15 18:18:44.939174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.162 [2024-04-15 18:18:44.939199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.162 qpair failed and we were unable to recover it. 
00:31:56.162 [2024-04-15 18:18:44.939437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.162 [2024-04-15 18:18:44.939637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.162 [2024-04-15 18:18:44.939679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.162 qpair failed and we were unable to recover it. 00:31:56.162 [2024-04-15 18:18:44.939915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.162 [2024-04-15 18:18:44.940127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.162 [2024-04-15 18:18:44.940164] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.162 qpair failed and we were unable to recover it. 00:31:56.162 [2024-04-15 18:18:44.940322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.162 [2024-04-15 18:18:44.940510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.162 [2024-04-15 18:18:44.940562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.162 qpair failed and we were unable to recover it. 00:31:56.162 [2024-04-15 18:18:44.940885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.162 [2024-04-15 18:18:44.941189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.162 [2024-04-15 18:18:44.941214] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.162 qpair failed and we were unable to recover it. 00:31:56.162 [2024-04-15 18:18:44.941470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.162 [2024-04-15 18:18:44.941684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.162 [2024-04-15 18:18:44.941727] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.162 qpair failed and we were unable to recover it. 00:31:56.162 [2024-04-15 18:18:44.941885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.162 [2024-04-15 18:18:44.942128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.162 [2024-04-15 18:18:44.942166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.162 qpair failed and we were unable to recover it. 00:31:56.162 [2024-04-15 18:18:44.942383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.162 [2024-04-15 18:18:44.942656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.162 [2024-04-15 18:18:44.942700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.162 qpair failed and we were unable to recover it. 
00:31:56.162 [2024-04-15 18:18:44.942929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.162 [2024-04-15 18:18:44.943097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.162 [2024-04-15 18:18:44.943135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.162 qpair failed and we were unable to recover it. 00:31:56.162 [2024-04-15 18:18:44.943338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.162 [2024-04-15 18:18:44.943516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.162 [2024-04-15 18:18:44.943558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.162 qpair failed and we were unable to recover it. 00:31:56.162 [2024-04-15 18:18:44.943769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.162 [2024-04-15 18:18:44.943961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.162 [2024-04-15 18:18:44.943984] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.162 qpair failed and we were unable to recover it. 00:31:56.162 [2024-04-15 18:18:44.944260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.162 [2024-04-15 18:18:44.944474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.162 [2024-04-15 18:18:44.944516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.162 qpair failed and we were unable to recover it. 00:31:56.162 [2024-04-15 18:18:44.944720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.162 [2024-04-15 18:18:44.944945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.162 [2024-04-15 18:18:44.944968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.162 qpair failed and we were unable to recover it. 00:31:56.162 [2024-04-15 18:18:44.945230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.162 [2024-04-15 18:18:44.945435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.162 [2024-04-15 18:18:44.945477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.162 qpair failed and we were unable to recover it. 00:31:56.162 [2024-04-15 18:18:44.945739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.162 [2024-04-15 18:18:44.945959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.162 [2024-04-15 18:18:44.945987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.162 qpair failed and we were unable to recover it. 
00:31:56.162 [2024-04-15 18:18:44.946190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.162 [2024-04-15 18:18:44.946372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.162 [2024-04-15 18:18:44.946411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.162 qpair failed and we were unable to recover it. 00:31:56.162 [2024-04-15 18:18:44.946679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.162 [2024-04-15 18:18:44.946921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.162 [2024-04-15 18:18:44.946962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.162 qpair failed and we were unable to recover it. 00:31:56.162 [2024-04-15 18:18:44.947215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.162 [2024-04-15 18:18:44.947431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.162 [2024-04-15 18:18:44.947472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.162 qpair failed and we were unable to recover it. 00:31:56.162 [2024-04-15 18:18:44.947685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.162 [2024-04-15 18:18:44.947950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.162 [2024-04-15 18:18:44.947992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.162 qpair failed and we were unable to recover it. 00:31:56.162 [2024-04-15 18:18:44.948159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.162 [2024-04-15 18:18:44.948334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.162 [2024-04-15 18:18:44.948376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.162 qpair failed and we were unable to recover it. 00:31:56.162 [2024-04-15 18:18:44.948629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.162 [2024-04-15 18:18:44.948939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.162 [2024-04-15 18:18:44.948981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.162 qpair failed and we were unable to recover it. 00:31:56.162 [2024-04-15 18:18:44.949184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.162 [2024-04-15 18:18:44.949414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.162 [2024-04-15 18:18:44.949457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.162 qpair failed and we were unable to recover it. 
00:31:56.162 [2024-04-15 18:18:44.949696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.162 [2024-04-15 18:18:44.949897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.162 [2024-04-15 18:18:44.949920] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.162 qpair failed and we were unable to recover it. 00:31:56.162 [2024-04-15 18:18:44.950099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.162 [2024-04-15 18:18:44.950319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.162 [2024-04-15 18:18:44.950365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.162 qpair failed and we were unable to recover it. 00:31:56.162 [2024-04-15 18:18:44.950556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.162 [2024-04-15 18:18:44.950812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.162 [2024-04-15 18:18:44.950854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.163 qpair failed and we were unable to recover it. 00:31:56.163 [2024-04-15 18:18:44.951089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.163 [2024-04-15 18:18:44.951259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.163 [2024-04-15 18:18:44.951303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.163 qpair failed and we were unable to recover it. 00:31:56.163 [2024-04-15 18:18:44.951449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.163 [2024-04-15 18:18:44.951651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.163 [2024-04-15 18:18:44.951693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.163 qpair failed and we were unable to recover it. 00:31:56.163 [2024-04-15 18:18:44.951927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.163 [2024-04-15 18:18:44.952150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.163 [2024-04-15 18:18:44.952190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.163 qpair failed and we were unable to recover it. 00:31:56.163 [2024-04-15 18:18:44.952346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.163 [2024-04-15 18:18:44.952580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.163 [2024-04-15 18:18:44.952622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.163 qpair failed and we were unable to recover it. 
00:31:56.163 [2024-04-15 18:18:44.952898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.163 [2024-04-15 18:18:44.953134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.163 [2024-04-15 18:18:44.953176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.163 qpair failed and we were unable to recover it. 00:31:56.163 [2024-04-15 18:18:44.953331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.163 [2024-04-15 18:18:44.953618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.163 [2024-04-15 18:18:44.953642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.163 qpair failed and we were unable to recover it. 00:31:56.163 [2024-04-15 18:18:44.953928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.163 [2024-04-15 18:18:44.954142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.163 [2024-04-15 18:18:44.954185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.163 qpair failed and we were unable to recover it. 00:31:56.163 [2024-04-15 18:18:44.954398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.163 [2024-04-15 18:18:44.954571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.163 [2024-04-15 18:18:44.954613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.163 qpair failed and we were unable to recover it. 00:31:56.163 [2024-04-15 18:18:44.954813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.163 [2024-04-15 18:18:44.955010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.163 [2024-04-15 18:18:44.955033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.163 qpair failed and we were unable to recover it. 00:31:56.163 [2024-04-15 18:18:44.955222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.163 [2024-04-15 18:18:44.955460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.163 [2024-04-15 18:18:44.955503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.163 qpair failed and we were unable to recover it. 00:31:56.163 [2024-04-15 18:18:44.955792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.163 [2024-04-15 18:18:44.956084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.163 [2024-04-15 18:18:44.956133] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.163 qpair failed and we were unable to recover it. 
00:31:56.163 [2024-04-15 18:18:44.956361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.163 [2024-04-15 18:18:44.956603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.163 [2024-04-15 18:18:44.956645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.163 qpair failed and we were unable to recover it. 00:31:56.163 [2024-04-15 18:18:44.956822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.163 [2024-04-15 18:18:44.957037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.163 [2024-04-15 18:18:44.957087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.163 qpair failed and we were unable to recover it. 00:31:56.163 [2024-04-15 18:18:44.957225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.163 [2024-04-15 18:18:44.957395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.163 [2024-04-15 18:18:44.957438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.163 qpair failed and we were unable to recover it. 00:31:56.163 [2024-04-15 18:18:44.957607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.163 [2024-04-15 18:18:44.957813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.163 [2024-04-15 18:18:44.957855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.163 qpair failed and we were unable to recover it. 00:31:56.163 [2024-04-15 18:18:44.957977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.163 [2024-04-15 18:18:44.958175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.163 [2024-04-15 18:18:44.958201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.163 qpair failed and we were unable to recover it. 00:31:56.163 [2024-04-15 18:18:44.958435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.163 [2024-04-15 18:18:44.958658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.163 [2024-04-15 18:18:44.958700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.163 qpair failed and we were unable to recover it. 00:31:56.163 [2024-04-15 18:18:44.958901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.163 [2024-04-15 18:18:44.959189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.163 [2024-04-15 18:18:44.959232] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.163 qpair failed and we were unable to recover it. 
00:31:56.163 [2024-04-15 18:18:44.959496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.163 [2024-04-15 18:18:44.959739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.163 [2024-04-15 18:18:44.959781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.163 qpair failed and we were unable to recover it. 00:31:56.163 [2024-04-15 18:18:44.959993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.163 [2024-04-15 18:18:44.960196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.163 [2024-04-15 18:18:44.960221] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.163 qpair failed and we were unable to recover it. 00:31:56.163 [2024-04-15 18:18:44.960455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.163 [2024-04-15 18:18:44.960648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.163 [2024-04-15 18:18:44.960689] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.163 qpair failed and we were unable to recover it. 00:31:56.163 [2024-04-15 18:18:44.960888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.163 [2024-04-15 18:18:44.961119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.163 [2024-04-15 18:18:44.961144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.163 qpair failed and we were unable to recover it. 00:31:56.163 [2024-04-15 18:18:44.961398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.163 [2024-04-15 18:18:44.961572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.163 [2024-04-15 18:18:44.961614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.163 qpair failed and we were unable to recover it. 00:31:56.163 [2024-04-15 18:18:44.961786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.163 [2024-04-15 18:18:44.961988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.163 [2024-04-15 18:18:44.962011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.163 qpair failed and we were unable to recover it. 00:31:56.163 [2024-04-15 18:18:44.962278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.163 [2024-04-15 18:18:44.962499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.163 [2024-04-15 18:18:44.962541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.163 qpair failed and we were unable to recover it. 
00:31:56.163 [2024-04-15 18:18:44.962802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.163 [2024-04-15 18:18:44.963036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.163 [2024-04-15 18:18:44.963080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.163 qpair failed and we were unable to recover it. 00:31:56.163 [2024-04-15 18:18:44.963264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.163 [2024-04-15 18:18:44.963468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.163 [2024-04-15 18:18:44.963521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.163 qpair failed and we were unable to recover it. 00:31:56.163 [2024-04-15 18:18:44.963734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.163 [2024-04-15 18:18:44.963927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.163 [2024-04-15 18:18:44.963949] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.163 qpair failed and we were unable to recover it. 00:31:56.164 [2024-04-15 18:18:44.964096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.164 [2024-04-15 18:18:44.964258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.164 [2024-04-15 18:18:44.964300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.164 qpair failed and we were unable to recover it. 00:31:56.164 [2024-04-15 18:18:44.964445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.164 [2024-04-15 18:18:44.964675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.164 [2024-04-15 18:18:44.964717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.164 qpair failed and we were unable to recover it. 00:31:56.164 [2024-04-15 18:18:44.965091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.164 [2024-04-15 18:18:44.965335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.164 [2024-04-15 18:18:44.965377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.164 qpair failed and we were unable to recover it. 00:31:56.164 [2024-04-15 18:18:44.965589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.164 [2024-04-15 18:18:44.965805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.164 [2024-04-15 18:18:44.965848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.164 qpair failed and we were unable to recover it. 
00:31:56.164 [2024-04-15 18:18:44.966026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.164 [2024-04-15 18:18:44.966230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.164 [2024-04-15 18:18:44.966254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.164 qpair failed and we were unable to recover it. 00:31:56.164 [2024-04-15 18:18:44.966465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.164 [2024-04-15 18:18:44.966720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.164 [2024-04-15 18:18:44.966762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.164 qpair failed and we were unable to recover it. 00:31:56.164 [2024-04-15 18:18:44.966956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.164 [2024-04-15 18:18:44.967127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.164 [2024-04-15 18:18:44.967151] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.164 qpair failed and we were unable to recover it. 00:31:56.164 [2024-04-15 18:18:44.967361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.164 [2024-04-15 18:18:44.967599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.164 [2024-04-15 18:18:44.967640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.164 qpair failed and we were unable to recover it. 00:31:56.164 [2024-04-15 18:18:44.967835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.164 [2024-04-15 18:18:44.968020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.164 [2024-04-15 18:18:44.968043] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.164 qpair failed and we were unable to recover it. 00:31:56.164 [2024-04-15 18:18:44.968213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.164 [2024-04-15 18:18:44.968423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.164 [2024-04-15 18:18:44.968464] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.164 qpair failed and we were unable to recover it. 00:31:56.164 [2024-04-15 18:18:44.968707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.164 [2024-04-15 18:18:44.968883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.164 [2024-04-15 18:18:44.968925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.164 qpair failed and we were unable to recover it. 
00:31:56.164 [2024-04-15 18:18:44.969182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.164 [2024-04-15 18:18:44.969449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.164 [2024-04-15 18:18:44.969492] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.164 qpair failed and we were unable to recover it. 00:31:56.164 [2024-04-15 18:18:44.969650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.164 [2024-04-15 18:18:44.969854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.164 [2024-04-15 18:18:44.969900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.164 qpair failed and we were unable to recover it. 00:31:56.164 [2024-04-15 18:18:44.970145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.164 [2024-04-15 18:18:44.970447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.164 [2024-04-15 18:18:44.970490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.164 qpair failed and we were unable to recover it. 00:31:56.164 [2024-04-15 18:18:44.970686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.164 [2024-04-15 18:18:44.970910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.164 [2024-04-15 18:18:44.970934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.164 qpair failed and we were unable to recover it. 00:31:56.164 [2024-04-15 18:18:44.971170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.164 [2024-04-15 18:18:44.971419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.164 [2024-04-15 18:18:44.971461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.164 qpair failed and we were unable to recover it. 00:31:56.164 [2024-04-15 18:18:44.971773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.164 [2024-04-15 18:18:44.972036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.164 [2024-04-15 18:18:44.972087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.164 qpair failed and we were unable to recover it. 00:31:56.164 [2024-04-15 18:18:44.972307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.164 [2024-04-15 18:18:44.972523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.164 [2024-04-15 18:18:44.972565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.164 qpair failed and we were unable to recover it. 
00:31:56.164 [2024-04-15 18:18:44.972830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:56.164 [2024-04-15 18:18:44.973010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:56.164 [2024-04-15 18:18:44.973032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:56.164 qpair failed and we were unable to recover it.
00:31:56.164 [2024-04-15 18:18:44.973308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:56.164 [2024-04-15 18:18:44.973453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:56.164 [2024-04-15 18:18:44.973481] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:56.164 qpair failed and we were unable to recover it.
[... the same four-line failure pattern (two "connect() failed, errno = 111" errors from posix.c:1037:posix_sock_create, one "sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420" from nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock, then "qpair failed and we were unable to recover it.") repeats for every connection attempt timestamped from 18:18:44.973668 through 18:18:45.042345 ...]
00:31:56.170 [2024-04-15 18:18:45.042502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:56.170 [2024-04-15 18:18:45.042735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:56.170 [2024-04-15 18:18:45.042779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:56.170 qpair failed and we were unable to recover it.
00:31:56.170 [2024-04-15 18:18:45.042955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.170 [2024-04-15 18:18:45.043182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.170 [2024-04-15 18:18:45.043225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.170 qpair failed and we were unable to recover it. 00:31:56.170 [2024-04-15 18:18:45.043414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.170 [2024-04-15 18:18:45.043572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.170 [2024-04-15 18:18:45.043596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.170 qpair failed and we were unable to recover it. 00:31:56.170 [2024-04-15 18:18:45.043802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.170 [2024-04-15 18:18:45.043957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.170 [2024-04-15 18:18:45.043980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.170 qpair failed and we were unable to recover it. 00:31:56.170 [2024-04-15 18:18:45.044160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.170 [2024-04-15 18:18:45.044318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.170 [2024-04-15 18:18:45.044361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.170 qpair failed and we were unable to recover it. 00:31:56.170 [2024-04-15 18:18:45.044522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.170 [2024-04-15 18:18:45.044739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.170 [2024-04-15 18:18:45.044784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.170 qpair failed and we were unable to recover it. 00:31:56.170 [2024-04-15 18:18:45.045000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.170 [2024-04-15 18:18:45.045160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.170 [2024-04-15 18:18:45.045202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.170 qpair failed and we were unable to recover it. 00:31:56.170 [2024-04-15 18:18:45.045384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.170 [2024-04-15 18:18:45.045594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.170 [2024-04-15 18:18:45.045636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.170 qpair failed and we were unable to recover it. 
00:31:56.170 [2024-04-15 18:18:45.045789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.170 [2024-04-15 18:18:45.046011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.170 [2024-04-15 18:18:45.046035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.170 qpair failed and we were unable to recover it. 00:31:56.170 [2024-04-15 18:18:45.046215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.170 [2024-04-15 18:18:45.046403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.170 [2024-04-15 18:18:45.046446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.170 qpair failed and we were unable to recover it. 00:31:56.170 [2024-04-15 18:18:45.046629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.170 [2024-04-15 18:18:45.046839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.170 [2024-04-15 18:18:45.046881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.170 qpair failed and we were unable to recover it. 00:31:56.170 [2024-04-15 18:18:45.047065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.170 [2024-04-15 18:18:45.047250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.170 [2024-04-15 18:18:45.047292] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.170 qpair failed and we were unable to recover it. 00:31:56.170 [2024-04-15 18:18:45.047513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.170 [2024-04-15 18:18:45.047714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.170 [2024-04-15 18:18:45.047755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.170 qpair failed and we were unable to recover it. 00:31:56.170 [2024-04-15 18:18:45.047943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.170 [2024-04-15 18:18:45.048121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.170 [2024-04-15 18:18:45.048161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.171 qpair failed and we were unable to recover it. 00:31:56.171 [2024-04-15 18:18:45.048315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.171 [2024-04-15 18:18:45.048502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.171 [2024-04-15 18:18:45.048548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.171 qpair failed and we were unable to recover it. 
00:31:56.171 [2024-04-15 18:18:45.048738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.171 [2024-04-15 18:18:45.048942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.171 [2024-04-15 18:18:45.048966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.171 qpair failed and we were unable to recover it. 00:31:56.171 [2024-04-15 18:18:45.049154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.171 [2024-04-15 18:18:45.049302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.171 [2024-04-15 18:18:45.049346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.171 qpair failed and we were unable to recover it. 00:31:56.171 [2024-04-15 18:18:45.049485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.171 [2024-04-15 18:18:45.049704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.171 [2024-04-15 18:18:45.049745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.171 qpair failed and we were unable to recover it. 00:31:56.171 [2024-04-15 18:18:45.049902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.171 [2024-04-15 18:18:45.050091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.171 [2024-04-15 18:18:45.050129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.171 qpair failed and we were unable to recover it. 00:31:56.171 [2024-04-15 18:18:45.050358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.171 [2024-04-15 18:18:45.050569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.171 [2024-04-15 18:18:45.050611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.171 qpair failed and we were unable to recover it. 00:31:56.171 [2024-04-15 18:18:45.050753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.171 [2024-04-15 18:18:45.050929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.171 [2024-04-15 18:18:45.050954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.171 qpair failed and we were unable to recover it. 00:31:56.171 [2024-04-15 18:18:45.051143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.171 [2024-04-15 18:18:45.051319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.171 [2024-04-15 18:18:45.051366] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.171 qpair failed and we were unable to recover it. 
00:31:56.171 [2024-04-15 18:18:45.051528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.171 [2024-04-15 18:18:45.051707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.171 [2024-04-15 18:18:45.051749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.171 qpair failed and we were unable to recover it. 00:31:56.171 [2024-04-15 18:18:45.051905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.171 [2024-04-15 18:18:45.052099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.171 [2024-04-15 18:18:45.052125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.171 qpair failed and we were unable to recover it. 00:31:56.171 [2024-04-15 18:18:45.052319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.171 [2024-04-15 18:18:45.052548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.171 [2024-04-15 18:18:45.052591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.171 qpair failed and we were unable to recover it. 00:31:56.171 [2024-04-15 18:18:45.052764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.171 [2024-04-15 18:18:45.052984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.171 [2024-04-15 18:18:45.053007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.171 qpair failed and we were unable to recover it. 00:31:56.171 [2024-04-15 18:18:45.053228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.171 [2024-04-15 18:18:45.053371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.171 [2024-04-15 18:18:45.053413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.171 qpair failed and we were unable to recover it. 00:31:56.171 [2024-04-15 18:18:45.053555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.171 [2024-04-15 18:18:45.053733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.171 [2024-04-15 18:18:45.053776] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.171 qpair failed and we were unable to recover it. 00:31:56.171 [2024-04-15 18:18:45.053929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.171 [2024-04-15 18:18:45.054099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.171 [2024-04-15 18:18:45.054124] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.171 qpair failed and we were unable to recover it. 
00:31:56.171 [2024-04-15 18:18:45.054310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.171 [2024-04-15 18:18:45.054521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.171 [2024-04-15 18:18:45.054563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.171 qpair failed and we were unable to recover it. 00:31:56.171 [2024-04-15 18:18:45.054716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.171 [2024-04-15 18:18:45.054882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.171 [2024-04-15 18:18:45.054906] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.171 qpair failed and we were unable to recover it. 00:31:56.171 [2024-04-15 18:18:45.055094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.171 [2024-04-15 18:18:45.055274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.171 [2024-04-15 18:18:45.055318] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.171 qpair failed and we were unable to recover it. 00:31:56.171 [2024-04-15 18:18:45.055481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.171 [2024-04-15 18:18:45.055656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.171 [2024-04-15 18:18:45.055698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.171 qpair failed and we were unable to recover it. 00:31:56.171 [2024-04-15 18:18:45.055900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.171 [2024-04-15 18:18:45.056074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.171 [2024-04-15 18:18:45.056120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.171 qpair failed and we were unable to recover it. 00:31:56.171 [2024-04-15 18:18:45.056259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.171 [2024-04-15 18:18:45.056461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.171 [2024-04-15 18:18:45.056502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.171 qpair failed and we were unable to recover it. 00:31:56.171 [2024-04-15 18:18:45.056658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.171 [2024-04-15 18:18:45.056843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.171 [2024-04-15 18:18:45.056881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.171 qpair failed and we were unable to recover it. 
00:31:56.171 [2024-04-15 18:18:45.057078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.171 [2024-04-15 18:18:45.057263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.171 [2024-04-15 18:18:45.057307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.171 qpair failed and we were unable to recover it. 00:31:56.171 [2024-04-15 18:18:45.057500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.171 [2024-04-15 18:18:45.057688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.171 [2024-04-15 18:18:45.057732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.171 qpair failed and we were unable to recover it. 00:31:56.171 [2024-04-15 18:18:45.057890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.171 [2024-04-15 18:18:45.058056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.171 [2024-04-15 18:18:45.058090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.171 qpair failed and we were unable to recover it. 00:31:56.171 [2024-04-15 18:18:45.058268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.171 [2024-04-15 18:18:45.058462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.171 [2024-04-15 18:18:45.058492] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.171 qpair failed and we were unable to recover it. 00:31:56.171 [2024-04-15 18:18:45.058719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.171 [2024-04-15 18:18:45.058902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.171 [2024-04-15 18:18:45.058925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.171 qpair failed and we were unable to recover it. 00:31:56.171 [2024-04-15 18:18:45.059138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.171 [2024-04-15 18:18:45.059335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.171 [2024-04-15 18:18:45.059377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.171 qpair failed and we were unable to recover it. 00:31:56.171 [2024-04-15 18:18:45.059556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.171 [2024-04-15 18:18:45.059716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.172 [2024-04-15 18:18:45.059759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.172 qpair failed and we were unable to recover it. 
00:31:56.172 [2024-04-15 18:18:45.059928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.172 [2024-04-15 18:18:45.060128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.172 [2024-04-15 18:18:45.060171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.172 qpair failed and we were unable to recover it. 00:31:56.172 [2024-04-15 18:18:45.060355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.172 [2024-04-15 18:18:45.060497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.172 [2024-04-15 18:18:45.060541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.172 qpair failed and we were unable to recover it. 00:31:56.172 [2024-04-15 18:18:45.060708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.172 [2024-04-15 18:18:45.060921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.172 [2024-04-15 18:18:45.060944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.172 qpair failed and we were unable to recover it. 00:31:56.172 [2024-04-15 18:18:45.061159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.172 [2024-04-15 18:18:45.061317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.172 [2024-04-15 18:18:45.061341] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.172 qpair failed and we were unable to recover it. 00:31:56.172 [2024-04-15 18:18:45.061532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.172 [2024-04-15 18:18:45.061701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.172 [2024-04-15 18:18:45.061743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.172 qpair failed and we were unable to recover it. 00:31:56.172 [2024-04-15 18:18:45.061937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.172 [2024-04-15 18:18:45.062108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.172 [2024-04-15 18:18:45.062138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.172 qpair failed and we were unable to recover it. 00:31:56.172 [2024-04-15 18:18:45.062316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.172 [2024-04-15 18:18:45.062487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.172 [2024-04-15 18:18:45.062516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.172 qpair failed and we were unable to recover it. 
00:31:56.172 [2024-04-15 18:18:45.062702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.172 [2024-04-15 18:18:45.062862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.172 [2024-04-15 18:18:45.062885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.172 qpair failed and we were unable to recover it. 00:31:56.172 [2024-04-15 18:18:45.063080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.172 [2024-04-15 18:18:45.063253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.172 [2024-04-15 18:18:45.063296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.172 qpair failed and we were unable to recover it. 00:31:56.172 [2024-04-15 18:18:45.063511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.172 [2024-04-15 18:18:45.063663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.172 [2024-04-15 18:18:45.063706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.172 qpair failed and we were unable to recover it. 00:31:56.172 [2024-04-15 18:18:45.063854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.172 [2024-04-15 18:18:45.064000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.172 [2024-04-15 18:18:45.064024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.172 qpair failed and we were unable to recover it. 00:31:56.172 [2024-04-15 18:18:45.064174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.172 [2024-04-15 18:18:45.064331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.172 [2024-04-15 18:18:45.064373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.172 qpair failed and we were unable to recover it. 00:31:56.172 [2024-04-15 18:18:45.064541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.172 [2024-04-15 18:18:45.064704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.172 [2024-04-15 18:18:45.064751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.172 qpair failed and we were unable to recover it. 00:31:56.172 [2024-04-15 18:18:45.064957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.172 [2024-04-15 18:18:45.065167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.172 [2024-04-15 18:18:45.065206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.172 qpair failed and we were unable to recover it. 
00:31:56.172 [2024-04-15 18:18:45.065367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.172 [2024-04-15 18:18:45.065540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.172 [2024-04-15 18:18:45.065565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.172 qpair failed and we were unable to recover it. 00:31:56.172 [2024-04-15 18:18:45.065745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.172 [2024-04-15 18:18:45.065938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.172 [2024-04-15 18:18:45.065962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.172 qpair failed and we were unable to recover it. 00:31:56.172 [2024-04-15 18:18:45.066142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.172 [2024-04-15 18:18:45.066325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.172 [2024-04-15 18:18:45.066368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.172 qpair failed and we were unable to recover it. 00:31:56.172 [2024-04-15 18:18:45.066547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.172 [2024-04-15 18:18:45.066687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.172 [2024-04-15 18:18:45.066731] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.172 qpair failed and we were unable to recover it. 00:31:56.172 [2024-04-15 18:18:45.066905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.172 [2024-04-15 18:18:45.067072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.172 [2024-04-15 18:18:45.067111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.172 qpair failed and we were unable to recover it. 00:31:56.172 [2024-04-15 18:18:45.067264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.172 [2024-04-15 18:18:45.067472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.172 [2024-04-15 18:18:45.067515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.172 qpair failed and we were unable to recover it. 00:31:56.172 [2024-04-15 18:18:45.067700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.172 [2024-04-15 18:18:45.067858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.172 [2024-04-15 18:18:45.067882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.172 qpair failed and we were unable to recover it. 
00:31:56.172 [2024-04-15 18:18:45.068050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.172 [2024-04-15 18:18:45.068212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.172 [2024-04-15 18:18:45.068254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.172 qpair failed and we were unable to recover it. 00:31:56.172 [2024-04-15 18:18:45.068408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.172 [2024-04-15 18:18:45.068581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.172 [2024-04-15 18:18:45.068612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.172 qpair failed and we were unable to recover it. 00:31:56.172 [2024-04-15 18:18:45.068758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.172 [2024-04-15 18:18:45.068905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.172 [2024-04-15 18:18:45.068933] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.172 qpair failed and we were unable to recover it. 00:31:56.172 [2024-04-15 18:18:45.069089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.172 [2024-04-15 18:18:45.069286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.172 [2024-04-15 18:18:45.069329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.172 qpair failed and we were unable to recover it. 00:31:56.172 [2024-04-15 18:18:45.069494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.172 [2024-04-15 18:18:45.069708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.173 [2024-04-15 18:18:45.069752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.173 qpair failed and we were unable to recover it. 00:31:56.173 [2024-04-15 18:18:45.069940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.173 [2024-04-15 18:18:45.070143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.173 [2024-04-15 18:18:45.070186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.173 qpair failed and we were unable to recover it. 00:31:56.173 [2024-04-15 18:18:45.070346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.173 [2024-04-15 18:18:45.070547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.173 [2024-04-15 18:18:45.070589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.173 qpair failed and we were unable to recover it. 
00:31:56.173 [2024-04-15 18:18:45.070742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.173 [2024-04-15 18:18:45.070901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.173 [2024-04-15 18:18:45.070939] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.173 qpair failed and we were unable to recover it. 00:31:56.173 [2024-04-15 18:18:45.071114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.173 [2024-04-15 18:18:45.071327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.173 [2024-04-15 18:18:45.071370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.173 qpair failed and we were unable to recover it. 00:31:56.173 [2024-04-15 18:18:45.071566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.173 [2024-04-15 18:18:45.071707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.173 [2024-04-15 18:18:45.071752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.173 qpair failed and we were unable to recover it. 00:31:56.173 [2024-04-15 18:18:45.071923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.173 [2024-04-15 18:18:45.072145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.173 [2024-04-15 18:18:45.072169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.173 qpair failed and we were unable to recover it. 00:31:56.173 [2024-04-15 18:18:45.072330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.173 [2024-04-15 18:18:45.072514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.173 [2024-04-15 18:18:45.072557] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.173 qpair failed and we were unable to recover it. 00:31:56.173 [2024-04-15 18:18:45.072754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.173 [2024-04-15 18:18:45.072907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.173 [2024-04-15 18:18:45.072937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.173 qpair failed and we were unable to recover it. 00:31:56.173 [2024-04-15 18:18:45.073113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.173 [2024-04-15 18:18:45.073290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.173 [2024-04-15 18:18:45.073314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.173 qpair failed and we were unable to recover it. 
00:31:56.173 [2024-04-15 18:18:45.073495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.173 [2024-04-15 18:18:45.073643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.173 [2024-04-15 18:18:45.073687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.173 qpair failed and we were unable to recover it. 00:31:56.173 [2024-04-15 18:18:45.073844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.173 [2024-04-15 18:18:45.074036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.173 [2024-04-15 18:18:45.074090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.173 qpair failed and we were unable to recover it. 00:31:56.173 [2024-04-15 18:18:45.074310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.173 [2024-04-15 18:18:45.074512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.173 [2024-04-15 18:18:45.074557] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.173 qpair failed and we were unable to recover it. 00:31:56.173 [2024-04-15 18:18:45.074738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.173 [2024-04-15 18:18:45.074945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.173 [2024-04-15 18:18:45.074969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.173 qpair failed and we were unable to recover it. 00:31:56.173 [2024-04-15 18:18:45.075163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.173 [2024-04-15 18:18:45.075376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.173 [2024-04-15 18:18:45.075406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.173 qpair failed and we were unable to recover it. 00:31:56.173 [2024-04-15 18:18:45.075562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.173 [2024-04-15 18:18:45.075773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.173 [2024-04-15 18:18:45.075817] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.173 qpair failed and we were unable to recover it. 00:31:56.173 [2024-04-15 18:18:45.076037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.173 [2024-04-15 18:18:45.076196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.173 [2024-04-15 18:18:45.076239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.173 qpair failed and we were unable to recover it. 
00:31:56.173 [2024-04-15 18:18:45.076422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.173 [2024-04-15 18:18:45.076635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.173 [2024-04-15 18:18:45.076683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.173 qpair failed and we were unable to recover it. 00:31:56.173 [2024-04-15 18:18:45.076874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.173 [2024-04-15 18:18:45.077023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.173 [2024-04-15 18:18:45.077074] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.173 qpair failed and we were unable to recover it. 00:31:56.173 [2024-04-15 18:18:45.077239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.173 [2024-04-15 18:18:45.077422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.173 [2024-04-15 18:18:45.077451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.173 qpair failed and we were unable to recover it. 00:31:56.173 [2024-04-15 18:18:45.077660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.173 [2024-04-15 18:18:45.077823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.173 [2024-04-15 18:18:45.077867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.173 qpair failed and we were unable to recover it. 00:31:56.173 [2024-04-15 18:18:45.078044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.173 [2024-04-15 18:18:45.078244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.173 [2024-04-15 18:18:45.078288] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.173 qpair failed and we were unable to recover it. 00:31:56.173 [2024-04-15 18:18:45.078424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.173 [2024-04-15 18:18:45.078643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.173 [2024-04-15 18:18:45.078685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.173 qpair failed and we were unable to recover it. 00:31:56.173 [2024-04-15 18:18:45.078877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.173 [2024-04-15 18:18:45.079025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.173 [2024-04-15 18:18:45.079070] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.173 qpair failed and we were unable to recover it. 
00:31:56.173 [2024-04-15 18:18:45.079242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.173 [2024-04-15 18:18:45.079426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.173 [2024-04-15 18:18:45.079476] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.173 qpair failed and we were unable to recover it. 00:31:56.173 [2024-04-15 18:18:45.079663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.173 [2024-04-15 18:18:45.079890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.173 [2024-04-15 18:18:45.079933] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.173 qpair failed and we were unable to recover it. 00:31:56.173 [2024-04-15 18:18:45.080088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.173 [2024-04-15 18:18:45.080291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.173 [2024-04-15 18:18:45.080338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.173 qpair failed and we were unable to recover it. 00:31:56.173 [2024-04-15 18:18:45.080509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.173 [2024-04-15 18:18:45.080706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.173 [2024-04-15 18:18:45.080747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.173 qpair failed and we were unable to recover it. 00:31:56.173 [2024-04-15 18:18:45.080931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.173 [2024-04-15 18:18:45.081104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.173 [2024-04-15 18:18:45.081145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.173 qpair failed and we were unable to recover it. 00:31:56.173 [2024-04-15 18:18:45.081350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.174 [2024-04-15 18:18:45.081518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.174 [2024-04-15 18:18:45.081560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.174 qpair failed and we were unable to recover it. 00:31:56.174 [2024-04-15 18:18:45.081753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.174 [2024-04-15 18:18:45.081939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.174 [2024-04-15 18:18:45.081962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.174 qpair failed and we were unable to recover it. 
00:31:56.174 [2024-04-15 18:18:45.082159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.174 [2024-04-15 18:18:45.082341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.174 [2024-04-15 18:18:45.082370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.174 qpair failed and we were unable to recover it. 00:31:56.174 [2024-04-15 18:18:45.082654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.174 [2024-04-15 18:18:45.082957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.174 [2024-04-15 18:18:45.082980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.174 qpair failed and we were unable to recover it. 00:31:56.174 [2024-04-15 18:18:45.083218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.174 [2024-04-15 18:18:45.083432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.174 [2024-04-15 18:18:45.083477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.174 qpair failed and we were unable to recover it. 00:31:56.174 [2024-04-15 18:18:45.083733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.174 [2024-04-15 18:18:45.083988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.174 [2024-04-15 18:18:45.084012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.174 qpair failed and we were unable to recover it. 00:31:56.174 [2024-04-15 18:18:45.084233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.174 [2024-04-15 18:18:45.084358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.174 [2024-04-15 18:18:45.084402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.174 qpair failed and we were unable to recover it. 00:31:56.174 [2024-04-15 18:18:45.084639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.174 [2024-04-15 18:18:45.084867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.174 [2024-04-15 18:18:45.084909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.174 qpair failed and we were unable to recover it. 00:31:56.174 [2024-04-15 18:18:45.085138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.174 [2024-04-15 18:18:45.085336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.174 [2024-04-15 18:18:45.085379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.174 qpair failed and we were unable to recover it. 
00:31:56.174 [2024-04-15 18:18:45.085671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.174 [2024-04-15 18:18:45.085926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.174 [2024-04-15 18:18:45.085968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.174 qpair failed and we were unable to recover it. 00:31:56.174 [2024-04-15 18:18:45.086243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.174 [2024-04-15 18:18:45.086466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.174 [2024-04-15 18:18:45.086509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.174 qpair failed and we were unable to recover it. 00:31:56.174 [2024-04-15 18:18:45.086653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.174 [2024-04-15 18:18:45.086839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.174 [2024-04-15 18:18:45.086868] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.174 qpair failed and we were unable to recover it. 00:31:56.174 [2024-04-15 18:18:45.087087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.174 [2024-04-15 18:18:45.087273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.174 [2024-04-15 18:18:45.087303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.174 qpair failed and we were unable to recover it. 00:31:56.174 [2024-04-15 18:18:45.087541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.174 [2024-04-15 18:18:45.087728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.174 [2024-04-15 18:18:45.087770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.174 qpair failed and we were unable to recover it. 00:31:56.174 [2024-04-15 18:18:45.087904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.174 [2024-04-15 18:18:45.088097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.174 [2024-04-15 18:18:45.088122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.174 qpair failed and we were unable to recover it. 00:31:56.174 [2024-04-15 18:18:45.088291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.174 [2024-04-15 18:18:45.088446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.174 [2024-04-15 18:18:45.088496] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.174 qpair failed and we were unable to recover it. 
00:31:56.174 [2024-04-15 18:18:45.088809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.174 [2024-04-15 18:18:45.089078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.174 [2024-04-15 18:18:45.089116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.174 qpair failed and we were unable to recover it. 00:31:56.174 [2024-04-15 18:18:45.089254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.174 [2024-04-15 18:18:45.089470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.174 [2024-04-15 18:18:45.089512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.174 qpair failed and we were unable to recover it. 00:31:56.174 [2024-04-15 18:18:45.089652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.174 [2024-04-15 18:18:45.089858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.174 [2024-04-15 18:18:45.089887] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.174 qpair failed and we were unable to recover it. 00:31:56.174 [2024-04-15 18:18:45.090082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.174 [2024-04-15 18:18:45.090256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.174 [2024-04-15 18:18:45.090295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.174 qpair failed and we were unable to recover it. 00:31:56.174 [2024-04-15 18:18:45.090557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.174 [2024-04-15 18:18:45.090723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.174 [2024-04-15 18:18:45.090774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.174 qpair failed and we were unable to recover it. 00:31:56.175 [2024-04-15 18:18:45.091071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.175 [2024-04-15 18:18:45.091297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.175 [2024-04-15 18:18:45.091331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.175 qpair failed and we were unable to recover it. 00:31:56.175 [2024-04-15 18:18:45.091532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.175 [2024-04-15 18:18:45.091702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.175 [2024-04-15 18:18:45.091744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.175 qpair failed and we were unable to recover it. 
00:31:56.446 [2024-04-15 18:18:45.091950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.446 [2024-04-15 18:18:45.092147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.446 [2024-04-15 18:18:45.092172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.446 qpair failed and we were unable to recover it. 00:31:56.446 [2024-04-15 18:18:45.092344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.446 [2024-04-15 18:18:45.092556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.446 [2024-04-15 18:18:45.092598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.446 qpair failed and we were unable to recover it. 00:31:56.446 [2024-04-15 18:18:45.092751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.446 [2024-04-15 18:18:45.092982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.446 [2024-04-15 18:18:45.093006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.446 qpair failed and we were unable to recover it. 00:31:56.446 [2024-04-15 18:18:45.093214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.446 [2024-04-15 18:18:45.093405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.446 [2024-04-15 18:18:45.093448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.446 qpair failed and we were unable to recover it. 00:31:56.446 [2024-04-15 18:18:45.093657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.446 [2024-04-15 18:18:45.093859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.446 [2024-04-15 18:18:45.093903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.446 qpair failed and we were unable to recover it. 00:31:56.446 [2024-04-15 18:18:45.094095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.446 [2024-04-15 18:18:45.094256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.446 [2024-04-15 18:18:45.094298] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.446 qpair failed and we were unable to recover it. 00:31:56.446 [2024-04-15 18:18:45.094516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.446 [2024-04-15 18:18:45.094729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.446 [2024-04-15 18:18:45.094778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.446 qpair failed and we were unable to recover it. 
00:31:56.446 [2024-04-15 18:18:45.094952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.446 [2024-04-15 18:18:45.095152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.446 [2024-04-15 18:18:45.095176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.446 qpair failed and we were unable to recover it. 00:31:56.446 [2024-04-15 18:18:45.095389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.446 [2024-04-15 18:18:45.095617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.446 [2024-04-15 18:18:45.095659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.446 qpair failed and we were unable to recover it. 00:31:56.446 [2024-04-15 18:18:45.095861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.446 [2024-04-15 18:18:45.096044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.446 [2024-04-15 18:18:45.096087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.446 qpair failed and we were unable to recover it. 00:31:56.446 [2024-04-15 18:18:45.096246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.446 [2024-04-15 18:18:45.096441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.446 [2024-04-15 18:18:45.096483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.446 qpair failed and we were unable to recover it. 00:31:56.446 [2024-04-15 18:18:45.096652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.446 [2024-04-15 18:18:45.096896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.446 [2024-04-15 18:18:45.096938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.446 qpair failed and we were unable to recover it. 00:31:56.446 [2024-04-15 18:18:45.097135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.446 [2024-04-15 18:18:45.097316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.446 [2024-04-15 18:18:45.097361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.446 qpair failed and we were unable to recover it. 00:31:56.446 [2024-04-15 18:18:45.097574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.446 [2024-04-15 18:18:45.097774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.446 [2024-04-15 18:18:45.097816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.446 qpair failed and we were unable to recover it. 
00:31:56.446 [2024-04-15 18:18:45.097951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.446 [2024-04-15 18:18:45.098154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.446 [2024-04-15 18:18:45.098197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.446 qpair failed and we were unable to recover it. 00:31:56.446 [2024-04-15 18:18:45.098376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.446 [2024-04-15 18:18:45.098609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.446 [2024-04-15 18:18:45.098651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.446 qpair failed and we were unable to recover it. 00:31:56.446 [2024-04-15 18:18:45.098852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.446 [2024-04-15 18:18:45.099108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.446 [2024-04-15 18:18:45.099133] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.446 qpair failed and we were unable to recover it. 00:31:56.446 [2024-04-15 18:18:45.099389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.446 [2024-04-15 18:18:45.099560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.446 [2024-04-15 18:18:45.099602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.446 qpair failed and we were unable to recover it. 00:31:56.446 [2024-04-15 18:18:45.099845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.446 [2024-04-15 18:18:45.100115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.446 [2024-04-15 18:18:45.100140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.446 qpair failed and we were unable to recover it. 00:31:56.446 [2024-04-15 18:18:45.100326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.446 [2024-04-15 18:18:45.100541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.446 [2024-04-15 18:18:45.100584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.446 qpair failed and we were unable to recover it. 00:31:56.446 [2024-04-15 18:18:45.100741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.446 [2024-04-15 18:18:45.100912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.446 [2024-04-15 18:18:45.100936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.446 qpair failed and we were unable to recover it. 
00:31:56.446 [2024-04-15 18:18:45.101108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.446 [2024-04-15 18:18:45.101268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.446 [2024-04-15 18:18:45.101310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.446 qpair failed and we were unable to recover it. 00:31:56.446 [2024-04-15 18:18:45.101506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.446 [2024-04-15 18:18:45.101684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.446 [2024-04-15 18:18:45.101726] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.446 qpair failed and we were unable to recover it. 00:31:56.446 [2024-04-15 18:18:45.101944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.446 [2024-04-15 18:18:45.102097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.446 [2024-04-15 18:18:45.102122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.446 qpair failed and we were unable to recover it. 00:31:56.446 [2024-04-15 18:18:45.102276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.446 [2024-04-15 18:18:45.102455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.446 [2024-04-15 18:18:45.102484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.446 qpair failed and we were unable to recover it. 00:31:56.446 [2024-04-15 18:18:45.102706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.446 [2024-04-15 18:18:45.102902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.446 [2024-04-15 18:18:45.102925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.446 qpair failed and we were unable to recover it. 00:31:56.446 [2024-04-15 18:18:45.103164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.446 [2024-04-15 18:18:45.103366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.447 [2024-04-15 18:18:45.103408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.447 qpair failed and we were unable to recover it. 00:31:56.447 [2024-04-15 18:18:45.103647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.447 [2024-04-15 18:18:45.103945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.447 [2024-04-15 18:18:45.103968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.447 qpair failed and we were unable to recover it. 
00:31:56.447 [2024-04-15 18:18:45.104258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.447 [2024-04-15 18:18:45.104529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.447 [2024-04-15 18:18:45.104572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.447 qpair failed and we were unable to recover it. 00:31:56.447 [2024-04-15 18:18:45.104816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.447 [2024-04-15 18:18:45.105081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.447 [2024-04-15 18:18:45.105121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.447 qpair failed and we were unable to recover it. 00:31:56.447 [2024-04-15 18:18:45.105287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.447 [2024-04-15 18:18:45.105515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.447 [2024-04-15 18:18:45.105557] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.447 qpair failed and we were unable to recover it. 00:31:56.447 [2024-04-15 18:18:45.105718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.447 [2024-04-15 18:18:45.105985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.447 [2024-04-15 18:18:45.106008] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.447 qpair failed and we were unable to recover it. 00:31:56.447 [2024-04-15 18:18:45.106257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.447 [2024-04-15 18:18:45.106469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.447 [2024-04-15 18:18:45.106512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.447 qpair failed and we were unable to recover it. 00:31:56.447 [2024-04-15 18:18:45.106724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.447 [2024-04-15 18:18:45.106901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.447 [2024-04-15 18:18:45.106924] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.447 qpair failed and we were unable to recover it. 00:31:56.447 [2024-04-15 18:18:45.107170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.447 [2024-04-15 18:18:45.107345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.447 [2024-04-15 18:18:45.107387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.447 qpair failed and we were unable to recover it. 
00:31:56.447 [2024-04-15 18:18:45.107587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.447 [2024-04-15 18:18:45.107849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.447 [2024-04-15 18:18:45.107891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.447 qpair failed and we were unable to recover it. 00:31:56.447 [2024-04-15 18:18:45.108094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.447 [2024-04-15 18:18:45.108296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.447 [2024-04-15 18:18:45.108337] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.447 qpair failed and we were unable to recover it. 00:31:56.447 [2024-04-15 18:18:45.108564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.447 [2024-04-15 18:18:45.108788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.447 [2024-04-15 18:18:45.108830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.447 qpair failed and we were unable to recover it. 00:31:56.447 [2024-04-15 18:18:45.108996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.447 [2024-04-15 18:18:45.109159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.447 [2024-04-15 18:18:45.109196] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.447 qpair failed and we were unable to recover it. 00:31:56.447 [2024-04-15 18:18:45.109387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.447 [2024-04-15 18:18:45.109640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.447 [2024-04-15 18:18:45.109684] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.447 qpair failed and we were unable to recover it. 00:31:56.447 [2024-04-15 18:18:45.109944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.447 [2024-04-15 18:18:45.110203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.447 [2024-04-15 18:18:45.110227] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.447 qpair failed and we were unable to recover it. 00:31:56.447 [2024-04-15 18:18:45.110459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.447 [2024-04-15 18:18:45.110772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.447 [2024-04-15 18:18:45.110825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.447 qpair failed and we were unable to recover it. 
00:31:56.447 [2024-04-15 18:18:45.111047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.447 [2024-04-15 18:18:45.111196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.447 [2024-04-15 18:18:45.111220] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.447 qpair failed and we were unable to recover it. 00:31:56.447 [2024-04-15 18:18:45.111418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.447 [2024-04-15 18:18:45.111608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.447 [2024-04-15 18:18:45.111650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.447 qpair failed and we were unable to recover it. 00:31:56.447 [2024-04-15 18:18:45.111840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.447 [2024-04-15 18:18:45.112036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.447 [2024-04-15 18:18:45.112065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.447 qpair failed and we were unable to recover it. 00:31:56.447 [2024-04-15 18:18:45.112298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.447 [2024-04-15 18:18:45.112579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.447 [2024-04-15 18:18:45.112622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.447 qpair failed and we were unable to recover it. 00:31:56.447 [2024-04-15 18:18:45.112865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.447 [2024-04-15 18:18:45.113100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.447 [2024-04-15 18:18:45.113138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.447 qpair failed and we were unable to recover it. 00:31:56.447 [2024-04-15 18:18:45.113369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.447 [2024-04-15 18:18:45.113594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.447 [2024-04-15 18:18:45.113636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.447 qpair failed and we were unable to recover it. 00:31:56.447 [2024-04-15 18:18:45.113885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.447 [2024-04-15 18:18:45.114139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.447 [2024-04-15 18:18:45.114165] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.447 qpair failed and we were unable to recover it. 
00:31:56.447 [2024-04-15 18:18:45.114422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.447 [2024-04-15 18:18:45.114672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.447 [2024-04-15 18:18:45.114714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.447 qpair failed and we were unable to recover it. 00:31:56.447 [2024-04-15 18:18:45.115016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.447 [2024-04-15 18:18:45.115287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.447 [2024-04-15 18:18:45.115311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.447 qpair failed and we were unable to recover it. 00:31:56.447 [2024-04-15 18:18:45.115525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.447 [2024-04-15 18:18:45.115770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.447 [2024-04-15 18:18:45.115812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.447 qpair failed and we were unable to recover it. 00:31:56.447 [2024-04-15 18:18:45.115995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.447 [2024-04-15 18:18:45.116178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.447 [2024-04-15 18:18:45.116202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.447 qpair failed and we were unable to recover it. 00:31:56.447 [2024-04-15 18:18:45.116423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.447 [2024-04-15 18:18:45.116566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.447 [2024-04-15 18:18:45.116607] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.447 qpair failed and we were unable to recover it. 00:31:56.447 [2024-04-15 18:18:45.116815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.447 [2024-04-15 18:18:45.117064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.448 [2024-04-15 18:18:45.117087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.448 qpair failed and we were unable to recover it. 00:31:56.448 [2024-04-15 18:18:45.117266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.448 [2024-04-15 18:18:45.117513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.448 [2024-04-15 18:18:45.117556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.448 qpair failed and we were unable to recover it. 
00:31:56.448 [2024-04-15 18:18:45.117816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.448 [2024-04-15 18:18:45.118032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.448 [2024-04-15 18:18:45.118087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.448 qpair failed and we were unable to recover it. 00:31:56.448 [2024-04-15 18:18:45.118296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.448 [2024-04-15 18:18:45.118510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.448 [2024-04-15 18:18:45.118562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.448 qpair failed and we were unable to recover it. 00:31:56.448 [2024-04-15 18:18:45.118811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.448 [2024-04-15 18:18:45.119097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.448 [2024-04-15 18:18:45.119122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.448 qpair failed and we were unable to recover it. 00:31:56.448 [2024-04-15 18:18:45.119385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.448 [2024-04-15 18:18:45.119594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.448 [2024-04-15 18:18:45.119637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.448 qpair failed and we were unable to recover it. 00:31:56.448 [2024-04-15 18:18:45.119869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.448 [2024-04-15 18:18:45.120097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.448 [2024-04-15 18:18:45.120121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.448 qpair failed and we were unable to recover it. 00:31:56.448 [2024-04-15 18:18:45.120359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.448 [2024-04-15 18:18:45.120625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.448 [2024-04-15 18:18:45.120667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.448 qpair failed and we were unable to recover it. 00:31:56.448 [2024-04-15 18:18:45.120871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.448 [2024-04-15 18:18:45.121067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.448 [2024-04-15 18:18:45.121090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.448 qpair failed and we were unable to recover it. 
00:31:56.448 [2024-04-15 18:18:45.121255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.448 [2024-04-15 18:18:45.121387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.448 [2024-04-15 18:18:45.121429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.448 qpair failed and we were unable to recover it. 00:31:56.448 [2024-04-15 18:18:45.121640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.448 [2024-04-15 18:18:45.121840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.448 [2024-04-15 18:18:45.121882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.448 qpair failed and we were unable to recover it. 00:31:56.448 [2024-04-15 18:18:45.122046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.448 [2024-04-15 18:18:45.122221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.448 [2024-04-15 18:18:45.122260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.448 qpair failed and we were unable to recover it. 00:31:56.448 [2024-04-15 18:18:45.122553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.448 [2024-04-15 18:18:45.122774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.448 [2024-04-15 18:18:45.122816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.448 qpair failed and we were unable to recover it. 00:31:56.448 [2024-04-15 18:18:45.123090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.448 [2024-04-15 18:18:45.123273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.448 [2024-04-15 18:18:45.123297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.448 qpair failed and we were unable to recover it. 00:31:56.448 [2024-04-15 18:18:45.123480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.448 [2024-04-15 18:18:45.123663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.448 [2024-04-15 18:18:45.123706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.448 qpair failed and we were unable to recover it. 00:31:56.448 [2024-04-15 18:18:45.123849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.448 [2024-04-15 18:18:45.124085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.448 [2024-04-15 18:18:45.124108] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.448 qpair failed and we were unable to recover it. 
00:31:56.448 [2024-04-15 18:18:45.124263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.448 [2024-04-15 18:18:45.124458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.448 [2024-04-15 18:18:45.124499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.448 qpair failed and we were unable to recover it. 00:31:56.448 [2024-04-15 18:18:45.124771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.448 [2024-04-15 18:18:45.124952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.448 [2024-04-15 18:18:45.124975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.448 qpair failed and we were unable to recover it. 00:31:56.448 [2024-04-15 18:18:45.125228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.448 [2024-04-15 18:18:45.125446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.448 [2024-04-15 18:18:45.125488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.448 qpair failed and we were unable to recover it. 00:31:56.448 [2024-04-15 18:18:45.125733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.448 [2024-04-15 18:18:45.125998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.448 [2024-04-15 18:18:45.126021] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.448 qpair failed and we were unable to recover it. 00:31:56.448 [2024-04-15 18:18:45.126302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.448 [2024-04-15 18:18:45.126523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.448 [2024-04-15 18:18:45.126565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.448 qpair failed and we were unable to recover it. 00:31:56.448 [2024-04-15 18:18:45.126702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.448 [2024-04-15 18:18:45.126963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.448 [2024-04-15 18:18:45.127006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.448 qpair failed and we were unable to recover it. 00:31:56.448 [2024-04-15 18:18:45.127285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.448 [2024-04-15 18:18:45.127459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.448 [2024-04-15 18:18:45.127502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.448 qpair failed and we were unable to recover it. 
00:31:56.448 [2024-04-15 18:18:45.127795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.448 [2024-04-15 18:18:45.128040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.448 [2024-04-15 18:18:45.128083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.448 qpair failed and we were unable to recover it. 00:31:56.448 [2024-04-15 18:18:45.128257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.448 [2024-04-15 18:18:45.128424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.448 [2024-04-15 18:18:45.128465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.448 qpair failed and we were unable to recover it. 00:31:56.448 [2024-04-15 18:18:45.128652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.448 [2024-04-15 18:18:45.128790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.448 [2024-04-15 18:18:45.128832] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.448 qpair failed and we were unable to recover it. 00:31:56.448 [2024-04-15 18:18:45.129004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.448 [2024-04-15 18:18:45.129233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.448 [2024-04-15 18:18:45.129258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.449 qpair failed and we were unable to recover it. 00:31:56.449 [2024-04-15 18:18:45.129422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.449 [2024-04-15 18:18:45.129608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.449 [2024-04-15 18:18:45.129651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.449 qpair failed and we were unable to recover it. 00:31:56.449 [2024-04-15 18:18:45.129813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.449 [2024-04-15 18:18:45.130039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.449 [2024-04-15 18:18:45.130089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.449 qpair failed and we were unable to recover it. 00:31:56.449 [2024-04-15 18:18:45.130306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.449 [2024-04-15 18:18:45.130523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.449 [2024-04-15 18:18:45.130565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.449 qpair failed and we were unable to recover it. 
00:31:56.449 [2024-04-15 18:18:45.130731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.449 [2024-04-15 18:18:45.130890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.449 [2024-04-15 18:18:45.130913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.449 qpair failed and we were unable to recover it. 00:31:56.449 [2024-04-15 18:18:45.131120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.449 [2024-04-15 18:18:45.131250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.449 [2024-04-15 18:18:45.131292] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.449 qpair failed and we were unable to recover it. 00:31:56.449 [2024-04-15 18:18:45.131504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.449 [2024-04-15 18:18:45.131682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.449 [2024-04-15 18:18:45.131724] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.449 qpair failed and we were unable to recover it. 00:31:56.449 [2024-04-15 18:18:45.131970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.449 [2024-04-15 18:18:45.132117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.449 [2024-04-15 18:18:45.132148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.449 qpair failed and we were unable to recover it. 00:31:56.449 [2024-04-15 18:18:45.132406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.449 [2024-04-15 18:18:45.132651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.449 [2024-04-15 18:18:45.132693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.449 qpair failed and we were unable to recover it. 00:31:56.449 [2024-04-15 18:18:45.132889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.449 [2024-04-15 18:18:45.133050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.449 [2024-04-15 18:18:45.133094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.449 qpair failed and we were unable to recover it. 00:31:56.449 [2024-04-15 18:18:45.133280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.449 [2024-04-15 18:18:45.133458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.449 [2024-04-15 18:18:45.133499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.449 qpair failed and we were unable to recover it. 
00:31:56.449 [2024-04-15 18:18:45.133708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.449 [2024-04-15 18:18:45.133982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.449 [2024-04-15 18:18:45.134025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.449 qpair failed and we were unable to recover it. 00:31:56.449 [2024-04-15 18:18:45.134344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.449 [2024-04-15 18:18:45.134628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.449 [2024-04-15 18:18:45.134670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.449 qpair failed and we were unable to recover it. 00:31:56.449 [2024-04-15 18:18:45.134850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.449 [2024-04-15 18:18:45.135111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.449 [2024-04-15 18:18:45.135136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.449 qpair failed and we were unable to recover it. 00:31:56.449 [2024-04-15 18:18:45.135288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.449 [2024-04-15 18:18:45.135481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.449 [2024-04-15 18:18:45.135522] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.449 qpair failed and we were unable to recover it. 00:31:56.449 [2024-04-15 18:18:45.135780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.449 [2024-04-15 18:18:45.136013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.449 [2024-04-15 18:18:45.136036] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.449 qpair failed and we were unable to recover it. 00:31:56.449 [2024-04-15 18:18:45.136275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.449 [2024-04-15 18:18:45.136460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.449 [2024-04-15 18:18:45.136502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.449 qpair failed and we were unable to recover it. 00:31:56.449 [2024-04-15 18:18:45.136657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.449 [2024-04-15 18:18:45.136833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.449 [2024-04-15 18:18:45.136881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.449 qpair failed and we were unable to recover it. 
00:31:56.449 [2024-04-15 18:18:45.137099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.449 [2024-04-15 18:18:45.137288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.449 [2024-04-15 18:18:45.137343] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.449 qpair failed and we were unable to recover it. 00:31:56.449 [2024-04-15 18:18:45.137560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.449 [2024-04-15 18:18:45.137841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.449 [2024-04-15 18:18:45.137885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.449 qpair failed and we were unable to recover it. 00:31:56.449 [2024-04-15 18:18:45.138092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.449 [2024-04-15 18:18:45.138348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.449 [2024-04-15 18:18:45.138386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.449 qpair failed and we were unable to recover it. 00:31:56.449 [2024-04-15 18:18:45.138608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.449 [2024-04-15 18:18:45.138840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.449 [2024-04-15 18:18:45.138882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.449 qpair failed and we were unable to recover it. 00:31:56.449 [2024-04-15 18:18:45.139056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.449 [2024-04-15 18:18:45.139259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.449 [2024-04-15 18:18:45.139283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.449 qpair failed and we were unable to recover it. 00:31:56.449 [2024-04-15 18:18:45.139511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.449 [2024-04-15 18:18:45.139699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.449 [2024-04-15 18:18:45.139740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.449 qpair failed and we were unable to recover it. 00:31:56.449 [2024-04-15 18:18:45.139962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.449 [2024-04-15 18:18:45.140232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.449 [2024-04-15 18:18:45.140258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.449 qpair failed and we were unable to recover it. 
00:31:56.449 [2024-04-15 18:18:45.140541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:56.449 [2024-04-15 18:18:45.140706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:56.449 [2024-04-15 18:18:45.140735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:56.449 qpair failed and we were unable to recover it.
[... the same four-line failure pattern repeats for every reconnect attempt from 2024-04-15 18:18:45.140 through 18:18:45.212: two posix_sock_create connect() errors with errno = 111, one nvme_tcp_qpair_connect_sock sock connection error for tqpair=0x7f72f0000b90 (addr=10.0.0.2, port=4420), then "qpair failed and we were unable to recover it." ...]
00:31:56.455 [2024-04-15 18:18:45.212577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:56.455 [2024-04-15 18:18:45.212783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:56.455 [2024-04-15 18:18:45.212839] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:56.455 qpair failed and we were unable to recover it.
00:31:56.455 [2024-04-15 18:18:45.213117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.455 [2024-04-15 18:18:45.213266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.455 [2024-04-15 18:18:45.213322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.455 qpair failed and we were unable to recover it. 00:31:56.455 [2024-04-15 18:18:45.213512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.455 [2024-04-15 18:18:45.213735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.455 [2024-04-15 18:18:45.213778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.455 qpair failed and we were unable to recover it. 00:31:56.456 [2024-04-15 18:18:45.213947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.456 [2024-04-15 18:18:45.214170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.456 [2024-04-15 18:18:45.214215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.456 qpair failed and we were unable to recover it. 00:31:56.456 [2024-04-15 18:18:45.214392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.456 [2024-04-15 18:18:45.214657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.456 [2024-04-15 18:18:45.214701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.456 qpair failed and we were unable to recover it. 00:31:56.456 [2024-04-15 18:18:45.214944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.456 [2024-04-15 18:18:45.215192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.456 [2024-04-15 18:18:45.215238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.456 qpair failed and we were unable to recover it. 00:31:56.456 [2024-04-15 18:18:45.215467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.456 [2024-04-15 18:18:45.215657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.456 [2024-04-15 18:18:45.215700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.456 qpair failed and we were unable to recover it. 00:31:56.456 [2024-04-15 18:18:45.215947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.456 [2024-04-15 18:18:45.216137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.456 [2024-04-15 18:18:45.216167] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.456 qpair failed and we were unable to recover it. 
00:31:56.456 [2024-04-15 18:18:45.216338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.456 [2024-04-15 18:18:45.216572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.456 [2024-04-15 18:18:45.216612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.456 qpair failed and we were unable to recover it. 00:31:56.456 [2024-04-15 18:18:45.216873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.456 [2024-04-15 18:18:45.217108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.456 [2024-04-15 18:18:45.217136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.456 qpair failed and we were unable to recover it. 00:31:56.456 [2024-04-15 18:18:45.217282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.456 [2024-04-15 18:18:45.217478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.456 [2024-04-15 18:18:45.217524] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.456 qpair failed and we were unable to recover it. 00:31:56.456 [2024-04-15 18:18:45.217737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.456 [2024-04-15 18:18:45.218007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.456 [2024-04-15 18:18:45.218032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.456 qpair failed and we were unable to recover it. 00:31:56.456 [2024-04-15 18:18:45.218216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.456 [2024-04-15 18:18:45.218413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.456 [2024-04-15 18:18:45.218465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.456 qpair failed and we were unable to recover it. 00:31:56.456 [2024-04-15 18:18:45.218756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.456 [2024-04-15 18:18:45.219007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.456 [2024-04-15 18:18:45.219032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.456 qpair failed and we were unable to recover it. 00:31:56.456 [2024-04-15 18:18:45.219216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.456 [2024-04-15 18:18:45.219434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.456 [2024-04-15 18:18:45.219477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.456 qpair failed and we were unable to recover it. 
00:31:56.456 [2024-04-15 18:18:45.219671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.456 [2024-04-15 18:18:45.219847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.456 [2024-04-15 18:18:45.219891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.456 qpair failed and we were unable to recover it. 00:31:56.456 [2024-04-15 18:18:45.220171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.456 [2024-04-15 18:18:45.220389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.456 [2024-04-15 18:18:45.220430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.456 qpair failed and we were unable to recover it. 00:31:56.456 [2024-04-15 18:18:45.220707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.456 [2024-04-15 18:18:45.220928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.456 [2024-04-15 18:18:45.220952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.456 qpair failed and we were unable to recover it. 00:31:56.456 [2024-04-15 18:18:45.221146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.456 [2024-04-15 18:18:45.221293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.456 [2024-04-15 18:18:45.221340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.456 qpair failed and we were unable to recover it. 00:31:56.456 [2024-04-15 18:18:45.221592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.456 [2024-04-15 18:18:45.221859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.456 [2024-04-15 18:18:45.221902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.456 qpair failed and we were unable to recover it. 00:31:56.456 [2024-04-15 18:18:45.222080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.456 [2024-04-15 18:18:45.222255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.456 [2024-04-15 18:18:45.222303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.456 qpair failed and we were unable to recover it. 00:31:56.456 [2024-04-15 18:18:45.222609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.456 [2024-04-15 18:18:45.222873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.456 [2024-04-15 18:18:45.222924] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.456 qpair failed and we were unable to recover it. 
00:31:56.456 [2024-04-15 18:18:45.223153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.456 [2024-04-15 18:18:45.223327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.456 [2024-04-15 18:18:45.223356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.456 qpair failed and we were unable to recover it. 00:31:56.456 [2024-04-15 18:18:45.223583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.456 [2024-04-15 18:18:45.223750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.456 [2024-04-15 18:18:45.223792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.456 qpair failed and we were unable to recover it. 00:31:56.456 [2024-04-15 18:18:45.224069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.456 [2024-04-15 18:18:45.224242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.456 [2024-04-15 18:18:45.224286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.456 qpair failed and we were unable to recover it. 00:31:56.456 [2024-04-15 18:18:45.224546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.456 [2024-04-15 18:18:45.224874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.456 [2024-04-15 18:18:45.224922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.456 qpair failed and we were unable to recover it. 00:31:56.456 [2024-04-15 18:18:45.225207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.456 [2024-04-15 18:18:45.225361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.456 [2024-04-15 18:18:45.225403] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.456 qpair failed and we were unable to recover it. 00:31:56.456 [2024-04-15 18:18:45.225628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.456 [2024-04-15 18:18:45.225861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.456 [2024-04-15 18:18:45.225904] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.456 qpair failed and we were unable to recover it. 00:31:56.456 [2024-04-15 18:18:45.226101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.456 [2024-04-15 18:18:45.226247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.456 [2024-04-15 18:18:45.226291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.456 qpair failed and we were unable to recover it. 
00:31:56.456 [2024-04-15 18:18:45.226476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.456 [2024-04-15 18:18:45.226734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.456 [2024-04-15 18:18:45.226776] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.456 qpair failed and we were unable to recover it. 00:31:56.456 [2024-04-15 18:18:45.226934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.456 [2024-04-15 18:18:45.227166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.456 [2024-04-15 18:18:45.227215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.456 qpair failed and we were unable to recover it. 00:31:56.456 [2024-04-15 18:18:45.227426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.456 [2024-04-15 18:18:45.227666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.457 [2024-04-15 18:18:45.227708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.457 qpair failed and we were unable to recover it. 00:31:56.457 [2024-04-15 18:18:45.227876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.457 [2024-04-15 18:18:45.228022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.457 [2024-04-15 18:18:45.228067] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.457 qpair failed and we were unable to recover it. 00:31:56.457 [2024-04-15 18:18:45.228258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.457 [2024-04-15 18:18:45.228443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.457 [2024-04-15 18:18:45.228485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.457 qpair failed and we were unable to recover it. 00:31:56.457 [2024-04-15 18:18:45.228694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.457 [2024-04-15 18:18:45.228866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.457 [2024-04-15 18:18:45.228890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.457 qpair failed and we were unable to recover it. 00:31:56.457 [2024-04-15 18:18:45.229091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.457 [2024-04-15 18:18:45.229303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.457 [2024-04-15 18:18:45.229332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.457 qpair failed and we were unable to recover it. 
00:31:56.457 [2024-04-15 18:18:45.229604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.457 [2024-04-15 18:18:45.229840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.457 [2024-04-15 18:18:45.229882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.457 qpair failed and we were unable to recover it. 00:31:56.457 [2024-04-15 18:18:45.230103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.457 [2024-04-15 18:18:45.230264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.457 [2024-04-15 18:18:45.230291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.457 qpair failed and we were unable to recover it. 00:31:56.457 [2024-04-15 18:18:45.230478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.457 [2024-04-15 18:18:45.230679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.457 [2024-04-15 18:18:45.230723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.457 qpair failed and we were unable to recover it. 00:31:56.457 [2024-04-15 18:18:45.230930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.457 [2024-04-15 18:18:45.231155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.457 [2024-04-15 18:18:45.231182] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.457 qpair failed and we were unable to recover it. 00:31:56.457 [2024-04-15 18:18:45.231347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.457 [2024-04-15 18:18:45.231616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.457 [2024-04-15 18:18:45.231660] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.457 qpair failed and we were unable to recover it. 00:31:56.457 [2024-04-15 18:18:45.231875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.457 [2024-04-15 18:18:45.232124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.457 [2024-04-15 18:18:45.232153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.457 qpair failed and we were unable to recover it. 00:31:56.457 [2024-04-15 18:18:45.232397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.457 [2024-04-15 18:18:45.232628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.457 [2024-04-15 18:18:45.232672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.457 qpair failed and we were unable to recover it. 
00:31:56.457 [2024-04-15 18:18:45.232909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.457 [2024-04-15 18:18:45.233123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.457 [2024-04-15 18:18:45.233161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.457 qpair failed and we were unable to recover it. 00:31:56.457 [2024-04-15 18:18:45.233346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.457 [2024-04-15 18:18:45.233548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.457 [2024-04-15 18:18:45.233589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.457 qpair failed and we were unable to recover it. 00:31:56.457 [2024-04-15 18:18:45.233785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.457 [2024-04-15 18:18:45.234005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.457 [2024-04-15 18:18:45.234029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.457 qpair failed and we were unable to recover it. 00:31:56.457 [2024-04-15 18:18:45.234233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.457 [2024-04-15 18:18:45.234408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.457 [2024-04-15 18:18:45.234450] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.457 qpair failed and we were unable to recover it. 00:31:56.457 [2024-04-15 18:18:45.234646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.457 [2024-04-15 18:18:45.234871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.457 [2024-04-15 18:18:45.234913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.457 qpair failed and we were unable to recover it. 00:31:56.457 [2024-04-15 18:18:45.235149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.457 [2024-04-15 18:18:45.235369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.457 [2024-04-15 18:18:45.235425] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.457 qpair failed and we were unable to recover it. 00:31:56.457 [2024-04-15 18:18:45.235614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.457 [2024-04-15 18:18:45.235823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.457 [2024-04-15 18:18:45.235870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.457 qpair failed and we were unable to recover it. 
00:31:56.457 [2024-04-15 18:18:45.236119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.457 [2024-04-15 18:18:45.236426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.457 [2024-04-15 18:18:45.236470] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.457 qpair failed and we were unable to recover it. 00:31:56.457 [2024-04-15 18:18:45.236743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.457 [2024-04-15 18:18:45.236945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.457 [2024-04-15 18:18:45.236969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.457 qpair failed and we were unable to recover it. 00:31:56.457 [2024-04-15 18:18:45.237147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.457 [2024-04-15 18:18:45.237330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.457 [2024-04-15 18:18:45.237374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.457 qpair failed and we were unable to recover it. 00:31:56.457 [2024-04-15 18:18:45.237575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.457 [2024-04-15 18:18:45.237785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.457 [2024-04-15 18:18:45.237837] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.457 qpair failed and we were unable to recover it. 00:31:56.457 [2024-04-15 18:18:45.238033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.457 [2024-04-15 18:18:45.238218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.457 [2024-04-15 18:18:45.238244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.457 qpair failed and we were unable to recover it. 00:31:56.457 [2024-04-15 18:18:45.238474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.457 [2024-04-15 18:18:45.238759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.457 [2024-04-15 18:18:45.238802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.457 qpair failed and we were unable to recover it. 00:31:56.457 [2024-04-15 18:18:45.239069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.457 [2024-04-15 18:18:45.239300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.457 [2024-04-15 18:18:45.239327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.457 qpair failed and we were unable to recover it. 
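On Linux, errno = 111 is ECONNREFUSED: the initiator's connect() to 10.0.0.2:4420 (the conventional NVMe/TCP port) reaches the host, but nothing is listening on that port, so every dial attempt is refused immediately and the qpair can never be established. The same errno can be reproduced outside SPDK with a plain BSD socket; the sketch below is illustrative only, with the address and port taken from the log and everything else assumed.

    /* Minimal sketch (not SPDK code): a blocking connect() to an address
     * with no listener fails with errno == ECONNREFUSED (111 on Linux). */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr = { 0 };

        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);                  /* NVMe/TCP port from the log */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        close(fd);
        return 0;
    }

Run against a port with no listener, this prints "connect() failed, errno = 111 (Connection refused)", matching the posix_sock_create errors above.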
00:31:56.457 [2024-04-15 18:18:45.239559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:56.457 [2024-04-15 18:18:45.239733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:56.457 [2024-04-15 18:18:45.239776] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:56.457 qpair failed and we were unable to recover it.
00:31:56.457 [2024-04-15 18:18:45.240005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:56.457 [2024-04-15 18:18:45.240188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:56.457 [2024-04-15 18:18:45.240216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:56.457 qpair failed and we were unable to recover it.
00:31:56.457 Read completed with error (sct=0, sc=8)
00:31:56.458 starting I/O failed
00:31:56.458 Write completed with error (sct=0, sc=8)
00:31:56.458 starting I/O failed
[... all 32 outstanding I/Os (a mix of Reads and Writes) complete with error (sct=0, sc=8), each followed by "starting I/O failed" ...]
00:31:56.458 [2024-04-15 18:18:45.240617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:56.458 [2024-04-15 18:18:45.240756] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19069d0 is same with the state(5) to be set
00:31:56.458 [2024-04-15 18:18:45.241001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:56.458 [2024-04-15 18:18:45.241268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:56.458 [2024-04-15 18:18:45.241300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:56.458 qpair failed and we were unable to recover it.
[... the connect()/qpair-failure pattern resumes against the new tqpair=0x7f72e4000b90 from 18:18:45.241556 through 18:18:45.243489 ...]
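Once the qpair's TCP socket is gone, spdk_nvme_qpair_process_completions reports the transport error as -6 (ENXIO, "No such device or address") and the outstanding requests are completed with an error status. Here sct and sc are the NVMe Status Code Type and Status Code: sct=0 is the Generic Command Status type, and within it sc=0x8 is "Command Aborted due to SQ Deletion" per the NVMe base specification, which is consistent with all 32 in-flight Reads and Writes being failed at once. The sketch below decodes the 16-bit completion status field; the bit layout follows the spec, but the code itself is illustrative and not SPDK's.

    /* Illustrative decode of the NVMe completion status word
     * (bit 0 = phase tag, bits 8:1 = SC, bits 11:9 = SCT). */
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint16_t status = (0u << 9) | (0x8u << 1);  /* sct=0, sc=8, phase bit 0 */
        unsigned sc  = (status >> 1) & 0xff;        /* status code */
        unsigned sct = (status >> 9) & 0x7;         /* status code type */

        printf("sct=%u, sc=%u%s\n", sct, sc,
               (sct == 0 && sc == 0x8) ? " (generic: aborted, SQ deleted)" : "");
        return 0;
    }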
00:31:56.458 [2024-04-15 18:18:45.243745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:56.458 [2024-04-15 18:18:45.243991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:56.458 [2024-04-15 18:18:45.244021] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:56.458 qpair failed and we were unable to recover it.
[... the same four-line failure pattern repeats against tqpair=0x7f72e4000b90 from 18:18:45.244303 through 18:18:45.269589; every connect() attempt to 10.0.0.2:4420 is refused with errno = 111, and each qpair is reported as failed and unrecoverable ...]
00:31:56.460 [2024-04-15 18:18:45.269862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.460 [2024-04-15 18:18:45.270048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.460 [2024-04-15 18:18:45.270083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:56.460 qpair failed and we were unable to recover it. 00:31:56.460 [2024-04-15 18:18:45.270265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.460 [2024-04-15 18:18:45.270422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.460 [2024-04-15 18:18:45.270455] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:56.460 qpair failed and we were unable to recover it. 00:31:56.460 [2024-04-15 18:18:45.270667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.460 [2024-04-15 18:18:45.270844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.460 [2024-04-15 18:18:45.270873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:56.460 qpair failed and we were unable to recover it. 00:31:56.460 [2024-04-15 18:18:45.271049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.460 [2024-04-15 18:18:45.271220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.460 [2024-04-15 18:18:45.271247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:56.460 qpair failed and we were unable to recover it. 00:31:56.460 [2024-04-15 18:18:45.271414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.460 [2024-04-15 18:18:45.271618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.460 [2024-04-15 18:18:45.271647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:56.460 qpair failed and we were unable to recover it. 00:31:56.460 [2024-04-15 18:18:45.271852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.460 [2024-04-15 18:18:45.272066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.460 [2024-04-15 18:18:45.272107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:56.460 qpair failed and we were unable to recover it. 00:31:56.460 [2024-04-15 18:18:45.272299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.460 [2024-04-15 18:18:45.272480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.460 [2024-04-15 18:18:45.272522] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:56.460 qpair failed and we were unable to recover it. 
00:31:56.461 [2024-04-15 18:18:45.272724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.461 [2024-04-15 18:18:45.272928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.461 [2024-04-15 18:18:45.272957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:56.461 qpair failed and we were unable to recover it. 00:31:56.461 [2024-04-15 18:18:45.273127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.461 [2024-04-15 18:18:45.273268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.461 [2024-04-15 18:18:45.273297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:56.461 qpair failed and we were unable to recover it. 00:31:56.461 [2024-04-15 18:18:45.273466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.461 [2024-04-15 18:18:45.273594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.461 [2024-04-15 18:18:45.273619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:56.461 qpair failed and we were unable to recover it. 00:31:56.461 [2024-04-15 18:18:45.273802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.461 [2024-04-15 18:18:45.273963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.461 [2024-04-15 18:18:45.273992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:56.461 qpair failed and we were unable to recover it. 00:31:56.461 [2024-04-15 18:18:45.274799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.461 [2024-04-15 18:18:45.274977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.461 [2024-04-15 18:18:45.275008] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:56.461 qpair failed and we were unable to recover it. 00:31:56.461 [2024-04-15 18:18:45.275165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.461 [2024-04-15 18:18:45.276009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.461 [2024-04-15 18:18:45.276044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:56.461 qpair failed and we were unable to recover it. 00:31:56.461 [2024-04-15 18:18:45.276236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.461 [2024-04-15 18:18:45.276386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.461 [2024-04-15 18:18:45.276416] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:56.461 qpair failed and we were unable to recover it. 
00:31:56.461 [2024-04-15 18:18:45.276563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.461 [2024-04-15 18:18:45.276706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.461 [2024-04-15 18:18:45.276735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:56.461 qpair failed and we were unable to recover it. 00:31:56.461 [2024-04-15 18:18:45.276920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.461 [2024-04-15 18:18:45.277110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.461 [2024-04-15 18:18:45.277137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:56.461 qpair failed and we were unable to recover it. 00:31:56.461 [2024-04-15 18:18:45.277268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.461 [2024-04-15 18:18:45.277468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.461 [2024-04-15 18:18:45.277497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:56.461 qpair failed and we were unable to recover it. 00:31:56.461 [2024-04-15 18:18:45.277655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.461 [2024-04-15 18:18:45.277820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.461 [2024-04-15 18:18:45.277850] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:56.461 qpair failed and we were unable to recover it. 00:31:56.461 [2024-04-15 18:18:45.278011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.461 [2024-04-15 18:18:45.278190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.461 [2024-04-15 18:18:45.278220] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:56.461 qpair failed and we were unable to recover it. 00:31:56.461 [2024-04-15 18:18:45.278388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.461 [2024-04-15 18:18:45.278554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.461 [2024-04-15 18:18:45.278584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:56.461 qpair failed and we were unable to recover it. 00:31:56.461 [2024-04-15 18:18:45.278757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.461 [2024-04-15 18:18:45.278962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.461 [2024-04-15 18:18:45.278991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:56.461 qpair failed and we were unable to recover it. 
00:31:56.461 [2024-04-15 18:18:45.279161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.461 [2024-04-15 18:18:45.279331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.461 [2024-04-15 18:18:45.279372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:56.461 qpair failed and we were unable to recover it. 00:31:56.461 [2024-04-15 18:18:45.279549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.461 [2024-04-15 18:18:45.279729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.461 [2024-04-15 18:18:45.279755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:56.461 qpair failed and we were unable to recover it. 00:31:56.461 [2024-04-15 18:18:45.279931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.461 [2024-04-15 18:18:45.280140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.461 [2024-04-15 18:18:45.280170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:56.461 qpair failed and we were unable to recover it. 00:31:56.461 [2024-04-15 18:18:45.280322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.461 [2024-04-15 18:18:45.280501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.461 [2024-04-15 18:18:45.280526] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:56.461 qpair failed and we were unable to recover it. 00:31:56.461 [2024-04-15 18:18:45.280671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.461 [2024-04-15 18:18:45.280837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.461 [2024-04-15 18:18:45.280862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:56.461 qpair failed and we were unable to recover it. 00:31:56.461 [2024-04-15 18:18:45.281049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.461 [2024-04-15 18:18:45.281227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.461 [2024-04-15 18:18:45.281253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:56.461 qpair failed and we were unable to recover it. 00:31:56.461 [2024-04-15 18:18:45.281443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.461 [2024-04-15 18:18:45.281635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.461 [2024-04-15 18:18:45.281660] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:56.461 qpair failed and we were unable to recover it. 
00:31:56.461 [2024-04-15 18:18:45.281847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.461 [2024-04-15 18:18:45.282004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.461 [2024-04-15 18:18:45.282033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:56.461 qpair failed and we were unable to recover it. 00:31:56.461 [2024-04-15 18:18:45.282192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.461 [2024-04-15 18:18:45.282343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.461 [2024-04-15 18:18:45.282384] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:56.461 qpair failed and we were unable to recover it. 00:31:56.461 [2024-04-15 18:18:45.282538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.461 [2024-04-15 18:18:45.282724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.461 [2024-04-15 18:18:45.282749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:56.461 qpair failed and we were unable to recover it. 00:31:56.461 [2024-04-15 18:18:45.282941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.461 [2024-04-15 18:18:45.283146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.461 [2024-04-15 18:18:45.283175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:56.461 qpair failed and we were unable to recover it. 00:31:56.461 [2024-04-15 18:18:45.283313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.461 [2024-04-15 18:18:45.283478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.461 [2024-04-15 18:18:45.283507] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:56.461 qpair failed and we were unable to recover it. 00:31:56.461 [2024-04-15 18:18:45.283712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.461 [2024-04-15 18:18:45.283848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.461 [2024-04-15 18:18:45.283872] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:56.461 qpair failed and we were unable to recover it. 00:31:56.461 [2024-04-15 18:18:45.284084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.461 [2024-04-15 18:18:45.284360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.461 [2024-04-15 18:18:45.284403] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:56.461 qpair failed and we were unable to recover it. 
00:31:56.462 [2024-04-15 18:18:45.284596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.462 [2024-04-15 18:18:45.284765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.462 [2024-04-15 18:18:45.284795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:56.462 qpair failed and we were unable to recover it. 00:31:56.462 [2024-04-15 18:18:45.284965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.462 [2024-04-15 18:18:45.285137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.462 [2024-04-15 18:18:45.285164] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:56.462 qpair failed and we were unable to recover it. 00:31:56.462 [2024-04-15 18:18:45.285312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.462 [2024-04-15 18:18:45.285479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.462 [2024-04-15 18:18:45.285509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:56.462 qpair failed and we were unable to recover it. 00:31:56.462 [2024-04-15 18:18:45.285676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.462 [2024-04-15 18:18:45.285849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.462 [2024-04-15 18:18:45.285880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:56.462 qpair failed and we were unable to recover it. 00:31:56.462 [2024-04-15 18:18:45.286053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.462 [2024-04-15 18:18:45.286215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.462 [2024-04-15 18:18:45.286242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:56.462 qpair failed and we were unable to recover it. 00:31:56.462 [2024-04-15 18:18:45.286376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.462 [2024-04-15 18:18:45.286555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.462 [2024-04-15 18:18:45.286586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:56.462 qpair failed and we were unable to recover it. 00:31:56.462 [2024-04-15 18:18:45.286730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.462 [2024-04-15 18:18:45.286873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.462 [2024-04-15 18:18:45.286903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:56.462 qpair failed and we were unable to recover it. 
00:31:56.462 [2024-04-15 18:18:45.287079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.462 [2024-04-15 18:18:45.287232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.462 [2024-04-15 18:18:45.287259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:56.462 qpair failed and we were unable to recover it. 00:31:56.462 [2024-04-15 18:18:45.287449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.462 [2024-04-15 18:18:45.287611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.462 [2024-04-15 18:18:45.287659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:56.462 qpair failed and we were unable to recover it. 00:31:56.462 [2024-04-15 18:18:45.287812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.462 [2024-04-15 18:18:45.287948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.462 [2024-04-15 18:18:45.287977] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:56.462 qpair failed and we were unable to recover it. 00:31:56.462 [2024-04-15 18:18:45.288136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.462 [2024-04-15 18:18:45.288266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.462 [2024-04-15 18:18:45.288294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:56.462 qpair failed and we were unable to recover it. 00:31:56.462 [2024-04-15 18:18:45.288442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.462 [2024-04-15 18:18:45.288603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.462 [2024-04-15 18:18:45.288633] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:56.462 qpair failed and we were unable to recover it. 00:31:56.462 [2024-04-15 18:18:45.288772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.462 [2024-04-15 18:18:45.288939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.462 [2024-04-15 18:18:45.288969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:56.462 qpair failed and we were unable to recover it. 00:31:56.462 [2024-04-15 18:18:45.289141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.462 [2024-04-15 18:18:45.289273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.462 [2024-04-15 18:18:45.289299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:56.462 qpair failed and we were unable to recover it. 
00:31:56.462 [2024-04-15 18:18:45.289519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.462 [2024-04-15 18:18:45.289720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.462 [2024-04-15 18:18:45.289750] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:56.462 qpair failed and we were unable to recover it. 00:31:56.462 [2024-04-15 18:18:45.289919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.462 [2024-04-15 18:18:45.290094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.462 [2024-04-15 18:18:45.290139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:56.462 qpair failed and we were unable to recover it. 00:31:56.462 [2024-04-15 18:18:45.290317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.462 [2024-04-15 18:18:45.290494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.462 [2024-04-15 18:18:45.290520] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:56.462 qpair failed and we were unable to recover it. 00:31:56.462 [2024-04-15 18:18:45.290680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.462 [2024-04-15 18:18:45.290836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.462 [2024-04-15 18:18:45.290861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:56.462 qpair failed and we were unable to recover it. 00:31:56.462 [2024-04-15 18:18:45.291014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.462 [2024-04-15 18:18:45.291180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.462 [2024-04-15 18:18:45.291207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:56.462 qpair failed and we were unable to recover it. 00:31:56.462 [2024-04-15 18:18:45.291354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.462 [2024-04-15 18:18:45.291511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.462 [2024-04-15 18:18:45.291536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:56.462 qpair failed and we were unable to recover it. 00:31:56.462 [2024-04-15 18:18:45.291739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.462 [2024-04-15 18:18:45.291901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.462 [2024-04-15 18:18:45.291944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:56.462 qpair failed and we were unable to recover it. 
00:31:56.462 [2024-04-15 18:18:45.292109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.462 [2024-04-15 18:18:45.292264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.462 [2024-04-15 18:18:45.292291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:56.462 qpair failed and we were unable to recover it. 00:31:56.462 [2024-04-15 18:18:45.292468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.462 [2024-04-15 18:18:45.292631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.462 [2024-04-15 18:18:45.292664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:56.462 qpair failed and we were unable to recover it. 00:31:56.462 [2024-04-15 18:18:45.292880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.462 [2024-04-15 18:18:45.293106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.462 [2024-04-15 18:18:45.293134] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:56.462 qpair failed and we were unable to recover it. 00:31:56.462 [2024-04-15 18:18:45.293271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.462 [2024-04-15 18:18:45.293404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.462 [2024-04-15 18:18:45.293445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:56.462 qpair failed and we were unable to recover it. 00:31:56.462 [2024-04-15 18:18:45.293587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.462 [2024-04-15 18:18:45.293804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.462 [2024-04-15 18:18:45.293867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:56.462 qpair failed and we were unable to recover it. 00:31:56.462 [2024-04-15 18:18:45.294055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.462 [2024-04-15 18:18:45.294209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.462 [2024-04-15 18:18:45.294235] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:56.462 qpair failed and we were unable to recover it. 00:31:56.462 [2024-04-15 18:18:45.294441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.462 [2024-04-15 18:18:45.294575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.463 [2024-04-15 18:18:45.294623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:56.463 qpair failed and we were unable to recover it. 
00:31:56.463 [2024-04-15 18:18:45.294811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.463 [2024-04-15 18:18:45.294970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.463 [2024-04-15 18:18:45.294996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:56.463 qpair failed and we were unable to recover it. 00:31:56.463 [2024-04-15 18:18:45.295150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.463 [2024-04-15 18:18:45.295314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.463 [2024-04-15 18:18:45.295341] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:56.463 qpair failed and we were unable to recover it. 00:31:56.463 [2024-04-15 18:18:45.295531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.463 [2024-04-15 18:18:45.295727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.463 [2024-04-15 18:18:45.295757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:56.463 qpair failed and we were unable to recover it. 00:31:56.463 [2024-04-15 18:18:45.295949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.463 [2024-04-15 18:18:45.296094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.463 [2024-04-15 18:18:45.296122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:56.463 qpair failed and we were unable to recover it. 00:31:56.463 [2024-04-15 18:18:45.296254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.463 [2024-04-15 18:18:45.296420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.463 [2024-04-15 18:18:45.296450] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:56.463 qpair failed and we were unable to recover it. 00:31:56.463 [2024-04-15 18:18:45.296644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.463 [2024-04-15 18:18:45.296824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.463 [2024-04-15 18:18:45.296853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:56.463 qpair failed and we were unable to recover it. 00:31:56.463 [2024-04-15 18:18:45.297074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.463 [2024-04-15 18:18:45.297243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.463 [2024-04-15 18:18:45.297270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:56.463 qpair failed and we were unable to recover it. 
00:31:56.463 [2024-04-15 18:18:45.297447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.463 [2024-04-15 18:18:45.297624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.463 [2024-04-15 18:18:45.297650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:56.463 qpair failed and we were unable to recover it. 00:31:56.463 [2024-04-15 18:18:45.297852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.463 [2024-04-15 18:18:45.298010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.463 [2024-04-15 18:18:45.298037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:56.463 qpair failed and we were unable to recover it. 00:31:56.463 [2024-04-15 18:18:45.298229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.463 [2024-04-15 18:18:45.298405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.463 [2024-04-15 18:18:45.298433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.463 qpair failed and we were unable to recover it. 00:31:56.463 [2024-04-15 18:18:45.298582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.463 [2024-04-15 18:18:45.298746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.463 [2024-04-15 18:18:45.298792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.463 qpair failed and we were unable to recover it. 00:31:56.463 [2024-04-15 18:18:45.298974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.463 [2024-04-15 18:18:45.299195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.463 [2024-04-15 18:18:45.299224] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.463 qpair failed and we were unable to recover it. 00:31:56.463 [2024-04-15 18:18:45.299392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.463 [2024-04-15 18:18:45.299594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.463 [2024-04-15 18:18:45.299622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.463 qpair failed and we were unable to recover it. 00:31:56.463 [2024-04-15 18:18:45.299778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.463 [2024-04-15 18:18:45.299965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.463 [2024-04-15 18:18:45.299992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.463 qpair failed and we were unable to recover it. 
00:31:56.463 [2024-04-15 18:18:45.300196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.463 [2024-04-15 18:18:45.300369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.463 [2024-04-15 18:18:45.300413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.463 qpair failed and we were unable to recover it. 00:31:56.463 [2024-04-15 18:18:45.300603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.463 [2024-04-15 18:18:45.300755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.463 [2024-04-15 18:18:45.300799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.463 qpair failed and we were unable to recover it. 00:31:56.463 [2024-04-15 18:18:45.300967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.463 [2024-04-15 18:18:45.301157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.463 [2024-04-15 18:18:45.301184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.463 qpair failed and we were unable to recover it. 00:31:56.463 [2024-04-15 18:18:45.301331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.463 [2024-04-15 18:18:45.301528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.463 [2024-04-15 18:18:45.301581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.463 qpair failed and we were unable to recover it. 00:31:56.463 [2024-04-15 18:18:45.301777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.463 [2024-04-15 18:18:45.301946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.463 [2024-04-15 18:18:45.301971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.463 qpair failed and we were unable to recover it. 00:31:56.463 [2024-04-15 18:18:45.302167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.463 [2024-04-15 18:18:45.302394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.463 [2024-04-15 18:18:45.302436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.463 qpair failed and we were unable to recover it. 00:31:56.463 [2024-04-15 18:18:45.302621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.463 [2024-04-15 18:18:45.302797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.463 [2024-04-15 18:18:45.302837] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.463 qpair failed and we were unable to recover it. 
00:31:56.463 [2024-04-15 18:18:45.303012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.463 [2024-04-15 18:18:45.303175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.463 [2024-04-15 18:18:45.303220] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.463 qpair failed and we were unable to recover it. 00:31:56.463 [2024-04-15 18:18:45.303403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.463 [2024-04-15 18:18:45.303618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.463 [2024-04-15 18:18:45.303663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.463 qpair failed and we were unable to recover it. 00:31:56.463 [2024-04-15 18:18:45.303887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.463 [2024-04-15 18:18:45.304053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.463 [2024-04-15 18:18:45.304086] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.463 qpair failed and we were unable to recover it. 00:31:56.463 [2024-04-15 18:18:45.304255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.463 [2024-04-15 18:18:45.304430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.463 [2024-04-15 18:18:45.304460] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.463 qpair failed and we were unable to recover it. 00:31:56.463 [2024-04-15 18:18:45.304624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.463 [2024-04-15 18:18:45.304771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.463 [2024-04-15 18:18:45.304815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.463 qpair failed and we were unable to recover it. 00:31:56.463 [2024-04-15 18:18:45.304969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.463 [2024-04-15 18:18:45.305103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.463 [2024-04-15 18:18:45.305130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.463 qpair failed and we were unable to recover it. 00:31:56.463 [2024-04-15 18:18:45.305293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.464 [2024-04-15 18:18:45.305483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.464 [2024-04-15 18:18:45.305507] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.464 qpair failed and we were unable to recover it. 
00:31:56.464 [2024-04-15 18:18:45.305657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.464 [2024-04-15 18:18:45.305838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.464 [2024-04-15 18:18:45.305864] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.464 qpair failed and we were unable to recover it. 00:31:56.464 [2024-04-15 18:18:45.306083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.464 [2024-04-15 18:18:45.306220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.464 [2024-04-15 18:18:45.306266] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.464 qpair failed and we were unable to recover it. 00:31:56.464 [2024-04-15 18:18:45.306404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.464 [2024-04-15 18:18:45.306552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.464 [2024-04-15 18:18:45.306595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.464 qpair failed and we were unable to recover it. 00:31:56.464 [2024-04-15 18:18:45.306724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.464 [2024-04-15 18:18:45.306904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.464 [2024-04-15 18:18:45.306930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.464 qpair failed and we were unable to recover it. 00:31:56.464 [2024-04-15 18:18:45.307133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.464 [2024-04-15 18:18:45.307303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.464 [2024-04-15 18:18:45.307333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.464 qpair failed and we were unable to recover it. 00:31:56.464 [2024-04-15 18:18:45.307472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.464 [2024-04-15 18:18:45.307624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.464 [2024-04-15 18:18:45.307650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.464 qpair failed and we were unable to recover it. 00:31:56.464 [2024-04-15 18:18:45.307807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.464 [2024-04-15 18:18:45.307927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.464 [2024-04-15 18:18:45.307951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.464 qpair failed and we were unable to recover it. 
00:31:56.464 [2024-04-15 18:18:45.308128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.464 [2024-04-15 18:18:45.308266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.464 [2024-04-15 18:18:45.308293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.464 qpair failed and we were unable to recover it. 00:31:56.464 [2024-04-15 18:18:45.308470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.464 [2024-04-15 18:18:45.308627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.464 [2024-04-15 18:18:45.308652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.464 qpair failed and we were unable to recover it. 00:31:56.464 [2024-04-15 18:18:45.308862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.464 [2024-04-15 18:18:45.308977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.464 [2024-04-15 18:18:45.309001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.464 qpair failed and we were unable to recover it. 00:31:56.464 [2024-04-15 18:18:45.309164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.464 [2024-04-15 18:18:45.309338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.464 [2024-04-15 18:18:45.309365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.464 qpair failed and we were unable to recover it. 00:31:56.464 [2024-04-15 18:18:45.309513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.464 [2024-04-15 18:18:45.309654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.464 [2024-04-15 18:18:45.309697] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.464 qpair failed and we were unable to recover it. 00:31:56.464 [2024-04-15 18:18:45.309898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.464 [2024-04-15 18:18:45.310082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.464 [2024-04-15 18:18:45.310110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.464 qpair failed and we were unable to recover it. 00:31:56.464 [2024-04-15 18:18:45.310276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.464 [2024-04-15 18:18:45.310487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.464 [2024-04-15 18:18:45.310532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.464 qpair failed and we were unable to recover it. 
00:31:56.464 [2024-04-15 18:18:45.310743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.464 [2024-04-15 18:18:45.310910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.464 [2024-04-15 18:18:45.310938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.464 qpair failed and we were unable to recover it. 00:31:56.464 [2024-04-15 18:18:45.311072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.464 [2024-04-15 18:18:45.311235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.464 [2024-04-15 18:18:45.311281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.464 qpair failed and we were unable to recover it. 00:31:56.464 [2024-04-15 18:18:45.311453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.464 [2024-04-15 18:18:45.311638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.464 [2024-04-15 18:18:45.311681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.464 qpair failed and we were unable to recover it. 00:31:56.464 [2024-04-15 18:18:45.311867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.464 [2024-04-15 18:18:45.312023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.464 [2024-04-15 18:18:45.312068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.464 qpair failed and we were unable to recover it. 00:31:56.464 [2024-04-15 18:18:45.312259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.464 [2024-04-15 18:18:45.312485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.464 [2024-04-15 18:18:45.312539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.464 qpair failed and we were unable to recover it. 00:31:56.464 [2024-04-15 18:18:45.312693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.464 [2024-04-15 18:18:45.312859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.464 [2024-04-15 18:18:45.312897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.464 qpair failed and we were unable to recover it. 00:31:56.464 [2024-04-15 18:18:45.313073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.464 [2024-04-15 18:18:45.313237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.464 [2024-04-15 18:18:45.313281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.464 qpair failed and we were unable to recover it. 
00:31:56.464 [2024-04-15 18:18:45.313503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.464 [2024-04-15 18:18:45.313688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.464 [2024-04-15 18:18:45.313731] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.464 qpair failed and we were unable to recover it. 00:31:56.464 [2024-04-15 18:18:45.313919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.464 [2024-04-15 18:18:45.314051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.464 [2024-04-15 18:18:45.314120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.464 qpair failed and we were unable to recover it. 00:31:56.464 [2024-04-15 18:18:45.314294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.464 [2024-04-15 18:18:45.314506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.464 [2024-04-15 18:18:45.314550] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.464 qpair failed and we were unable to recover it. 00:31:56.464 [2024-04-15 18:18:45.314706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.464 [2024-04-15 18:18:45.314880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.464 [2024-04-15 18:18:45.314908] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.464 qpair failed and we were unable to recover it. 00:31:56.464 [2024-04-15 18:18:45.315053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.464 [2024-04-15 18:18:45.315237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.465 [2024-04-15 18:18:45.315281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.465 qpair failed and we were unable to recover it. 00:31:56.465 [2024-04-15 18:18:45.315425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.465 [2024-04-15 18:18:45.315588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.465 [2024-04-15 18:18:45.315635] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.465 qpair failed and we were unable to recover it. 00:31:56.465 [2024-04-15 18:18:45.315800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.465 [2024-04-15 18:18:45.315959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.465 [2024-04-15 18:18:45.315983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.465 qpair failed and we were unable to recover it. 
00:31:56.465 [2024-04-15 18:18:45.316153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.465 [2024-04-15 18:18:45.316347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.465 [2024-04-15 18:18:45.316384] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.465 qpair failed and we were unable to recover it. 00:31:56.465 [2024-04-15 18:18:45.316625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.465 [2024-04-15 18:18:45.316807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.465 [2024-04-15 18:18:45.316831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.465 qpair failed and we were unable to recover it. 00:31:56.465 [2024-04-15 18:18:45.316952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.465 [2024-04-15 18:18:45.317134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.465 [2024-04-15 18:18:45.317165] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.465 qpair failed and we were unable to recover it. 00:31:56.465 [2024-04-15 18:18:45.317352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.465 [2024-04-15 18:18:45.317526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.465 [2024-04-15 18:18:45.317549] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.465 qpair failed and we were unable to recover it. 00:31:56.465 [2024-04-15 18:18:45.317754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.465 [2024-04-15 18:18:45.317930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.465 [2024-04-15 18:18:45.317954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.465 qpair failed and we were unable to recover it. 00:31:56.465 [2024-04-15 18:18:45.318127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.465 [2024-04-15 18:18:45.318275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.465 [2024-04-15 18:18:45.318302] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.465 qpair failed and we were unable to recover it. 00:31:56.465 [2024-04-15 18:18:45.318517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.465 [2024-04-15 18:18:45.318728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.465 [2024-04-15 18:18:45.318773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.465 qpair failed and we were unable to recover it. 
00:31:56.465 [2024-04-15 18:18:45.318954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.465 [2024-04-15 18:18:45.319137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.465 [2024-04-15 18:18:45.319164] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.465 qpair failed and we were unable to recover it. 00:31:56.465 [2024-04-15 18:18:45.319328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.465 [2024-04-15 18:18:45.319504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.465 [2024-04-15 18:18:45.319546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.465 qpair failed and we were unable to recover it. 00:31:56.465 [2024-04-15 18:18:45.319744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.465 [2024-04-15 18:18:45.319870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.465 [2024-04-15 18:18:45.319894] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.465 qpair failed and we were unable to recover it. 00:31:56.465 [2024-04-15 18:18:45.320075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.465 [2024-04-15 18:18:45.320256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.465 [2024-04-15 18:18:45.320282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.465 qpair failed and we were unable to recover it. 00:31:56.465 [2024-04-15 18:18:45.320433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.465 [2024-04-15 18:18:45.320595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.465 [2024-04-15 18:18:45.320638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.465 qpair failed and we were unable to recover it. 00:31:56.465 [2024-04-15 18:18:45.320827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.465 [2024-04-15 18:18:45.320978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.465 [2024-04-15 18:18:45.321016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.465 qpair failed and we were unable to recover it. 00:31:56.465 [2024-04-15 18:18:45.321184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.465 [2024-04-15 18:18:45.321362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.465 [2024-04-15 18:18:45.321404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.465 qpair failed and we were unable to recover it. 
00:31:56.465 [2024-04-15 18:18:45.321560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.465 [2024-04-15 18:18:45.321719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.465 [2024-04-15 18:18:45.321762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.465 qpair failed and we were unable to recover it. 00:31:56.465 [2024-04-15 18:18:45.321903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.465 [2024-04-15 18:18:45.322093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.465 [2024-04-15 18:18:45.322122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.465 qpair failed and we were unable to recover it. 00:31:56.465 [2024-04-15 18:18:45.322270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.465 [2024-04-15 18:18:45.322459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.465 [2024-04-15 18:18:45.322493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.465 qpair failed and we were unable to recover it. 00:31:56.465 [2024-04-15 18:18:45.322678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.465 [2024-04-15 18:18:45.322833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.465 [2024-04-15 18:18:45.322856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.465 qpair failed and we were unable to recover it. 00:31:56.465 [2024-04-15 18:18:45.323026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.465 [2024-04-15 18:18:45.323212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.465 [2024-04-15 18:18:45.323257] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.465 qpair failed and we were unable to recover it. 00:31:56.465 [2024-04-15 18:18:45.323450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.465 [2024-04-15 18:18:45.323605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.465 [2024-04-15 18:18:45.323648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.465 qpair failed and we were unable to recover it. 00:31:56.465 [2024-04-15 18:18:45.323833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.466 [2024-04-15 18:18:45.323980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.466 [2024-04-15 18:18:45.324004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.466 qpair failed and we were unable to recover it. 
00:31:56.466 [2024-04-15 18:18:45.324209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.466 [2024-04-15 18:18:45.324369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.466 [2024-04-15 18:18:45.324398] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.466 qpair failed and we were unable to recover it. 00:31:56.466 [2024-04-15 18:18:45.324591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.466 [2024-04-15 18:18:45.324814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.466 [2024-04-15 18:18:45.324844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.466 qpair failed and we were unable to recover it. 00:31:56.466 [2024-04-15 18:18:45.325021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.466 [2024-04-15 18:18:45.325208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.466 [2024-04-15 18:18:45.325252] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.466 qpair failed and we were unable to recover it. 00:31:56.466 [2024-04-15 18:18:45.325434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.466 [2024-04-15 18:18:45.325619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.466 [2024-04-15 18:18:45.325662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.466 qpair failed and we were unable to recover it. 00:31:56.466 [2024-04-15 18:18:45.325812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.466 [2024-04-15 18:18:45.325982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.466 [2024-04-15 18:18:45.326007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.466 qpair failed and we were unable to recover it. 00:31:56.466 [2024-04-15 18:18:45.326193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.466 [2024-04-15 18:18:45.326366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.466 [2024-04-15 18:18:45.326396] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.466 qpair failed and we were unable to recover it. 00:31:56.466 [2024-04-15 18:18:45.326579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.466 [2024-04-15 18:18:45.326773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.466 [2024-04-15 18:18:45.326816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.466 qpair failed and we were unable to recover it. 
00:31:56.466 [2024-04-15 18:18:45.326971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.466 [2024-04-15 18:18:45.327180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.466 [2024-04-15 18:18:45.327207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.466 qpair failed and we were unable to recover it. 00:31:56.466 [2024-04-15 18:18:45.327415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.466 [2024-04-15 18:18:45.327591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.466 [2024-04-15 18:18:45.327633] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.466 qpair failed and we were unable to recover it. 00:31:56.466 [2024-04-15 18:18:45.327774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.466 [2024-04-15 18:18:45.327935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.466 [2024-04-15 18:18:45.327959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.466 qpair failed and we were unable to recover it. 00:31:56.466 [2024-04-15 18:18:45.328152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.466 [2024-04-15 18:18:45.328332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.466 [2024-04-15 18:18:45.328377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.466 qpair failed and we were unable to recover it. 00:31:56.466 [2024-04-15 18:18:45.328585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.466 [2024-04-15 18:18:45.328742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.466 [2024-04-15 18:18:45.328785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.466 qpair failed and we were unable to recover it. 00:31:56.466 [2024-04-15 18:18:45.328963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.466 [2024-04-15 18:18:45.329162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.466 [2024-04-15 18:18:45.329193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.466 qpair failed and we were unable to recover it. 00:31:56.466 [2024-04-15 18:18:45.329392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.466 [2024-04-15 18:18:45.329608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.466 [2024-04-15 18:18:45.329651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.466 qpair failed and we were unable to recover it. 
00:31:56.466 [2024-04-15 18:18:45.329784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.466 [2024-04-15 18:18:45.329950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.466 [2024-04-15 18:18:45.329974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.466 qpair failed and we were unable to recover it. 00:31:56.466 [2024-04-15 18:18:45.330161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.466 [2024-04-15 18:18:45.330349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.466 [2024-04-15 18:18:45.330392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.466 qpair failed and we were unable to recover it. 00:31:56.466 [2024-04-15 18:18:45.330592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.466 [2024-04-15 18:18:45.330786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.466 [2024-04-15 18:18:45.330829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.466 qpair failed and we were unable to recover it. 00:31:56.466 [2024-04-15 18:18:45.330963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.466 [2024-04-15 18:18:45.331133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.466 [2024-04-15 18:18:45.331163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.466 qpair failed and we were unable to recover it. 00:31:56.466 [2024-04-15 18:18:45.331381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.466 [2024-04-15 18:18:45.331548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.466 [2024-04-15 18:18:45.331591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.466 qpair failed and we were unable to recover it. 00:31:56.466 [2024-04-15 18:18:45.331731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.466 [2024-04-15 18:18:45.331909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.466 [2024-04-15 18:18:45.331934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.466 qpair failed and we were unable to recover it. 00:31:56.466 [2024-04-15 18:18:45.332129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.466 [2024-04-15 18:18:45.332294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.466 [2024-04-15 18:18:45.332320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.466 qpair failed and we were unable to recover it. 
00:31:56.466 [2024-04-15 18:18:45.332504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.466 [2024-04-15 18:18:45.332715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.466 [2024-04-15 18:18:45.332755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.466 qpair failed and we were unable to recover it. 00:31:56.466 [2024-04-15 18:18:45.332906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.466 [2024-04-15 18:18:45.333083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.466 [2024-04-15 18:18:45.333110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.466 qpair failed and we were unable to recover it. 00:31:56.466 [2024-04-15 18:18:45.333295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.466 [2024-04-15 18:18:45.333491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.466 [2024-04-15 18:18:45.333533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.466 qpair failed and we were unable to recover it. 00:31:56.466 [2024-04-15 18:18:45.333682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.466 [2024-04-15 18:18:45.333844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.466 [2024-04-15 18:18:45.333885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.466 qpair failed and we were unable to recover it. 00:31:56.466 [2024-04-15 18:18:45.334073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.466 [2024-04-15 18:18:45.334222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.466 [2024-04-15 18:18:45.334264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.466 qpair failed and we were unable to recover it. 00:31:56.466 [2024-04-15 18:18:45.334424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.466 [2024-04-15 18:18:45.334647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.466 [2024-04-15 18:18:45.334677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.466 qpair failed and we were unable to recover it. 00:31:56.466 [2024-04-15 18:18:45.334868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.467 [2024-04-15 18:18:45.335069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.467 [2024-04-15 18:18:45.335095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.467 qpair failed and we were unable to recover it. 
00:31:56.467 [2024-04-15 18:18:45.335248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.467 [2024-04-15 18:18:45.335393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.467 [2024-04-15 18:18:45.335436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.467 qpair failed and we were unable to recover it. 00:31:56.467 [2024-04-15 18:18:45.335624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.467 [2024-04-15 18:18:45.335792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.467 [2024-04-15 18:18:45.335835] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.467 qpair failed and we were unable to recover it. 00:31:56.467 [2024-04-15 18:18:45.336052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.467 [2024-04-15 18:18:45.336246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.467 [2024-04-15 18:18:45.336272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.467 qpair failed and we were unable to recover it. 00:31:56.467 [2024-04-15 18:18:45.336492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.467 [2024-04-15 18:18:45.336657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.467 [2024-04-15 18:18:45.336699] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.467 qpair failed and we were unable to recover it. 00:31:56.467 [2024-04-15 18:18:45.336842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.467 [2024-04-15 18:18:45.337035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.467 [2024-04-15 18:18:45.337064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.467 qpair failed and we were unable to recover it. 00:31:56.467 [2024-04-15 18:18:45.337250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.467 [2024-04-15 18:18:45.337439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.467 [2024-04-15 18:18:45.337482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.467 qpair failed and we were unable to recover it. 00:31:56.467 [2024-04-15 18:18:45.337665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.467 [2024-04-15 18:18:45.337857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.467 [2024-04-15 18:18:45.337903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.467 qpair failed and we were unable to recover it. 
00:31:56.467 [2024-04-15 18:18:45.338074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.467 [2024-04-15 18:18:45.338208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.467 [2024-04-15 18:18:45.338236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.467 qpair failed and we were unable to recover it. 00:31:56.467 [2024-04-15 18:18:45.338419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.467 [2024-04-15 18:18:45.338608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.467 [2024-04-15 18:18:45.338650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.467 qpair failed and we were unable to recover it. 00:31:56.467 [2024-04-15 18:18:45.338864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.467 [2024-04-15 18:18:45.339056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.467 [2024-04-15 18:18:45.339113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.467 qpair failed and we were unable to recover it. 00:31:56.467 [2024-04-15 18:18:45.339342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.467 [2024-04-15 18:18:45.339493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.467 [2024-04-15 18:18:45.339536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.467 qpair failed and we were unable to recover it. 00:31:56.467 [2024-04-15 18:18:45.339725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.467 [2024-04-15 18:18:45.339882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.467 [2024-04-15 18:18:45.339922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.467 qpair failed and we were unable to recover it. 00:31:56.467 [2024-04-15 18:18:45.340066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.467 [2024-04-15 18:18:45.340289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.467 [2024-04-15 18:18:45.340333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.467 qpair failed and we were unable to recover it. 00:31:56.467 [2024-04-15 18:18:45.340510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.467 [2024-04-15 18:18:45.340722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.467 [2024-04-15 18:18:45.340765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.467 qpair failed and we were unable to recover it. 
00:31:56.467 [2024-04-15 18:18:45.340941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.467 [2024-04-15 18:18:45.341126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.467 [2024-04-15 18:18:45.341157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.467 qpair failed and we were unable to recover it. 00:31:56.467 [2024-04-15 18:18:45.341365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.467 [2024-04-15 18:18:45.341593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.467 [2024-04-15 18:18:45.341636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.467 qpair failed and we were unable to recover it. 00:31:56.467 [2024-04-15 18:18:45.341802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.467 [2024-04-15 18:18:45.342020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.467 [2024-04-15 18:18:45.342044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.467 qpair failed and we were unable to recover it. 00:31:56.467 [2024-04-15 18:18:45.342268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.467 [2024-04-15 18:18:45.342501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.467 [2024-04-15 18:18:45.342542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.467 qpair failed and we were unable to recover it. 00:31:56.467 [2024-04-15 18:18:45.342764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.467 [2024-04-15 18:18:45.342932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.467 [2024-04-15 18:18:45.342956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.467 qpair failed and we were unable to recover it. 00:31:56.467 [2024-04-15 18:18:45.343129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.467 [2024-04-15 18:18:45.343280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.467 [2024-04-15 18:18:45.343323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.467 qpair failed and we were unable to recover it. 00:31:56.467 [2024-04-15 18:18:45.343531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.467 [2024-04-15 18:18:45.343689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.467 [2024-04-15 18:18:45.343733] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.467 qpair failed and we were unable to recover it. 
00:31:56.467 [2024-04-15 18:18:45.343920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.467 [2024-04-15 18:18:45.344129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.467 [2024-04-15 18:18:45.344154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.467 qpair failed and we were unable to recover it. 00:31:56.467 [2024-04-15 18:18:45.344357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.467 [2024-04-15 18:18:45.344528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.467 [2024-04-15 18:18:45.344563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.467 qpair failed and we were unable to recover it. 00:31:56.467 [2024-04-15 18:18:45.344713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.467 [2024-04-15 18:18:45.344913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.467 [2024-04-15 18:18:45.344938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.467 qpair failed and we were unable to recover it. 00:31:56.467 [2024-04-15 18:18:45.345141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.467 [2024-04-15 18:18:45.345305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.467 [2024-04-15 18:18:45.345330] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.467 qpair failed and we were unable to recover it. 00:31:56.467 [2024-04-15 18:18:45.345549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.467 [2024-04-15 18:18:45.345690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.467 [2024-04-15 18:18:45.345732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.467 qpair failed and we were unable to recover it. 00:31:56.467 [2024-04-15 18:18:45.345939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.467 [2024-04-15 18:18:45.346081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.467 [2024-04-15 18:18:45.346122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.467 qpair failed and we were unable to recover it. 00:31:56.467 [2024-04-15 18:18:45.346255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.468 [2024-04-15 18:18:45.346449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.468 [2024-04-15 18:18:45.346473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.468 qpair failed and we were unable to recover it. 
00:31:56.468 [2024-04-15 18:18:45.346640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.468 [2024-04-15 18:18:45.346813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.468 [2024-04-15 18:18:45.346846] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.468 qpair failed and we were unable to recover it. 00:31:56.468 [2024-04-15 18:18:45.347132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.468 [2024-04-15 18:18:45.347281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.468 [2024-04-15 18:18:45.347329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.468 qpair failed and we were unable to recover it. 00:31:56.468 [2024-04-15 18:18:45.347578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.468 [2024-04-15 18:18:45.347848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.468 [2024-04-15 18:18:45.347891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.468 qpair failed and we were unable to recover it. 00:31:56.468 [2024-04-15 18:18:45.348139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.468 [2024-04-15 18:18:45.348323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.468 [2024-04-15 18:18:45.348364] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.468 qpair failed and we were unable to recover it. 00:31:56.468 [2024-04-15 18:18:45.348583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.468 [2024-04-15 18:18:45.348797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.468 [2024-04-15 18:18:45.348846] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.468 qpair failed and we were unable to recover it. 00:31:56.468 [2024-04-15 18:18:45.349073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.468 [2024-04-15 18:18:45.349283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.468 [2024-04-15 18:18:45.349310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.468 qpair failed and we were unable to recover it. 00:31:56.468 [2024-04-15 18:18:45.349568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.468 [2024-04-15 18:18:45.349791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.468 [2024-04-15 18:18:45.349834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.468 qpair failed and we were unable to recover it. 
00:31:56.468 [2024-04-15 18:18:45.350106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.468 [2024-04-15 18:18:45.350241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.468 [2024-04-15 18:18:45.350265] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.468 qpair failed and we were unable to recover it. 00:31:56.468 [2024-04-15 18:18:45.350507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.468 [2024-04-15 18:18:45.350708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.468 [2024-04-15 18:18:45.350750] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.468 qpair failed and we were unable to recover it. 00:31:56.468 [2024-04-15 18:18:45.350980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.468 [2024-04-15 18:18:45.351107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.468 [2024-04-15 18:18:45.351132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.468 qpair failed and we were unable to recover it. 00:31:56.468 [2024-04-15 18:18:45.351347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.468 [2024-04-15 18:18:45.351542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.468 [2024-04-15 18:18:45.351584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.468 qpair failed and we were unable to recover it. 00:31:56.468 [2024-04-15 18:18:45.351830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.468 [2024-04-15 18:18:45.352129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.468 [2024-04-15 18:18:45.352153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.468 qpair failed and we were unable to recover it. 00:31:56.468 [2024-04-15 18:18:45.352380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.468 [2024-04-15 18:18:45.352646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.468 [2024-04-15 18:18:45.352689] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.468 qpair failed and we were unable to recover it. 00:31:56.468 [2024-04-15 18:18:45.352927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.468 [2024-04-15 18:18:45.353077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.468 [2024-04-15 18:18:45.353102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.468 qpair failed and we were unable to recover it. 
00:31:56.468 [2024-04-15 18:18:45.353279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.468 [2024-04-15 18:18:45.353461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.468 [2024-04-15 18:18:45.353490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.468 qpair failed and we were unable to recover it. 00:31:56.468 [2024-04-15 18:18:45.353711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.468 [2024-04-15 18:18:45.353958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.468 [2024-04-15 18:18:45.353981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.468 qpair failed and we were unable to recover it. 00:31:56.468 [2024-04-15 18:18:45.354217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.468 [2024-04-15 18:18:45.354476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.468 [2024-04-15 18:18:45.354518] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.468 qpair failed and we were unable to recover it. 00:31:56.468 [2024-04-15 18:18:45.354694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.468 [2024-04-15 18:18:45.354944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.468 [2024-04-15 18:18:45.354987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.468 qpair failed and we were unable to recover it. 00:31:56.468 [2024-04-15 18:18:45.355183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.468 [2024-04-15 18:18:45.355397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.468 [2024-04-15 18:18:45.355438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.468 qpair failed and we were unable to recover it. 00:31:56.468 [2024-04-15 18:18:45.355693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.468 [2024-04-15 18:18:45.355955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.468 [2024-04-15 18:18:45.355996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.468 qpair failed and we were unable to recover it. 00:31:56.468 [2024-04-15 18:18:45.356187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.468 [2024-04-15 18:18:45.356367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.468 [2024-04-15 18:18:45.356409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.468 qpair failed and we were unable to recover it. 
00:31:56.468 [2024-04-15 18:18:45.356617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.468 [2024-04-15 18:18:45.356859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.468 [2024-04-15 18:18:45.356900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.468 qpair failed and we were unable to recover it. 00:31:56.468 [2024-04-15 18:18:45.357159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.468 [2024-04-15 18:18:45.357360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.468 [2024-04-15 18:18:45.357403] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.468 qpair failed and we were unable to recover it. 00:31:56.468 [2024-04-15 18:18:45.357650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.468 [2024-04-15 18:18:45.357850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.468 [2024-04-15 18:18:45.357893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.468 qpair failed and we were unable to recover it. 00:31:56.468 [2024-04-15 18:18:45.358045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.468 [2024-04-15 18:18:45.358203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.468 [2024-04-15 18:18:45.358227] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.468 qpair failed and we were unable to recover it. 00:31:56.468 [2024-04-15 18:18:45.358495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.469 [2024-04-15 18:18:45.358725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.469 [2024-04-15 18:18:45.358767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.469 qpair failed and we were unable to recover it. 00:31:56.469 [2024-04-15 18:18:45.358988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.469 [2024-04-15 18:18:45.359148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.469 [2024-04-15 18:18:45.359172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.469 qpair failed and we were unable to recover it. 00:31:56.469 [2024-04-15 18:18:45.359408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.469 [2024-04-15 18:18:45.359675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.469 [2024-04-15 18:18:45.359718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.469 qpair failed and we were unable to recover it. 
00:31:56.469 [2024-04-15 18:18:45.359997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.469 [2024-04-15 18:18:45.360195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.469 [2024-04-15 18:18:45.360220] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.469 qpair failed and we were unable to recover it. 00:31:56.469 [2024-04-15 18:18:45.360362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.469 [2024-04-15 18:18:45.360548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.469 [2024-04-15 18:18:45.360599] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.469 qpair failed and we were unable to recover it. 00:31:56.469 [2024-04-15 18:18:45.360817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.469 [2024-04-15 18:18:45.361087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.469 [2024-04-15 18:18:45.361112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.469 qpair failed and we were unable to recover it. 00:31:56.469 [2024-04-15 18:18:45.361317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.469 [2024-04-15 18:18:45.361519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.469 [2024-04-15 18:18:45.361560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.469 qpair failed and we were unable to recover it. 00:31:56.469 [2024-04-15 18:18:45.361747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.469 [2024-04-15 18:18:45.361895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.469 [2024-04-15 18:18:45.361932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.469 qpair failed and we were unable to recover it. 00:31:56.469 [2024-04-15 18:18:45.362120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.469 [2024-04-15 18:18:45.362376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.469 [2024-04-15 18:18:45.362419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.469 qpair failed and we were unable to recover it. 00:31:56.469 [2024-04-15 18:18:45.362651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.469 [2024-04-15 18:18:45.362909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.469 [2024-04-15 18:18:45.362952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.469 qpair failed and we were unable to recover it. 
00:31:56.744 [2024-04-15 18:18:45.428294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.744 [2024-04-15 18:18:45.428478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.744 [2024-04-15 18:18:45.428521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.744 qpair failed and we were unable to recover it. 00:31:56.744 [2024-04-15 18:18:45.428668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.744 [2024-04-15 18:18:45.428865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.744 [2024-04-15 18:18:45.428906] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.744 qpair failed and we were unable to recover it. 00:31:56.744 [2024-04-15 18:18:45.429131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.744 [2024-04-15 18:18:45.429332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.744 [2024-04-15 18:18:45.429374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.744 qpair failed and we were unable to recover it. 00:31:56.744 [2024-04-15 18:18:45.429610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.745 [2024-04-15 18:18:45.429913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.745 [2024-04-15 18:18:45.429955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.745 qpair failed and we were unable to recover it. 00:31:56.745 [2024-04-15 18:18:45.430215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.745 [2024-04-15 18:18:45.430420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.745 [2024-04-15 18:18:45.430462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.745 qpair failed and we were unable to recover it. 00:31:56.745 [2024-04-15 18:18:45.430642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.745 [2024-04-15 18:18:45.430829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.745 [2024-04-15 18:18:45.430870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.745 qpair failed and we were unable to recover it. 00:31:56.745 [2024-04-15 18:18:45.431080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.745 [2024-04-15 18:18:45.431271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.745 [2024-04-15 18:18:45.431295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.745 qpair failed and we were unable to recover it. 
00:31:56.745 [2024-04-15 18:18:45.431561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.745 [2024-04-15 18:18:45.431806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.745 [2024-04-15 18:18:45.431847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.745 qpair failed and we were unable to recover it. 00:31:56.745 [2024-04-15 18:18:45.432125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.745 [2024-04-15 18:18:45.432294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.745 [2024-04-15 18:18:45.432317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.745 qpair failed and we were unable to recover it. 00:31:56.745 [2024-04-15 18:18:45.432505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.745 [2024-04-15 18:18:45.432694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.745 [2024-04-15 18:18:45.432736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.745 qpair failed and we were unable to recover it. 00:31:56.745 [2024-04-15 18:18:45.432935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.745 [2024-04-15 18:18:45.433145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.745 [2024-04-15 18:18:45.433168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.745 qpair failed and we were unable to recover it. 00:31:56.745 [2024-04-15 18:18:45.433428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.745 [2024-04-15 18:18:45.433740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.745 [2024-04-15 18:18:45.433783] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.745 qpair failed and we were unable to recover it. 00:31:56.745 [2024-04-15 18:18:45.433998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.745 [2024-04-15 18:18:45.434287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.745 [2024-04-15 18:18:45.434312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.745 qpair failed and we were unable to recover it. 00:31:56.745 [2024-04-15 18:18:45.434579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.745 [2024-04-15 18:18:45.434773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.745 [2024-04-15 18:18:45.434816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.745 qpair failed and we were unable to recover it. 
00:31:56.745 [2024-04-15 18:18:45.435026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.745 [2024-04-15 18:18:45.435248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.745 [2024-04-15 18:18:45.435272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.745 qpair failed and we were unable to recover it. 00:31:56.745 [2024-04-15 18:18:45.435474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.745 [2024-04-15 18:18:45.435677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.745 [2024-04-15 18:18:45.435719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.745 qpair failed and we were unable to recover it. 00:31:56.745 [2024-04-15 18:18:45.435938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.745 [2024-04-15 18:18:45.436135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.745 [2024-04-15 18:18:45.436159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.745 qpair failed and we were unable to recover it. 00:31:56.745 [2024-04-15 18:18:45.436380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.745 [2024-04-15 18:18:45.436674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.745 [2024-04-15 18:18:45.436717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.745 qpair failed and we were unable to recover it. 00:31:56.745 [2024-04-15 18:18:45.436945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.745 [2024-04-15 18:18:45.437215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.745 [2024-04-15 18:18:45.437239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.745 qpair failed and we were unable to recover it. 00:31:56.745 [2024-04-15 18:18:45.437468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.745 [2024-04-15 18:18:45.437687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.745 [2024-04-15 18:18:45.437739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.745 qpair failed and we were unable to recover it. 00:31:56.745 [2024-04-15 18:18:45.437978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.745 [2024-04-15 18:18:45.438188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.745 [2024-04-15 18:18:45.438211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.745 qpair failed and we were unable to recover it. 
00:31:56.745 [2024-04-15 18:18:45.438379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.745 [2024-04-15 18:18:45.438557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.745 [2024-04-15 18:18:45.438598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.745 qpair failed and we were unable to recover it. 00:31:56.745 [2024-04-15 18:18:45.438829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.745 [2024-04-15 18:18:45.439081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.745 [2024-04-15 18:18:45.439105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.745 qpair failed and we were unable to recover it. 00:31:56.745 [2024-04-15 18:18:45.439277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.745 [2024-04-15 18:18:45.439455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.745 [2024-04-15 18:18:45.439497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.745 qpair failed and we were unable to recover it. 00:31:56.745 [2024-04-15 18:18:45.439708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.745 [2024-04-15 18:18:45.439919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.745 [2024-04-15 18:18:45.439960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.745 qpair failed and we were unable to recover it. 00:31:56.745 [2024-04-15 18:18:45.440138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.745 [2024-04-15 18:18:45.440346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.745 [2024-04-15 18:18:45.440388] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.745 qpair failed and we were unable to recover it. 00:31:56.745 [2024-04-15 18:18:45.440576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.745 [2024-04-15 18:18:45.440851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.745 [2024-04-15 18:18:45.440894] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.745 qpair failed and we were unable to recover it. 00:31:56.745 [2024-04-15 18:18:45.441143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.745 [2024-04-15 18:18:45.441323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.745 [2024-04-15 18:18:45.441365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.745 qpair failed and we were unable to recover it. 
00:31:56.745 [2024-04-15 18:18:45.441578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.745 [2024-04-15 18:18:45.441857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.745 [2024-04-15 18:18:45.441900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.745 qpair failed and we were unable to recover it. 00:31:56.745 [2024-04-15 18:18:45.442173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.745 [2024-04-15 18:18:45.442360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.745 [2024-04-15 18:18:45.442401] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.745 qpair failed and we were unable to recover it. 00:31:56.745 [2024-04-15 18:18:45.442666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.745 [2024-04-15 18:18:45.442926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.745 [2024-04-15 18:18:45.442968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.745 qpair failed and we were unable to recover it. 00:31:56.745 [2024-04-15 18:18:45.443174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.745 [2024-04-15 18:18:45.443378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.746 [2024-04-15 18:18:45.443431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.746 qpair failed and we were unable to recover it. 00:31:56.746 [2024-04-15 18:18:45.443625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.746 [2024-04-15 18:18:45.443816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.746 [2024-04-15 18:18:45.443858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.746 qpair failed and we were unable to recover it. 00:31:56.746 [2024-04-15 18:18:45.444103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.746 [2024-04-15 18:18:45.444284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.746 [2024-04-15 18:18:45.444326] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.746 qpair failed and we were unable to recover it. 00:31:56.746 [2024-04-15 18:18:45.444596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.746 [2024-04-15 18:18:45.444764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.746 [2024-04-15 18:18:45.444806] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.746 qpair failed and we were unable to recover it. 
00:31:56.746 [2024-04-15 18:18:45.444957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.746 [2024-04-15 18:18:45.445192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.746 [2024-04-15 18:18:45.445234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.746 qpair failed and we were unable to recover it. 00:31:56.746 [2024-04-15 18:18:45.445510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.746 [2024-04-15 18:18:45.445806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.746 [2024-04-15 18:18:45.445848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.746 qpair failed and we were unable to recover it. 00:31:56.746 [2024-04-15 18:18:45.446153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.746 [2024-04-15 18:18:45.446474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.746 [2024-04-15 18:18:45.446516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.746 qpair failed and we were unable to recover it. 00:31:56.746 [2024-04-15 18:18:45.446795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.746 [2024-04-15 18:18:45.446993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.746 [2024-04-15 18:18:45.447015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.746 qpair failed and we were unable to recover it. 00:31:56.746 [2024-04-15 18:18:45.447206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.746 [2024-04-15 18:18:45.447427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.746 [2024-04-15 18:18:45.447470] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.746 qpair failed and we were unable to recover it. 00:31:56.746 [2024-04-15 18:18:45.447674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.746 [2024-04-15 18:18:45.447858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.746 [2024-04-15 18:18:45.447900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.746 qpair failed and we were unable to recover it. 00:31:56.746 [2024-04-15 18:18:45.448068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.746 [2024-04-15 18:18:45.448205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.746 [2024-04-15 18:18:45.448229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.746 qpair failed and we were unable to recover it. 
00:31:56.746 [2024-04-15 18:18:45.448407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.746 [2024-04-15 18:18:45.448632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.746 [2024-04-15 18:18:45.448674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.746 qpair failed and we were unable to recover it. 00:31:56.746 [2024-04-15 18:18:45.448864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.746 [2024-04-15 18:18:45.449051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.746 [2024-04-15 18:18:45.449081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.746 qpair failed and we were unable to recover it. 00:31:56.746 [2024-04-15 18:18:45.449344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.746 [2024-04-15 18:18:45.449604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.746 [2024-04-15 18:18:45.449646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.746 qpair failed and we were unable to recover it. 00:31:56.746 [2024-04-15 18:18:45.449905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.746 [2024-04-15 18:18:45.450146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.746 [2024-04-15 18:18:45.450188] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.746 qpair failed and we were unable to recover it. 00:31:56.746 [2024-04-15 18:18:45.450451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.746 [2024-04-15 18:18:45.450677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.746 [2024-04-15 18:18:45.450719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.746 qpair failed and we were unable to recover it. 00:31:56.746 [2024-04-15 18:18:45.450924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.746 [2024-04-15 18:18:45.451090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.746 [2024-04-15 18:18:45.451114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.746 qpair failed and we were unable to recover it. 00:31:56.746 [2024-04-15 18:18:45.451333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.746 [2024-04-15 18:18:45.451561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.746 [2024-04-15 18:18:45.451603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.746 qpair failed and we were unable to recover it. 
00:31:56.746 [2024-04-15 18:18:45.451823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.746 [2024-04-15 18:18:45.452045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.746 [2024-04-15 18:18:45.452087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.746 qpair failed and we were unable to recover it. 00:31:56.746 [2024-04-15 18:18:45.452218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.746 [2024-04-15 18:18:45.452450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.746 [2024-04-15 18:18:45.452491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.746 qpair failed and we were unable to recover it. 00:31:56.746 [2024-04-15 18:18:45.452677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.746 [2024-04-15 18:18:45.452893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.746 [2024-04-15 18:18:45.452933] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.746 qpair failed and we were unable to recover it. 00:31:56.746 [2024-04-15 18:18:45.453094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.746 [2024-04-15 18:18:45.453269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.746 [2024-04-15 18:18:45.453311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.746 qpair failed and we were unable to recover it. 00:31:56.746 [2024-04-15 18:18:45.453504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.746 [2024-04-15 18:18:45.453710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.746 [2024-04-15 18:18:45.453752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.746 qpair failed and we were unable to recover it. 00:31:56.746 [2024-04-15 18:18:45.454026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.746 [2024-04-15 18:18:45.454323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.747 [2024-04-15 18:18:45.454348] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.747 qpair failed and we were unable to recover it. 00:31:56.747 [2024-04-15 18:18:45.454564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.747 [2024-04-15 18:18:45.454859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.747 [2024-04-15 18:18:45.454902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.747 qpair failed and we were unable to recover it. 
00:31:56.747 [2024-04-15 18:18:45.455147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.747 [2024-04-15 18:18:45.455303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.747 [2024-04-15 18:18:45.455351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.747 qpair failed and we were unable to recover it. 00:31:56.747 [2024-04-15 18:18:45.455580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.747 [2024-04-15 18:18:45.455759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.747 [2024-04-15 18:18:45.455801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.747 qpair failed and we were unable to recover it. 00:31:56.747 [2024-04-15 18:18:45.455985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.747 [2024-04-15 18:18:45.456159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.747 [2024-04-15 18:18:45.456189] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.747 qpair failed and we were unable to recover it. 00:31:56.747 [2024-04-15 18:18:45.456382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.747 [2024-04-15 18:18:45.456592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.747 [2024-04-15 18:18:45.456640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.747 qpair failed and we were unable to recover it. 00:31:56.747 [2024-04-15 18:18:45.456930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.747 [2024-04-15 18:18:45.457137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.747 [2024-04-15 18:18:45.457179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.747 qpair failed and we were unable to recover it. 00:31:56.747 [2024-04-15 18:18:45.457373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.747 [2024-04-15 18:18:45.457589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.747 [2024-04-15 18:18:45.457640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.747 qpair failed and we were unable to recover it. 00:31:56.747 [2024-04-15 18:18:45.457883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.747 [2024-04-15 18:18:45.458094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.747 [2024-04-15 18:18:45.458118] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.747 qpair failed and we were unable to recover it. 
00:31:56.747 [2024-04-15 18:18:45.458427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.747 [2024-04-15 18:18:45.458692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.747 [2024-04-15 18:18:45.458734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.747 qpair failed and we were unable to recover it. 00:31:56.747 [2024-04-15 18:18:45.459002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.747 [2024-04-15 18:18:45.459363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.747 [2024-04-15 18:18:45.459418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.747 qpair failed and we were unable to recover it. 00:31:56.747 [2024-04-15 18:18:45.459774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.747 [2024-04-15 18:18:45.460073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.747 [2024-04-15 18:18:45.460098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.747 qpair failed and we were unable to recover it. 00:31:56.747 [2024-04-15 18:18:45.460357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.747 [2024-04-15 18:18:45.460592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.747 [2024-04-15 18:18:45.460647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.747 qpair failed and we were unable to recover it. 00:31:56.747 [2024-04-15 18:18:45.460960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.747 [2024-04-15 18:18:45.461168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.747 [2024-04-15 18:18:45.461192] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.747 qpair failed and we were unable to recover it. 00:31:56.747 [2024-04-15 18:18:45.461418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.747 [2024-04-15 18:18:45.461666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.747 [2024-04-15 18:18:45.461710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.747 qpair failed and we were unable to recover it. 00:31:56.747 [2024-04-15 18:18:45.462005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.747 [2024-04-15 18:18:45.462216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.747 [2024-04-15 18:18:45.462249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.747 qpair failed and we were unable to recover it. 
00:31:56.747 [2024-04-15 18:18:45.462426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.747 [2024-04-15 18:18:45.462713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.747 [2024-04-15 18:18:45.462764] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.747 qpair failed and we were unable to recover it. 00:31:56.747 [2024-04-15 18:18:45.462971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.747 [2024-04-15 18:18:45.463157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.747 [2024-04-15 18:18:45.463181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.747 qpair failed and we were unable to recover it. 00:31:56.747 [2024-04-15 18:18:45.463430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.747 [2024-04-15 18:18:45.463742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.747 [2024-04-15 18:18:45.463793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.747 qpair failed and we were unable to recover it. 00:31:56.747 [2024-04-15 18:18:45.464078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.747 [2024-04-15 18:18:45.464284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.747 [2024-04-15 18:18:45.464308] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.747 qpair failed and we were unable to recover it. 00:31:56.747 [2024-04-15 18:18:45.464589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.747 [2024-04-15 18:18:45.464807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.747 [2024-04-15 18:18:45.464854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.747 qpair failed and we were unable to recover it. 00:31:56.747 [2024-04-15 18:18:45.465075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.747 [2024-04-15 18:18:45.465293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.747 [2024-04-15 18:18:45.465316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.747 qpair failed and we were unable to recover it. 00:31:56.747 [2024-04-15 18:18:45.465504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.747 [2024-04-15 18:18:45.465761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.747 [2024-04-15 18:18:45.465815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.747 qpair failed and we were unable to recover it. 
00:31:56.747 [2024-04-15 18:18:45.466046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.747 [2024-04-15 18:18:45.466209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.747 [2024-04-15 18:18:45.466247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.747 qpair failed and we were unable to recover it. 00:31:56.747 [2024-04-15 18:18:45.466480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.747 [2024-04-15 18:18:45.466694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.747 [2024-04-15 18:18:45.466743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.747 qpair failed and we were unable to recover it. 00:31:56.747 [2024-04-15 18:18:45.466967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.747 [2024-04-15 18:18:45.467149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.747 [2024-04-15 18:18:45.467173] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.747 qpair failed and we were unable to recover it. 00:31:56.747 [2024-04-15 18:18:45.467375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.747 [2024-04-15 18:18:45.467582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.747 [2024-04-15 18:18:45.467632] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.747 qpair failed and we were unable to recover it. 00:31:56.747 [2024-04-15 18:18:45.467927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.747 [2024-04-15 18:18:45.468217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.747 [2024-04-15 18:18:45.468241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.747 qpair failed and we were unable to recover it. 00:31:56.747 [2024-04-15 18:18:45.468513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.747 [2024-04-15 18:18:45.468811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.747 [2024-04-15 18:18:45.468863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.747 qpair failed and we were unable to recover it. 00:31:56.747 [2024-04-15 18:18:45.469189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.747 [2024-04-15 18:18:45.469479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.747 [2024-04-15 18:18:45.469530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.748 qpair failed and we were unable to recover it. 
00:31:56.748 [2024-04-15 18:18:45.469851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.748 [2024-04-15 18:18:45.470153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.748 [2024-04-15 18:18:45.470177] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.748 qpair failed and we were unable to recover it. 00:31:56.748 [2024-04-15 18:18:45.470464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.748 [2024-04-15 18:18:45.470639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.748 [2024-04-15 18:18:45.470691] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.748 qpair failed and we were unable to recover it. 00:31:56.748 [2024-04-15 18:18:45.470883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.748 [2024-04-15 18:18:45.471149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.748 [2024-04-15 18:18:45.471178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.748 qpair failed and we were unable to recover it. 00:31:56.748 [2024-04-15 18:18:45.471406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.748 [2024-04-15 18:18:45.471621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.748 [2024-04-15 18:18:45.471673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.748 qpair failed and we were unable to recover it. 00:31:56.748 [2024-04-15 18:18:45.471897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.748 [2024-04-15 18:18:45.472136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.748 [2024-04-15 18:18:45.472159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.748 qpair failed and we were unable to recover it. 00:31:56.748 [2024-04-15 18:18:45.472385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.748 [2024-04-15 18:18:45.472580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.748 [2024-04-15 18:18:45.472633] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.748 qpair failed and we were unable to recover it. 00:31:56.748 [2024-04-15 18:18:45.472866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.748 [2024-04-15 18:18:45.473131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.748 [2024-04-15 18:18:45.473154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.748 qpair failed and we were unable to recover it. 
00:31:56.748 [2024-04-15 18:18:45.473342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.748 [2024-04-15 18:18:45.473589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.748 [2024-04-15 18:18:45.473638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.748 qpair failed and we were unable to recover it. 00:31:56.748 [2024-04-15 18:18:45.473870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.748 [2024-04-15 18:18:45.474069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.748 [2024-04-15 18:18:45.474093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.748 qpair failed and we were unable to recover it. 00:31:56.748 [2024-04-15 18:18:45.474312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.748 [2024-04-15 18:18:45.474506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.748 [2024-04-15 18:18:45.474558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.748 qpair failed and we were unable to recover it. 00:31:56.748 [2024-04-15 18:18:45.474861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.748 [2024-04-15 18:18:45.475128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.748 [2024-04-15 18:18:45.475153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.748 qpair failed and we were unable to recover it. 00:31:56.748 [2024-04-15 18:18:45.475375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.748 [2024-04-15 18:18:45.475579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.748 [2024-04-15 18:18:45.475630] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.748 qpair failed and we were unable to recover it. 00:31:56.748 [2024-04-15 18:18:45.475812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.748 [2024-04-15 18:18:45.475996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.748 [2024-04-15 18:18:45.476019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.748 qpair failed and we were unable to recover it. 00:31:56.748 [2024-04-15 18:18:45.476249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.748 [2024-04-15 18:18:45.476522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.748 [2024-04-15 18:18:45.476574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.748 qpair failed and we were unable to recover it. 
00:31:56.748 [2024-04-15 18:18:45.476789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.748 [2024-04-15 18:18:45.477023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.748 [2024-04-15 18:18:45.477080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.748 qpair failed and we were unable to recover it. 00:31:56.748 [2024-04-15 18:18:45.477355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.748 [2024-04-15 18:18:45.477580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.748 [2024-04-15 18:18:45.477630] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.748 qpair failed and we were unable to recover it. 00:31:56.748 [2024-04-15 18:18:45.477867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.748 [2024-04-15 18:18:45.478048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.748 [2024-04-15 18:18:45.478078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.748 qpair failed and we were unable to recover it. 00:31:56.748 [2024-04-15 18:18:45.478289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.748 [2024-04-15 18:18:45.478518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.748 [2024-04-15 18:18:45.478567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.748 qpair failed and we were unable to recover it. 00:31:56.748 [2024-04-15 18:18:45.478840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.748 [2024-04-15 18:18:45.479161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.748 [2024-04-15 18:18:45.479187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.748 qpair failed and we were unable to recover it. 00:31:56.748 [2024-04-15 18:18:45.479513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.748 [2024-04-15 18:18:45.479744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.748 [2024-04-15 18:18:45.479793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.748 qpair failed and we were unable to recover it. 00:31:56.748 [2024-04-15 18:18:45.480050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.748 [2024-04-15 18:18:45.480202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.748 [2024-04-15 18:18:45.480224] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.748 qpair failed and we were unable to recover it. 
00:31:56.748 [2024-04-15 18:18:45.480437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:56.748 [2024-04-15 18:18:45.480651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:56.748 [2024-04-15 18:18:45.480698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:56.748 qpair failed and we were unable to recover it.
[... the same four-line error sequence repeats for every further reconnect attempt, log timestamps 18:18:45.480 through 18:18:45.543 (console timestamps 00:31:56.748 through 00:31:56.754): each connect() to 10.0.0.2, port 4420 fails with errno = 111 (ECONNREFUSED) on tqpair 0x7f72f0000b90, and each qpair fails without recovery ...]
00:31:56.754 [2024-04-15 18:18:45.543265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.754 [2024-04-15 18:18:45.543446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.754 [2024-04-15 18:18:45.543489] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.754 qpair failed and we were unable to recover it. 00:31:56.754 [2024-04-15 18:18:45.543673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.754 [2024-04-15 18:18:45.543847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.754 [2024-04-15 18:18:45.543889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.754 qpair failed and we were unable to recover it. 00:31:56.754 [2024-04-15 18:18:45.544020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.754 [2024-04-15 18:18:45.544209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.754 [2024-04-15 18:18:45.544233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.754 qpair failed and we were unable to recover it. 00:31:56.754 [2024-04-15 18:18:45.544417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.754 [2024-04-15 18:18:45.544609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.754 [2024-04-15 18:18:45.544649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.754 qpair failed and we were unable to recover it. 00:31:56.754 [2024-04-15 18:18:45.544793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.754 [2024-04-15 18:18:45.544947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.754 [2024-04-15 18:18:45.544971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.754 qpair failed and we were unable to recover it. 00:31:56.754 [2024-04-15 18:18:45.545159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.754 [2024-04-15 18:18:45.545294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.754 [2024-04-15 18:18:45.545332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.754 qpair failed and we were unable to recover it. 00:31:56.754 [2024-04-15 18:18:45.545459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.754 [2024-04-15 18:18:45.545606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.754 [2024-04-15 18:18:45.545629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.754 qpair failed and we were unable to recover it. 
00:31:56.754 [2024-04-15 18:18:45.545820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.754 [2024-04-15 18:18:45.545978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.754 [2024-04-15 18:18:45.546001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.754 qpair failed and we were unable to recover it. 00:31:56.754 [2024-04-15 18:18:45.546240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.754 [2024-04-15 18:18:45.546463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.754 [2024-04-15 18:18:45.546507] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.754 qpair failed and we were unable to recover it. 00:31:56.754 [2024-04-15 18:18:45.546722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.754 [2024-04-15 18:18:45.546917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.754 [2024-04-15 18:18:45.546954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.754 qpair failed and we were unable to recover it. 00:31:56.754 [2024-04-15 18:18:45.547128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.754 [2024-04-15 18:18:45.547310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.754 [2024-04-15 18:18:45.547339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.754 qpair failed and we were unable to recover it. 00:31:56.754 [2024-04-15 18:18:45.547550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.754 [2024-04-15 18:18:45.547745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.754 [2024-04-15 18:18:45.547788] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.754 qpair failed and we were unable to recover it. 00:31:56.754 [2024-04-15 18:18:45.547960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.754 [2024-04-15 18:18:45.548118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.754 [2024-04-15 18:18:45.548162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.754 qpair failed and we were unable to recover it. 00:31:56.754 [2024-04-15 18:18:45.548321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.754 [2024-04-15 18:18:45.548470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.754 [2024-04-15 18:18:45.548511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.754 qpair failed and we were unable to recover it. 
00:31:56.754 [2024-04-15 18:18:45.548674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.754 [2024-04-15 18:18:45.548830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.754 [2024-04-15 18:18:45.548853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.754 qpair failed and we were unable to recover it. 00:31:56.754 [2024-04-15 18:18:45.549029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.754 [2024-04-15 18:18:45.549204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.754 [2024-04-15 18:18:45.549234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.754 qpair failed and we were unable to recover it. 00:31:56.754 [2024-04-15 18:18:45.549386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.754 [2024-04-15 18:18:45.549551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.754 [2024-04-15 18:18:45.549594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.754 qpair failed and we were unable to recover it. 00:31:56.754 [2024-04-15 18:18:45.549830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.754 [2024-04-15 18:18:45.550025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.754 [2024-04-15 18:18:45.550048] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.754 qpair failed and we were unable to recover it. 00:31:56.754 [2024-04-15 18:18:45.550280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.754 [2024-04-15 18:18:45.550452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.754 [2024-04-15 18:18:45.550481] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.754 qpair failed and we were unable to recover it. 00:31:56.754 [2024-04-15 18:18:45.550676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.754 [2024-04-15 18:18:45.550859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.754 [2024-04-15 18:18:45.550904] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.754 qpair failed and we were unable to recover it. 00:31:56.754 [2024-04-15 18:18:45.551050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.754 [2024-04-15 18:18:45.551275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.754 [2024-04-15 18:18:45.551318] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.754 qpair failed and we were unable to recover it. 
00:31:56.754 [2024-04-15 18:18:45.551467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.755 [2024-04-15 18:18:45.551664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.755 [2024-04-15 18:18:45.551706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.755 qpair failed and we were unable to recover it. 00:31:56.755 [2024-04-15 18:18:45.551867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.755 [2024-04-15 18:18:45.552028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.755 [2024-04-15 18:18:45.552052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.755 qpair failed and we were unable to recover it. 00:31:56.755 [2024-04-15 18:18:45.552240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.755 [2024-04-15 18:18:45.552438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.755 [2024-04-15 18:18:45.552480] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.755 qpair failed and we were unable to recover it. 00:31:56.755 [2024-04-15 18:18:45.552650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.755 [2024-04-15 18:18:45.552839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.755 [2024-04-15 18:18:45.552882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.755 qpair failed and we were unable to recover it. 00:31:56.755 [2024-04-15 18:18:45.553035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.755 [2024-04-15 18:18:45.553199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.755 [2024-04-15 18:18:45.553242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.755 qpair failed and we were unable to recover it. 00:31:56.755 [2024-04-15 18:18:45.553433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.755 [2024-04-15 18:18:45.553590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.755 [2024-04-15 18:18:45.553629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.755 qpair failed and we were unable to recover it. 00:31:56.755 [2024-04-15 18:18:45.553824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.755 [2024-04-15 18:18:45.553988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.755 [2024-04-15 18:18:45.554011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.755 qpair failed and we were unable to recover it. 
00:31:56.755 [2024-04-15 18:18:45.554192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.755 [2024-04-15 18:18:45.554378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.755 [2024-04-15 18:18:45.554422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.755 qpair failed and we were unable to recover it. 00:31:56.755 [2024-04-15 18:18:45.554624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.755 [2024-04-15 18:18:45.554817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.755 [2024-04-15 18:18:45.554860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.755 qpair failed and we were unable to recover it. 00:31:56.755 [2024-04-15 18:18:45.555072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.755 [2024-04-15 18:18:45.555233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.755 [2024-04-15 18:18:45.555276] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.755 qpair failed and we were unable to recover it. 00:31:56.755 [2024-04-15 18:18:45.555486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.755 [2024-04-15 18:18:45.555665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.755 [2024-04-15 18:18:45.555707] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.755 qpair failed and we were unable to recover it. 00:31:56.755 [2024-04-15 18:18:45.555899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.755 [2024-04-15 18:18:45.556084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.755 [2024-04-15 18:18:45.556124] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.755 qpair failed and we were unable to recover it. 00:31:56.755 [2024-04-15 18:18:45.556335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.755 [2024-04-15 18:18:45.556550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.755 [2024-04-15 18:18:45.556594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.755 qpair failed and we were unable to recover it. 00:31:56.755 [2024-04-15 18:18:45.556769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.755 [2024-04-15 18:18:45.556944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.755 [2024-04-15 18:18:45.556967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.755 qpair failed and we were unable to recover it. 
00:31:56.755 [2024-04-15 18:18:45.557202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.755 [2024-04-15 18:18:45.557421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.755 [2024-04-15 18:18:45.557467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.755 qpair failed and we were unable to recover it. 00:31:56.755 [2024-04-15 18:18:45.557656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.755 [2024-04-15 18:18:45.557881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.755 [2024-04-15 18:18:45.557923] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.755 qpair failed and we were unable to recover it. 00:31:56.755 [2024-04-15 18:18:45.558112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.755 [2024-04-15 18:18:45.558341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.755 [2024-04-15 18:18:45.558385] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.755 qpair failed and we were unable to recover it. 00:31:56.755 [2024-04-15 18:18:45.558525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.755 [2024-04-15 18:18:45.558745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.755 [2024-04-15 18:18:45.558788] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.755 qpair failed and we were unable to recover it. 00:31:56.755 [2024-04-15 18:18:45.558979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.755 [2024-04-15 18:18:45.559180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.755 [2024-04-15 18:18:45.559212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.755 qpair failed and we were unable to recover it. 00:31:56.755 [2024-04-15 18:18:45.559373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.755 [2024-04-15 18:18:45.559524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.755 [2024-04-15 18:18:45.559566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.755 qpair failed and we were unable to recover it. 00:31:56.755 [2024-04-15 18:18:45.559726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.755 [2024-04-15 18:18:45.559918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.755 [2024-04-15 18:18:45.559942] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.755 qpair failed and we were unable to recover it. 
00:31:56.755 [2024-04-15 18:18:45.560151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.755 [2024-04-15 18:18:45.560340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.755 [2024-04-15 18:18:45.560381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.755 qpair failed and we were unable to recover it. 00:31:56.755 [2024-04-15 18:18:45.560575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.755 [2024-04-15 18:18:45.560749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.755 [2024-04-15 18:18:45.560771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.755 qpair failed and we were unable to recover it. 00:31:56.755 [2024-04-15 18:18:45.560944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.755 [2024-04-15 18:18:45.561119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.755 [2024-04-15 18:18:45.561144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.755 qpair failed and we were unable to recover it. 00:31:56.755 [2024-04-15 18:18:45.561317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.755 [2024-04-15 18:18:45.561549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.756 [2024-04-15 18:18:45.561591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.756 qpair failed and we were unable to recover it. 00:31:56.756 [2024-04-15 18:18:45.561797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.756 [2024-04-15 18:18:45.561954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.756 [2024-04-15 18:18:45.561976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.756 qpair failed and we were unable to recover it. 00:31:56.756 [2024-04-15 18:18:45.562170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.756 [2024-04-15 18:18:45.562381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.756 [2024-04-15 18:18:45.562422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.756 qpair failed and we were unable to recover it. 00:31:56.756 [2024-04-15 18:18:45.562591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.756 [2024-04-15 18:18:45.562767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.756 [2024-04-15 18:18:45.562810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.756 qpair failed and we were unable to recover it. 
00:31:56.756 [2024-04-15 18:18:45.563000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.756 [2024-04-15 18:18:45.563156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.756 [2024-04-15 18:18:45.563199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.756 qpair failed and we were unable to recover it. 00:31:56.756 [2024-04-15 18:18:45.563328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.756 [2024-04-15 18:18:45.563516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.756 [2024-04-15 18:18:45.563559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.756 qpair failed and we were unable to recover it. 00:31:56.756 [2024-04-15 18:18:45.563743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.756 [2024-04-15 18:18:45.563930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.756 [2024-04-15 18:18:45.563954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.756 qpair failed and we were unable to recover it. 00:31:56.756 [2024-04-15 18:18:45.564155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.756 [2024-04-15 18:18:45.564314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.756 [2024-04-15 18:18:45.564352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.756 qpair failed and we were unable to recover it. 00:31:56.756 [2024-04-15 18:18:45.564564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.756 [2024-04-15 18:18:45.564731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.756 [2024-04-15 18:18:45.564754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.756 qpair failed and we were unable to recover it. 00:31:56.756 [2024-04-15 18:18:45.564926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.756 [2024-04-15 18:18:45.565123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.756 [2024-04-15 18:18:45.565162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.756 qpair failed and we were unable to recover it. 00:31:56.756 [2024-04-15 18:18:45.565321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.756 [2024-04-15 18:18:45.565537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.756 [2024-04-15 18:18:45.565580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.756 qpair failed and we were unable to recover it. 
00:31:56.756 [2024-04-15 18:18:45.565786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.756 [2024-04-15 18:18:45.565952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.756 [2024-04-15 18:18:45.565976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.756 qpair failed and we were unable to recover it. 00:31:56.756 [2024-04-15 18:18:45.566159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.756 [2024-04-15 18:18:45.566350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.756 [2024-04-15 18:18:45.566392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.756 qpair failed and we were unable to recover it. 00:31:56.756 [2024-04-15 18:18:45.566563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.756 [2024-04-15 18:18:45.566753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.756 [2024-04-15 18:18:45.566796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.756 qpair failed and we were unable to recover it. 00:31:56.756 [2024-04-15 18:18:45.566960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.756 [2024-04-15 18:18:45.567127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.756 [2024-04-15 18:18:45.567171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.756 qpair failed and we were unable to recover it. 00:31:56.756 [2024-04-15 18:18:45.567387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.756 [2024-04-15 18:18:45.567606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.756 [2024-04-15 18:18:45.567648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.756 qpair failed and we were unable to recover it. 00:31:56.756 [2024-04-15 18:18:45.567783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.756 [2024-04-15 18:18:45.567941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.756 [2024-04-15 18:18:45.567965] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.756 qpair failed and we were unable to recover it. 00:31:56.756 [2024-04-15 18:18:45.568132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.756 [2024-04-15 18:18:45.568327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.756 [2024-04-15 18:18:45.568369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.756 qpair failed and we were unable to recover it. 
00:31:56.756 [2024-04-15 18:18:45.568579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.756 [2024-04-15 18:18:45.568773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.756 [2024-04-15 18:18:45.568815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.756 qpair failed and we were unable to recover it. 00:31:56.756 [2024-04-15 18:18:45.569003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.756 [2024-04-15 18:18:45.569179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.756 [2024-04-15 18:18:45.569221] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.756 qpair failed and we were unable to recover it. 00:31:56.756 [2024-04-15 18:18:45.569358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.756 [2024-04-15 18:18:45.569574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.756 [2024-04-15 18:18:45.569616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.756 qpair failed and we were unable to recover it. 00:31:56.756 [2024-04-15 18:18:45.569789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.756 [2024-04-15 18:18:45.569996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.756 [2024-04-15 18:18:45.570019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.756 qpair failed and we were unable to recover it. 00:31:56.756 [2024-04-15 18:18:45.570187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.756 [2024-04-15 18:18:45.570332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.756 [2024-04-15 18:18:45.570376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.756 qpair failed and we were unable to recover it. 00:31:56.756 [2024-04-15 18:18:45.570553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.756 [2024-04-15 18:18:45.570768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.756 [2024-04-15 18:18:45.570810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.756 qpair failed and we were unable to recover it. 00:31:56.756 [2024-04-15 18:18:45.570996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.756 [2024-04-15 18:18:45.571165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.756 [2024-04-15 18:18:45.571209] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.756 qpair failed and we were unable to recover it. 
00:31:56.756 [2024-04-15 18:18:45.571353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.756 [2024-04-15 18:18:45.571511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.756 [2024-04-15 18:18:45.571535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.756 qpair failed and we were unable to recover it. 00:31:56.756 [2024-04-15 18:18:45.571685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.756 [2024-04-15 18:18:45.571878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.756 [2024-04-15 18:18:45.571902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.756 qpair failed and we were unable to recover it. 00:31:56.756 [2024-04-15 18:18:45.572123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.756 [2024-04-15 18:18:45.572312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.756 [2024-04-15 18:18:45.572335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.756 qpair failed and we were unable to recover it. 00:31:56.756 [2024-04-15 18:18:45.572534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.756 [2024-04-15 18:18:45.572680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.756 [2024-04-15 18:18:45.572722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.756 qpair failed and we were unable to recover it. 00:31:56.756 [2024-04-15 18:18:45.572916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.756 [2024-04-15 18:18:45.573106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.757 [2024-04-15 18:18:45.573129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.757 qpair failed and we were unable to recover it. 00:31:56.757 [2024-04-15 18:18:45.573322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.757 [2024-04-15 18:18:45.573513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.757 [2024-04-15 18:18:45.573556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.757 qpair failed and we were unable to recover it. 00:31:56.757 [2024-04-15 18:18:45.573762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.757 [2024-04-15 18:18:45.573958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.757 [2024-04-15 18:18:45.573981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.757 qpair failed and we were unable to recover it. 
00:31:56.757 [2024-04-15 18:18:45.574193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.757 [2024-04-15 18:18:45.574338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.757 [2024-04-15 18:18:45.574367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.757 qpair failed and we were unable to recover it. 00:31:56.757 [2024-04-15 18:18:45.574537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.757 [2024-04-15 18:18:45.574827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.757 [2024-04-15 18:18:45.574869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.757 qpair failed and we were unable to recover it. 00:31:56.757 [2024-04-15 18:18:45.575120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.757 [2024-04-15 18:18:45.575347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.757 [2024-04-15 18:18:45.575389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.757 qpair failed and we were unable to recover it. 00:31:56.757 [2024-04-15 18:18:45.575675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.757 [2024-04-15 18:18:45.575948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.757 [2024-04-15 18:18:45.575990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.757 qpair failed and we were unable to recover it. 00:31:56.757 [2024-04-15 18:18:45.576180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.757 [2024-04-15 18:18:45.576337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.757 [2024-04-15 18:18:45.576379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.757 qpair failed and we were unable to recover it. 00:31:56.757 [2024-04-15 18:18:45.576578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.757 [2024-04-15 18:18:45.576807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.757 [2024-04-15 18:18:45.576849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.757 qpair failed and we were unable to recover it. 00:31:56.757 [2024-04-15 18:18:45.577047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.757 [2024-04-15 18:18:45.577254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.757 [2024-04-15 18:18:45.577278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.757 qpair failed and we were unable to recover it. 
00:31:56.757 [2024-04-15 18:18:45.577491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.757 [2024-04-15 18:18:45.577786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.757 [2024-04-15 18:18:45.577830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.757 qpair failed and we were unable to recover it. 00:31:56.757 [2024-04-15 18:18:45.578020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.757 [2024-04-15 18:18:45.578176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.757 [2024-04-15 18:18:45.578200] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.757 qpair failed and we were unable to recover it. 00:31:56.757 [2024-04-15 18:18:45.578346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.757 [2024-04-15 18:18:45.578573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.757 [2024-04-15 18:18:45.578602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.757 qpair failed and we were unable to recover it. 00:31:56.757 [2024-04-15 18:18:45.578791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.757 [2024-04-15 18:18:45.579118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.757 [2024-04-15 18:18:45.579142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.757 qpair failed and we were unable to recover it. 00:31:56.757 [2024-04-15 18:18:45.579395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.757 [2024-04-15 18:18:45.579708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.757 [2024-04-15 18:18:45.579751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.757 qpair failed and we were unable to recover it. 00:31:56.757 [2024-04-15 18:18:45.580022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.757 [2024-04-15 18:18:45.580254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.757 [2024-04-15 18:18:45.580278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.757 qpair failed and we were unable to recover it. 00:31:56.757 [2024-04-15 18:18:45.580429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.757 [2024-04-15 18:18:45.580665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.757 [2024-04-15 18:18:45.580706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.757 qpair failed and we were unable to recover it. 
00:31:56.757 [2024-04-15 18:18:45.580951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.757 [2024-04-15 18:18:45.581173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.757 [2024-04-15 18:18:45.581196] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.757 qpair failed and we were unable to recover it. 00:31:56.757 [2024-04-15 18:18:45.581340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.757 [2024-04-15 18:18:45.581571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.757 [2024-04-15 18:18:45.581600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.757 qpair failed and we were unable to recover it. 00:31:56.757 [2024-04-15 18:18:45.581879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.757 [2024-04-15 18:18:45.582086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.757 [2024-04-15 18:18:45.582120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.757 qpair failed and we were unable to recover it. 00:31:56.757 [2024-04-15 18:18:45.582283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.757 [2024-04-15 18:18:45.582556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.757 [2024-04-15 18:18:45.582597] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.757 qpair failed and we were unable to recover it. 00:31:56.757 [2024-04-15 18:18:45.582794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.757 [2024-04-15 18:18:45.582975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.757 [2024-04-15 18:18:45.582998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.757 qpair failed and we were unable to recover it. 00:31:56.757 [2024-04-15 18:18:45.583170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.757 [2024-04-15 18:18:45.583394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.757 [2024-04-15 18:18:45.583434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.757 qpair failed and we were unable to recover it. 00:31:56.757 [2024-04-15 18:18:45.583659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.757 [2024-04-15 18:18:45.583869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.757 [2024-04-15 18:18:45.583921] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.757 qpair failed and we were unable to recover it. 
00:31:56.757 [2024-04-15 18:18:45.584162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.757 [2024-04-15 18:18:45.584382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.757 [2024-04-15 18:18:45.584432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.757 qpair failed and we were unable to recover it. 00:31:56.757 [2024-04-15 18:18:45.584702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.757 [2024-04-15 18:18:45.584922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.757 [2024-04-15 18:18:45.584945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.757 qpair failed and we were unable to recover it. 00:31:56.757 [2024-04-15 18:18:45.585184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.757 [2024-04-15 18:18:45.585420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.757 [2024-04-15 18:18:45.585461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.757 qpair failed and we were unable to recover it. 00:31:56.757 [2024-04-15 18:18:45.585688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.757 [2024-04-15 18:18:45.585899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.758 [2024-04-15 18:18:45.585922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.758 qpair failed and we were unable to recover it. 00:31:56.758 [2024-04-15 18:18:45.586093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.758 [2024-04-15 18:18:45.586314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.758 [2024-04-15 18:18:45.586357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.758 qpair failed and we were unable to recover it. 00:31:56.758 [2024-04-15 18:18:45.586602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.758 [2024-04-15 18:18:45.586860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.758 [2024-04-15 18:18:45.586902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.758 qpair failed and we were unable to recover it. 00:31:56.758 [2024-04-15 18:18:45.587074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.758 [2024-04-15 18:18:45.587222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.758 [2024-04-15 18:18:45.587263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.758 qpair failed and we were unable to recover it. 
00:31:56.763 [2024-04-15 18:18:45.657720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.763 [2024-04-15 18:18:45.657953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.763 [2024-04-15 18:18:45.657976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.763 qpair failed and we were unable to recover it. 00:31:56.763 [2024-04-15 18:18:45.658213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.763 [2024-04-15 18:18:45.658471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.763 [2024-04-15 18:18:45.658514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.763 qpair failed and we were unable to recover it. 00:31:56.763 [2024-04-15 18:18:45.658740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.763 [2024-04-15 18:18:45.658983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.763 [2024-04-15 18:18:45.659006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.763 qpair failed and we were unable to recover it. 00:31:56.763 [2024-04-15 18:18:45.659235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.763 [2024-04-15 18:18:45.659410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.763 [2024-04-15 18:18:45.659452] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.763 qpair failed and we were unable to recover it. 00:31:56.763 [2024-04-15 18:18:45.659588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.763 [2024-04-15 18:18:45.659788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.763 [2024-04-15 18:18:45.659816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.763 qpair failed and we were unable to recover it. 00:31:56.763 [2024-04-15 18:18:45.659990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.763 [2024-04-15 18:18:45.660225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.763 [2024-04-15 18:18:45.660250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.763 qpair failed and we were unable to recover it. 00:31:56.763 [2024-04-15 18:18:45.660538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.763 [2024-04-15 18:18:45.660721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.763 [2024-04-15 18:18:45.660763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.763 qpair failed and we were unable to recover it. 
00:31:56.763 [2024-04-15 18:18:45.661033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.763 [2024-04-15 18:18:45.661265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.763 [2024-04-15 18:18:45.661290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.763 qpair failed and we were unable to recover it. 00:31:56.763 [2024-04-15 18:18:45.661508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.763 [2024-04-15 18:18:45.661698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.763 [2024-04-15 18:18:45.661740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.763 qpair failed and we were unable to recover it. 00:31:56.763 [2024-04-15 18:18:45.661896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.763 [2024-04-15 18:18:45.662126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.763 [2024-04-15 18:18:45.662156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.763 qpair failed and we were unable to recover it. 00:31:56.763 [2024-04-15 18:18:45.662360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.763 [2024-04-15 18:18:45.662655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.763 [2024-04-15 18:18:45.662697] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.763 qpair failed and we were unable to recover it. 00:31:56.763 [2024-04-15 18:18:45.662972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.763 [2024-04-15 18:18:45.663229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.763 [2024-04-15 18:18:45.663255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.763 qpair failed and we were unable to recover it. 00:31:56.763 [2024-04-15 18:18:45.663480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.763 [2024-04-15 18:18:45.663681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.763 [2024-04-15 18:18:45.663723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.763 qpair failed and we were unable to recover it. 00:31:56.763 [2024-04-15 18:18:45.663910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.763 [2024-04-15 18:18:45.664109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.763 [2024-04-15 18:18:45.664133] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.763 qpair failed and we were unable to recover it. 
00:31:56.763 [2024-04-15 18:18:45.664373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.763 [2024-04-15 18:18:45.664549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.763 [2024-04-15 18:18:45.664591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.763 qpair failed and we were unable to recover it. 00:31:56.763 [2024-04-15 18:18:45.664841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.763 [2024-04-15 18:18:45.665055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.763 [2024-04-15 18:18:45.665098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.763 qpair failed and we were unable to recover it. 00:31:56.763 [2024-04-15 18:18:45.665279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.763 [2024-04-15 18:18:45.665438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.763 [2024-04-15 18:18:45.665480] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.763 qpair failed and we were unable to recover it. 00:31:56.763 [2024-04-15 18:18:45.665660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.763 [2024-04-15 18:18:45.665892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.763 [2024-04-15 18:18:45.665933] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.763 qpair failed and we were unable to recover it. 00:31:56.764 [2024-04-15 18:18:45.666109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.764 [2024-04-15 18:18:45.666411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.764 [2024-04-15 18:18:45.666460] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.764 qpair failed and we were unable to recover it. 00:31:56.764 [2024-04-15 18:18:45.666713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.764 [2024-04-15 18:18:45.666928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.764 [2024-04-15 18:18:45.666951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.764 qpair failed and we were unable to recover it. 00:31:56.764 [2024-04-15 18:18:45.667146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.764 [2024-04-15 18:18:45.667373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.764 [2024-04-15 18:18:45.667416] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.764 qpair failed and we were unable to recover it. 
00:31:56.764 [2024-04-15 18:18:45.667666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.764 [2024-04-15 18:18:45.667920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.764 [2024-04-15 18:18:45.667962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.764 qpair failed and we were unable to recover it. 00:31:56.764 [2024-04-15 18:18:45.668139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.764 [2024-04-15 18:18:45.668401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.764 [2024-04-15 18:18:45.668442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.764 qpair failed and we were unable to recover it. 00:31:56.764 [2024-04-15 18:18:45.668581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.764 [2024-04-15 18:18:45.668827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.764 [2024-04-15 18:18:45.668868] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.764 qpair failed and we were unable to recover it. 00:31:56.764 [2024-04-15 18:18:45.669072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.764 [2024-04-15 18:18:45.669269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.764 [2024-04-15 18:18:45.669312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.764 qpair failed and we were unable to recover it. 00:31:56.764 [2024-04-15 18:18:45.669520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.764 [2024-04-15 18:18:45.669774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.764 [2024-04-15 18:18:45.669815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.764 qpair failed and we were unable to recover it. 00:31:56.764 [2024-04-15 18:18:45.669990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.764 [2024-04-15 18:18:45.670190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.764 [2024-04-15 18:18:45.670214] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.764 qpair failed and we were unable to recover it. 00:31:56.764 [2024-04-15 18:18:45.670484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.764 [2024-04-15 18:18:45.670709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.764 [2024-04-15 18:18:45.670751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.764 qpair failed and we were unable to recover it. 
00:31:56.764 [2024-04-15 18:18:45.670908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.764 [2024-04-15 18:18:45.671120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.764 [2024-04-15 18:18:45.671148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.764 qpair failed and we were unable to recover it. 00:31:56.764 [2024-04-15 18:18:45.671380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.764 [2024-04-15 18:18:45.671718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.764 [2024-04-15 18:18:45.671760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.764 qpair failed and we were unable to recover it. 00:31:56.764 [2024-04-15 18:18:45.671944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.764 [2024-04-15 18:18:45.672168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.764 [2024-04-15 18:18:45.672192] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.764 qpair failed and we were unable to recover it. 00:31:56.764 [2024-04-15 18:18:45.672445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.764 [2024-04-15 18:18:45.672719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.764 [2024-04-15 18:18:45.672761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.764 qpair failed and we were unable to recover it. 00:31:56.764 [2024-04-15 18:18:45.673047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.764 [2024-04-15 18:18:45.673354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.764 [2024-04-15 18:18:45.673391] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.764 qpair failed and we were unable to recover it. 00:31:56.764 [2024-04-15 18:18:45.673540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.764 [2024-04-15 18:18:45.673732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.764 [2024-04-15 18:18:45.673781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.764 qpair failed and we were unable to recover it. 00:31:56.764 [2024-04-15 18:18:45.673941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.764 [2024-04-15 18:18:45.674199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.764 [2024-04-15 18:18:45.674225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.764 qpair failed and we were unable to recover it. 
00:31:56.764 [2024-04-15 18:18:45.674541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.764 [2024-04-15 18:18:45.674829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.764 [2024-04-15 18:18:45.674869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.764 qpair failed and we were unable to recover it. 00:31:56.764 [2024-04-15 18:18:45.675087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.764 [2024-04-15 18:18:45.675248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.764 [2024-04-15 18:18:45.675272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.764 qpair failed and we were unable to recover it. 00:31:56.764 [2024-04-15 18:18:45.675560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.764 [2024-04-15 18:18:45.675847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.764 [2024-04-15 18:18:45.675890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.764 qpair failed and we were unable to recover it. 00:31:56.764 [2024-04-15 18:18:45.676122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.764 [2024-04-15 18:18:45.676340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.764 [2024-04-15 18:18:45.676366] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.764 qpair failed and we were unable to recover it. 00:31:56.764 [2024-04-15 18:18:45.676695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.764 [2024-04-15 18:18:45.676983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.764 [2024-04-15 18:18:45.677026] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.764 qpair failed and we were unable to recover it. 00:31:56.764 [2024-04-15 18:18:45.677309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.764 [2024-04-15 18:18:45.677586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.764 [2024-04-15 18:18:45.677629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.764 qpair failed and we were unable to recover it. 00:31:56.764 [2024-04-15 18:18:45.677793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.764 [2024-04-15 18:18:45.677991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.764 [2024-04-15 18:18:45.678014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.764 qpair failed and we were unable to recover it. 
00:31:56.764 [2024-04-15 18:18:45.678260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.764 [2024-04-15 18:18:45.678522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.764 [2024-04-15 18:18:45.678565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.764 qpair failed and we were unable to recover it. 00:31:56.764 [2024-04-15 18:18:45.678811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.764 [2024-04-15 18:18:45.679010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.765 [2024-04-15 18:18:45.679034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.765 qpair failed and we were unable to recover it. 00:31:56.765 [2024-04-15 18:18:45.679174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.765 [2024-04-15 18:18:45.679388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.765 [2024-04-15 18:18:45.679432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.765 qpair failed and we were unable to recover it. 00:31:56.765 [2024-04-15 18:18:45.679630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.765 [2024-04-15 18:18:45.679849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.765 [2024-04-15 18:18:45.679890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.765 qpair failed and we were unable to recover it. 00:31:56.765 [2024-04-15 18:18:45.680128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.765 [2024-04-15 18:18:45.680395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.765 [2024-04-15 18:18:45.680419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.765 qpair failed and we were unable to recover it. 00:31:56.765 [2024-04-15 18:18:45.680695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.765 [2024-04-15 18:18:45.680900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.765 [2024-04-15 18:18:45.680942] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.765 qpair failed and we were unable to recover it. 00:31:56.765 [2024-04-15 18:18:45.681139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.765 [2024-04-15 18:18:45.681328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.765 [2024-04-15 18:18:45.681376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.765 qpair failed and we were unable to recover it. 
00:31:56.765 [2024-04-15 18:18:45.681622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.765 [2024-04-15 18:18:45.681806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.765 [2024-04-15 18:18:45.681848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.765 qpair failed and we were unable to recover it. 00:31:56.765 [2024-04-15 18:18:45.682028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.765 [2024-04-15 18:18:45.682245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.765 [2024-04-15 18:18:45.682270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.765 qpair failed and we were unable to recover it. 00:31:56.765 [2024-04-15 18:18:45.682473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.765 [2024-04-15 18:18:45.682640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.765 [2024-04-15 18:18:45.682669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:56.765 qpair failed and we were unable to recover it. 00:31:57.039 [2024-04-15 18:18:45.682847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.039 [2024-04-15 18:18:45.683076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.039 [2024-04-15 18:18:45.683101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.039 qpair failed and we were unable to recover it. 00:31:57.039 [2024-04-15 18:18:45.683270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.039 [2024-04-15 18:18:45.683507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.039 [2024-04-15 18:18:45.683550] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.039 qpair failed and we were unable to recover it. 00:31:57.039 [2024-04-15 18:18:45.683760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.039 [2024-04-15 18:18:45.683980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.039 [2024-04-15 18:18:45.684018] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.039 qpair failed and we were unable to recover it. 00:31:57.039 [2024-04-15 18:18:45.684231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.039 [2024-04-15 18:18:45.684379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.039 [2024-04-15 18:18:45.684420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.039 qpair failed and we were unable to recover it. 
00:31:57.039 [2024-04-15 18:18:45.684643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.039 [2024-04-15 18:18:45.684840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.039 [2024-04-15 18:18:45.684883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.039 qpair failed and we were unable to recover it. 00:31:57.039 [2024-04-15 18:18:45.685098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.039 [2024-04-15 18:18:45.685305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.039 [2024-04-15 18:18:45.685359] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.039 qpair failed and we were unable to recover it. 00:31:57.039 [2024-04-15 18:18:45.685605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.039 [2024-04-15 18:18:45.685851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.039 [2024-04-15 18:18:45.685893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.039 qpair failed and we were unable to recover it. 00:31:57.039 [2024-04-15 18:18:45.686070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.039 [2024-04-15 18:18:45.686246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.039 [2024-04-15 18:18:45.686270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.039 qpair failed and we were unable to recover it. 00:31:57.039 [2024-04-15 18:18:45.686483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.039 [2024-04-15 18:18:45.686695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.039 [2024-04-15 18:18:45.686744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.039 qpair failed and we were unable to recover it. 00:31:57.039 [2024-04-15 18:18:45.686894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.039 [2024-04-15 18:18:45.687233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.039 [2024-04-15 18:18:45.687257] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.039 qpair failed and we were unable to recover it. 00:31:57.039 [2024-04-15 18:18:45.687445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.039 [2024-04-15 18:18:45.687661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.039 [2024-04-15 18:18:45.687703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.039 qpair failed and we were unable to recover it. 
00:31:57.039 [2024-04-15 18:18:45.687899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.039 [2024-04-15 18:18:45.688136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.039 [2024-04-15 18:18:45.688178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.039 qpair failed and we were unable to recover it. 00:31:57.039 [2024-04-15 18:18:45.688371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.039 [2024-04-15 18:18:45.688612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.039 [2024-04-15 18:18:45.688654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.039 qpair failed and we were unable to recover it. 00:31:57.039 [2024-04-15 18:18:45.688814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.039 [2024-04-15 18:18:45.689038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.039 [2024-04-15 18:18:45.689069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.039 qpair failed and we were unable to recover it. 00:31:57.039 [2024-04-15 18:18:45.689240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.039 [2024-04-15 18:18:45.689437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.039 [2024-04-15 18:18:45.689483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.039 qpair failed and we were unable to recover it. 00:31:57.039 [2024-04-15 18:18:45.689640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.039 [2024-04-15 18:18:45.689942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.039 [2024-04-15 18:18:45.689985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.039 qpair failed and we were unable to recover it. 00:31:57.039 [2024-04-15 18:18:45.690294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.039 [2024-04-15 18:18:45.690547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.039 [2024-04-15 18:18:45.690590] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.039 qpair failed and we were unable to recover it. 00:31:57.039 [2024-04-15 18:18:45.690818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.039 [2024-04-15 18:18:45.691047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.039 [2024-04-15 18:18:45.691078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.039 qpair failed and we were unable to recover it. 
00:31:57.039 [2024-04-15 18:18:45.691346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.039 [2024-04-15 18:18:45.691638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.039 [2024-04-15 18:18:45.691680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.039 qpair failed and we were unable to recover it. 00:31:57.039 [2024-04-15 18:18:45.691886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.039 [2024-04-15 18:18:45.692219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.039 [2024-04-15 18:18:45.692263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.039 qpair failed and we were unable to recover it. 00:31:57.039 [2024-04-15 18:18:45.692538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.039 [2024-04-15 18:18:45.692765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.039 [2024-04-15 18:18:45.692807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.040 qpair failed and we were unable to recover it. 00:31:57.040 [2024-04-15 18:18:45.692990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.040 [2024-04-15 18:18:45.693273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.040 [2024-04-15 18:18:45.693298] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.040 qpair failed and we were unable to recover it. 00:31:57.040 [2024-04-15 18:18:45.693576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.040 [2024-04-15 18:18:45.693777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.040 [2024-04-15 18:18:45.693820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.040 qpair failed and we were unable to recover it. 00:31:57.040 [2024-04-15 18:18:45.694042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.040 [2024-04-15 18:18:45.694325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.040 [2024-04-15 18:18:45.694350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.040 qpair failed and we were unable to recover it. 00:31:57.040 [2024-04-15 18:18:45.694638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.040 [2024-04-15 18:18:45.694877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.040 [2024-04-15 18:18:45.694919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.040 qpair failed and we were unable to recover it. 
00:31:57.040 [2024-04-15 18:18:45.695148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.040 [2024-04-15 18:18:45.695383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.040 [2024-04-15 18:18:45.695426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.040 qpair failed and we were unable to recover it. 00:31:57.040 [2024-04-15 18:18:45.695626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.040 [2024-04-15 18:18:45.695786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.040 [2024-04-15 18:18:45.695828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.040 qpair failed and we were unable to recover it. 00:31:57.040 [2024-04-15 18:18:45.696072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.040 [2024-04-15 18:18:45.696325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.040 [2024-04-15 18:18:45.696363] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.040 qpair failed and we were unable to recover it. 00:31:57.040 [2024-04-15 18:18:45.696561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.040 [2024-04-15 18:18:45.696732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.040 [2024-04-15 18:18:45.696774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.040 qpair failed and we were unable to recover it. 00:31:57.040 [2024-04-15 18:18:45.696980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.040 [2024-04-15 18:18:45.697162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.040 [2024-04-15 18:18:45.697186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.040 qpair failed and we were unable to recover it. 00:31:57.040 [2024-04-15 18:18:45.697375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.040 [2024-04-15 18:18:45.697589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.040 [2024-04-15 18:18:45.697631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.040 qpair failed and we were unable to recover it. 00:31:57.040 [2024-04-15 18:18:45.697825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.040 [2024-04-15 18:18:45.698015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.040 [2024-04-15 18:18:45.698038] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.040 qpair failed and we were unable to recover it. 
00:31:57.040 [2024-04-15 18:18:45.698251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.040 [2024-04-15 18:18:45.698393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.040 [2024-04-15 18:18:45.698436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.040 qpair failed and we were unable to recover it. 00:31:57.040 [2024-04-15 18:18:45.698656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.040 [2024-04-15 18:18:45.698841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.040 [2024-04-15 18:18:45.698882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.040 qpair failed and we were unable to recover it. 00:31:57.040 [2024-04-15 18:18:45.699099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.040 [2024-04-15 18:18:45.699305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.040 [2024-04-15 18:18:45.699348] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.040 qpair failed and we were unable to recover it. 00:31:57.040 [2024-04-15 18:18:45.699551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.040 [2024-04-15 18:18:45.699712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.040 [2024-04-15 18:18:45.699754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.040 qpair failed and we were unable to recover it. 00:31:57.040 [2024-04-15 18:18:45.699939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.040 [2024-04-15 18:18:45.700110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.040 [2024-04-15 18:18:45.700139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.040 qpair failed and we were unable to recover it. 00:31:57.040 [2024-04-15 18:18:45.700367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.040 [2024-04-15 18:18:45.700533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.040 [2024-04-15 18:18:45.700575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.040 qpair failed and we were unable to recover it. 00:31:57.040 [2024-04-15 18:18:45.700733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.040 [2024-04-15 18:18:45.700904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.040 [2024-04-15 18:18:45.700942] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.040 qpair failed and we were unable to recover it. 
00:31:57.040 [2024-04-15 18:18:45.701123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.040 [2024-04-15 18:18:45.701301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.040 [2024-04-15 18:18:45.701344] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.040 qpair failed and we were unable to recover it. 00:31:57.040 [2024-04-15 18:18:45.701539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.040 [2024-04-15 18:18:45.701687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.040 [2024-04-15 18:18:45.701725] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.040 qpair failed and we were unable to recover it. 00:31:57.040 [2024-04-15 18:18:45.701932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.040 [2024-04-15 18:18:45.702101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.040 [2024-04-15 18:18:45.702125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.040 qpair failed and we were unable to recover it. 00:31:57.040 [2024-04-15 18:18:45.702331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.040 [2024-04-15 18:18:45.702517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.040 [2024-04-15 18:18:45.702558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.040 qpair failed and we were unable to recover it. 00:31:57.040 [2024-04-15 18:18:45.702755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.040 [2024-04-15 18:18:45.702944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.040 [2024-04-15 18:18:45.702967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.040 qpair failed and we were unable to recover it. 00:31:57.040 [2024-04-15 18:18:45.703127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.040 [2024-04-15 18:18:45.703308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.040 [2024-04-15 18:18:45.703350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.040 qpair failed and we were unable to recover it. 00:31:57.040 [2024-04-15 18:18:45.703558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.040 [2024-04-15 18:18:45.703800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.040 [2024-04-15 18:18:45.703841] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.040 qpair failed and we were unable to recover it. 
00:31:57.040 [2024-04-15 18:18:45.704017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.040 [2024-04-15 18:18:45.704232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.040 [2024-04-15 18:18:45.704276] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.040 qpair failed and we were unable to recover it. 00:31:57.040 [2024-04-15 18:18:45.704478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.041 [2024-04-15 18:18:45.704705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.041 [2024-04-15 18:18:45.704747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.041 qpair failed and we were unable to recover it. 00:31:57.041 [2024-04-15 18:18:45.704920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.041 [2024-04-15 18:18:45.705123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.041 [2024-04-15 18:18:45.705165] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.041 qpair failed and we were unable to recover it. 00:31:57.041 [2024-04-15 18:18:45.705403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.041 [2024-04-15 18:18:45.705569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.041 [2024-04-15 18:18:45.705611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.041 qpair failed and we were unable to recover it. 00:31:57.041 [2024-04-15 18:18:45.705783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.041 [2024-04-15 18:18:45.705986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.041 [2024-04-15 18:18:45.706009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.041 qpair failed and we were unable to recover it. 00:31:57.041 [2024-04-15 18:18:45.706252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.041 [2024-04-15 18:18:45.706476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.041 [2024-04-15 18:18:45.706518] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.041 qpair failed and we were unable to recover it. 00:31:57.041 [2024-04-15 18:18:45.706721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.041 [2024-04-15 18:18:45.706895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.041 [2024-04-15 18:18:45.706918] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.041 qpair failed and we were unable to recover it. 
00:31:57.041 [2024-04-15 18:18:45.707210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.041 [2024-04-15 18:18:45.707476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.041 [2024-04-15 18:18:45.707518] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.041 qpair failed and we were unable to recover it. 00:31:57.041 [2024-04-15 18:18:45.707780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.041 [2024-04-15 18:18:45.708041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.041 [2024-04-15 18:18:45.708085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.041 qpair failed and we were unable to recover it. 00:31:57.041 [2024-04-15 18:18:45.708249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.041 [2024-04-15 18:18:45.708528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.041 [2024-04-15 18:18:45.708570] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.041 qpair failed and we were unable to recover it. 00:31:57.041 [2024-04-15 18:18:45.708870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.041 [2024-04-15 18:18:45.709134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.041 [2024-04-15 18:18:45.709158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.041 qpair failed and we were unable to recover it. 00:31:57.041 [2024-04-15 18:18:45.709321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.041 [2024-04-15 18:18:45.709488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.041 [2024-04-15 18:18:45.709530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.041 qpair failed and we were unable to recover it. 00:31:57.041 [2024-04-15 18:18:45.709700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.041 [2024-04-15 18:18:45.709959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.041 [2024-04-15 18:18:45.710001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.041 qpair failed and we were unable to recover it. 00:31:57.041 [2024-04-15 18:18:45.710194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.041 [2024-04-15 18:18:45.710414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.041 [2024-04-15 18:18:45.710456] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.041 qpair failed and we were unable to recover it. 
00:31:57.041 [2024-04-15 18:18:45.710758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.041 [2024-04-15 18:18:45.710945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.041 [2024-04-15 18:18:45.710968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.041 qpair failed and we were unable to recover it. 00:31:57.041 [2024-04-15 18:18:45.711191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.041 [2024-04-15 18:18:45.711349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.041 [2024-04-15 18:18:45.711392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.041 qpair failed and we were unable to recover it. 00:31:57.041 [2024-04-15 18:18:45.711564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.041 [2024-04-15 18:18:45.711861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.041 [2024-04-15 18:18:45.711902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.041 qpair failed and we were unable to recover it. 00:31:57.041 [2024-04-15 18:18:45.712131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.041 [2024-04-15 18:18:45.712324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.041 [2024-04-15 18:18:45.712365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.041 qpair failed and we were unable to recover it. 00:31:57.041 [2024-04-15 18:18:45.712636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.041 [2024-04-15 18:18:45.712830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.041 [2024-04-15 18:18:45.712873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.041 qpair failed and we were unable to recover it. 00:31:57.041 [2024-04-15 18:18:45.713142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.041 [2024-04-15 18:18:45.713532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.041 [2024-04-15 18:18:45.713569] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.041 qpair failed and we were unable to recover it. 00:31:57.041 [2024-04-15 18:18:45.713867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.041 [2024-04-15 18:18:45.714085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.041 [2024-04-15 18:18:45.714121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.041 qpair failed and we were unable to recover it. 
00:31:57.041 [2024-04-15 18:18:45.714382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.041 [2024-04-15 18:18:45.714618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.041 [2024-04-15 18:18:45.714670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.041 qpair failed and we were unable to recover it. 00:31:57.041 [2024-04-15 18:18:45.715013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.041 [2024-04-15 18:18:45.715304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.041 [2024-04-15 18:18:45.715330] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.041 qpair failed and we were unable to recover it. 00:31:57.041 [2024-04-15 18:18:45.715614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.041 [2024-04-15 18:18:45.715886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.041 [2024-04-15 18:18:45.715937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.041 qpair failed and we were unable to recover it. 00:31:57.041 [2024-04-15 18:18:45.716185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.041 [2024-04-15 18:18:45.716390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.041 [2024-04-15 18:18:45.716437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.041 qpair failed and we were unable to recover it. 00:31:57.041 [2024-04-15 18:18:45.716639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.041 [2024-04-15 18:18:45.716886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.041 [2024-04-15 18:18:45.716936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.041 qpair failed and we were unable to recover it. 00:31:57.041 [2024-04-15 18:18:45.717135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.041 [2024-04-15 18:18:45.717392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.041 [2024-04-15 18:18:45.717445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.041 qpair failed and we were unable to recover it. 00:31:57.041 [2024-04-15 18:18:45.717701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.041 [2024-04-15 18:18:45.717952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.042 [2024-04-15 18:18:45.717996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.042 qpair failed and we were unable to recover it. 
00:31:57.042 [2024-04-15 18:18:45.718252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.042 [2024-04-15 18:18:45.718476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.042 [2024-04-15 18:18:45.718526] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.042 qpair failed and we were unable to recover it. 00:31:57.042 [2024-04-15 18:18:45.718708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.042 [2024-04-15 18:18:45.718928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.042 [2024-04-15 18:18:45.718950] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.042 qpair failed and we were unable to recover it. 00:31:57.042 [2024-04-15 18:18:45.719203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.042 [2024-04-15 18:18:45.719502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.042 [2024-04-15 18:18:45.719551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.042 qpair failed and we were unable to recover it. 00:31:57.042 [2024-04-15 18:18:45.719851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.042 [2024-04-15 18:18:45.720034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.042 [2024-04-15 18:18:45.720065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.042 qpair failed and we were unable to recover it. 00:31:57.042 [2024-04-15 18:18:45.720223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.042 [2024-04-15 18:18:45.720403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.042 [2024-04-15 18:18:45.720466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.042 qpair failed and we were unable to recover it. 00:31:57.042 [2024-04-15 18:18:45.720652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.042 [2024-04-15 18:18:45.720877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.042 [2024-04-15 18:18:45.720931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.042 qpair failed and we were unable to recover it. 00:31:57.042 [2024-04-15 18:18:45.721118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.042 [2024-04-15 18:18:45.721372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.042 [2024-04-15 18:18:45.721435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.042 qpair failed and we were unable to recover it. 
00:31:57.042 [2024-04-15 18:18:45.721659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.042 [2024-04-15 18:18:45.721879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.042 [2024-04-15 18:18:45.721923] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.042 qpair failed and we were unable to recover it. 00:31:57.042 [2024-04-15 18:18:45.722120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.042 [2024-04-15 18:18:45.722355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.042 [2024-04-15 18:18:45.722396] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.042 qpair failed and we were unable to recover it. 00:31:57.042 [2024-04-15 18:18:45.722561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.042 [2024-04-15 18:18:45.722828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.042 [2024-04-15 18:18:45.722881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.042 qpair failed and we were unable to recover it. 00:31:57.042 [2024-04-15 18:18:45.723138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.042 [2024-04-15 18:18:45.723416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.042 [2024-04-15 18:18:45.723468] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.042 qpair failed and we were unable to recover it. 00:31:57.042 [2024-04-15 18:18:45.723720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.042 [2024-04-15 18:18:45.723949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.042 [2024-04-15 18:18:45.724000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.042 qpair failed and we were unable to recover it. 00:31:57.042 [2024-04-15 18:18:45.724221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.042 [2024-04-15 18:18:45.724429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.042 [2024-04-15 18:18:45.724486] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.042 qpair failed and we were unable to recover it. 00:31:57.042 [2024-04-15 18:18:45.724679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.042 [2024-04-15 18:18:45.724958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.042 [2024-04-15 18:18:45.725008] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.042 qpair failed and we were unable to recover it. 
00:31:57.042 [2024-04-15 18:18:45.725416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.042 [2024-04-15 18:18:45.725715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.042 [2024-04-15 18:18:45.725766] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.042 qpair failed and we were unable to recover it. 00:31:57.042 [2024-04-15 18:18:45.725996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.042 [2024-04-15 18:18:45.726204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.042 [2024-04-15 18:18:45.726239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.042 qpair failed and we were unable to recover it. 00:31:57.042 [2024-04-15 18:18:45.726455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.042 [2024-04-15 18:18:45.726645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.042 [2024-04-15 18:18:45.726696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.042 qpair failed and we were unable to recover it. 00:31:57.042 [2024-04-15 18:18:45.726932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.042 [2024-04-15 18:18:45.727103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.042 [2024-04-15 18:18:45.727128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.042 qpair failed and we were unable to recover it. 00:31:57.042 [2024-04-15 18:18:45.727356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.042 [2024-04-15 18:18:45.727546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.042 [2024-04-15 18:18:45.727593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.042 qpair failed and we were unable to recover it. 00:31:57.042 [2024-04-15 18:18:45.727924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.042 [2024-04-15 18:18:45.728248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.042 [2024-04-15 18:18:45.728291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.042 qpair failed and we were unable to recover it. 00:31:57.042 [2024-04-15 18:18:45.728567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.042 [2024-04-15 18:18:45.728805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.042 [2024-04-15 18:18:45.728856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.042 qpair failed and we were unable to recover it. 
00:31:57.042 [2024-04-15 18:18:45.729083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.042 [2024-04-15 18:18:45.729324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.042 [2024-04-15 18:18:45.729347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.042 qpair failed and we were unable to recover it. 00:31:57.042 [2024-04-15 18:18:45.729575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.042 [2024-04-15 18:18:45.729834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.042 [2024-04-15 18:18:45.729885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.042 qpair failed and we were unable to recover it. 00:31:57.042 [2024-04-15 18:18:45.730127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.042 [2024-04-15 18:18:45.730312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.042 [2024-04-15 18:18:45.730336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.042 qpair failed and we were unable to recover it. 00:31:57.042 [2024-04-15 18:18:45.730610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.042 [2024-04-15 18:18:45.730861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.042 [2024-04-15 18:18:45.730911] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.042 qpair failed and we were unable to recover it. 00:31:57.042 [2024-04-15 18:18:45.731205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.042 [2024-04-15 18:18:45.731445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.043 [2024-04-15 18:18:45.731486] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.043 qpair failed and we were unable to recover it. 00:31:57.043 [2024-04-15 18:18:45.731768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.043 [2024-04-15 18:18:45.732026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.043 [2024-04-15 18:18:45.732076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.043 qpair failed and we were unable to recover it. 00:31:57.043 [2024-04-15 18:18:45.732308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.043 [2024-04-15 18:18:45.732479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.043 [2024-04-15 18:18:45.732531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.043 qpair failed and we were unable to recover it. 
00:31:57.043 [2024-04-15 18:18:45.732800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.043 [2024-04-15 18:18:45.733085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.043 [2024-04-15 18:18:45.733110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.043 qpair failed and we were unable to recover it. 00:31:57.043 [2024-04-15 18:18:45.733351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.043 [2024-04-15 18:18:45.733592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.043 [2024-04-15 18:18:45.733644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.043 qpair failed and we were unable to recover it. 00:31:57.043 [2024-04-15 18:18:45.733885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.043 [2024-04-15 18:18:45.734121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.043 [2024-04-15 18:18:45.734145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.043 qpair failed and we were unable to recover it. 00:31:57.043 [2024-04-15 18:18:45.734335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.043 [2024-04-15 18:18:45.734655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.043 [2024-04-15 18:18:45.734705] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.043 qpair failed and we were unable to recover it. 00:31:57.043 [2024-04-15 18:18:45.734943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.043 [2024-04-15 18:18:45.735113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.043 [2024-04-15 18:18:45.735138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.043 qpair failed and we were unable to recover it. 00:31:57.043 [2024-04-15 18:18:45.735309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.043 [2024-04-15 18:18:45.735521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.043 [2024-04-15 18:18:45.735608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.043 qpair failed and we were unable to recover it. 00:31:57.043 [2024-04-15 18:18:45.735878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.043 [2024-04-15 18:18:45.736155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.043 [2024-04-15 18:18:45.736179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.043 qpair failed and we were unable to recover it. 
00:31:57.043 [2024-04-15 18:18:45.736379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.043 [2024-04-15 18:18:45.736592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.043 [2024-04-15 18:18:45.736643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.043 qpair failed and we were unable to recover it. 00:31:57.043 [2024-04-15 18:18:45.736846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.043 [2024-04-15 18:18:45.737027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.043 [2024-04-15 18:18:45.737050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.043 qpair failed and we were unable to recover it. 00:31:57.043 [2024-04-15 18:18:45.737283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.043 [2024-04-15 18:18:45.737478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.043 [2024-04-15 18:18:45.737526] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.043 qpair failed and we were unable to recover it. 00:31:57.043 [2024-04-15 18:18:45.737825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.043 [2024-04-15 18:18:45.738068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.043 [2024-04-15 18:18:45.738108] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.043 qpair failed and we were unable to recover it. 00:31:57.043 [2024-04-15 18:18:45.738347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.043 [2024-04-15 18:18:45.738558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.043 [2024-04-15 18:18:45.738612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.043 qpair failed and we were unable to recover it. 00:31:57.043 [2024-04-15 18:18:45.738838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.043 [2024-04-15 18:18:45.739025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.043 [2024-04-15 18:18:45.739081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.043 qpair failed and we were unable to recover it. 00:31:57.043 [2024-04-15 18:18:45.739363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.043 [2024-04-15 18:18:45.739654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.043 [2024-04-15 18:18:45.739701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.043 qpair failed and we were unable to recover it. 
00:31:57.043 [2024-04-15 18:18:45.739937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.043 [2024-04-15 18:18:45.740134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.043 [2024-04-15 18:18:45.740158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.043 qpair failed and we were unable to recover it. 00:31:57.043 [2024-04-15 18:18:45.740356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.043 [2024-04-15 18:18:45.740576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.043 [2024-04-15 18:18:45.740629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.043 qpair failed and we were unable to recover it. 00:31:57.043 [2024-04-15 18:18:45.740825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.043 [2024-04-15 18:18:45.741108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.043 [2024-04-15 18:18:45.741132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.043 qpair failed and we were unable to recover it. 00:31:57.043 [2024-04-15 18:18:45.741388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.043 [2024-04-15 18:18:45.741550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.043 [2024-04-15 18:18:45.741600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.043 qpair failed and we were unable to recover it. 00:31:57.043 [2024-04-15 18:18:45.741796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.043 [2024-04-15 18:18:45.741998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.043 [2024-04-15 18:18:45.742021] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.043 qpair failed and we were unable to recover it. 00:31:57.043 [2024-04-15 18:18:45.742254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.043 [2024-04-15 18:18:45.742435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.043 [2024-04-15 18:18:45.742496] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.043 qpair failed and we were unable to recover it. 00:31:57.043 [2024-04-15 18:18:45.742811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.043 [2024-04-15 18:18:45.743129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.043 [2024-04-15 18:18:45.743154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.043 qpair failed and we were unable to recover it. 
00:31:57.043 [2024-04-15 18:18:45.743479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.043 [2024-04-15 18:18:45.743775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.043 [2024-04-15 18:18:45.743823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.043 qpair failed and we were unable to recover it. 00:31:57.043 [2024-04-15 18:18:45.744107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.043 [2024-04-15 18:18:45.744281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.043 [2024-04-15 18:18:45.744304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.043 qpair failed and we were unable to recover it. 00:31:57.043 [2024-04-15 18:18:45.744630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.043 [2024-04-15 18:18:45.744854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.043 [2024-04-15 18:18:45.744905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.043 qpair failed and we were unable to recover it. 00:31:57.043 [2024-04-15 18:18:45.745149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.043 [2024-04-15 18:18:45.745321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.043 [2024-04-15 18:18:45.745372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.043 qpair failed and we were unable to recover it. 00:31:57.043 [2024-04-15 18:18:45.745694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.043 [2024-04-15 18:18:45.745969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.044 [2024-04-15 18:18:45.746022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.044 qpair failed and we were unable to recover it. 00:31:57.044 [2024-04-15 18:18:45.746271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.044 [2024-04-15 18:18:45.746487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.044 [2024-04-15 18:18:45.746536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.044 qpair failed and we were unable to recover it. 00:31:57.044 [2024-04-15 18:18:45.746750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.044 [2024-04-15 18:18:45.747027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.044 [2024-04-15 18:18:45.747070] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.044 qpair failed and we were unable to recover it. 
00:31:57.044 [2024-04-15 18:18:45.747525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.044 [2024-04-15 18:18:45.747811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.044 [2024-04-15 18:18:45.747860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.044 qpair failed and we were unable to recover it. 00:31:57.044 [2024-04-15 18:18:45.748096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.044 [2024-04-15 18:18:45.748282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.044 [2024-04-15 18:18:45.748305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.044 qpair failed and we were unable to recover it. 00:31:57.044 [2024-04-15 18:18:45.748529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.044 [2024-04-15 18:18:45.748789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.044 [2024-04-15 18:18:45.748838] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.044 qpair failed and we were unable to recover it. 00:31:57.044 [2024-04-15 18:18:45.749080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.044 [2024-04-15 18:18:45.749256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.044 [2024-04-15 18:18:45.749279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.044 qpair failed and we were unable to recover it. 00:31:57.044 [2024-04-15 18:18:45.749529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.044 [2024-04-15 18:18:45.749865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.044 [2024-04-15 18:18:45.749914] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.044 qpair failed and we were unable to recover it. 00:31:57.044 [2024-04-15 18:18:45.750203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.044 [2024-04-15 18:18:45.750390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.044 [2024-04-15 18:18:45.750451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.044 qpair failed and we were unable to recover it. 00:31:57.044 [2024-04-15 18:18:45.750669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.044 [2024-04-15 18:18:45.750871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.044 [2024-04-15 18:18:45.750920] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.044 qpair failed and we were unable to recover it. 
00:31:57.044 [2024-04-15 18:18:45.751143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.044 [2024-04-15 18:18:45.751391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.044 [2024-04-15 18:18:45.751453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.044 qpair failed and we were unable to recover it. 00:31:57.044 [2024-04-15 18:18:45.751708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.044 [2024-04-15 18:18:45.751919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.044 [2024-04-15 18:18:45.751952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.044 qpair failed and we were unable to recover it. 00:31:57.044 [2024-04-15 18:18:45.752203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.044 [2024-04-15 18:18:45.752544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.044 [2024-04-15 18:18:45.752593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.044 qpair failed and we were unable to recover it. 00:31:57.044 [2024-04-15 18:18:45.752864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.044 [2024-04-15 18:18:45.753071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.044 [2024-04-15 18:18:45.753095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.044 qpair failed and we were unable to recover it. 00:31:57.044 [2024-04-15 18:18:45.753376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.044 [2024-04-15 18:18:45.753595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.044 [2024-04-15 18:18:45.753645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.044 qpair failed and we were unable to recover it. 00:31:57.044 [2024-04-15 18:18:45.753873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.044 [2024-04-15 18:18:45.754037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.044 [2024-04-15 18:18:45.754091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.044 qpair failed and we were unable to recover it. 00:31:57.044 [2024-04-15 18:18:45.754366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.044 [2024-04-15 18:18:45.754613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.044 [2024-04-15 18:18:45.754661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.044 qpair failed and we were unable to recover it. 
00:31:57.044 [2024-04-15 18:18:45.754888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.044 [2024-04-15 18:18:45.755179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.044 [2024-04-15 18:18:45.755203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.044 qpair failed and we were unable to recover it. 00:31:57.044 [2024-04-15 18:18:45.755400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.044 [2024-04-15 18:18:45.755664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.044 [2024-04-15 18:18:45.755711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.044 qpair failed and we were unable to recover it. 00:31:57.044 [2024-04-15 18:18:45.755956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.044 [2024-04-15 18:18:45.756264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.044 [2024-04-15 18:18:45.756288] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.044 qpair failed and we were unable to recover it. 00:31:57.044 [2024-04-15 18:18:45.756516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.044 [2024-04-15 18:18:45.756725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.044 [2024-04-15 18:18:45.756778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.044 qpair failed and we were unable to recover it. 00:31:57.044 [2024-04-15 18:18:45.756950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.044 [2024-04-15 18:18:45.757085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.044 [2024-04-15 18:18:45.757109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.044 qpair failed and we were unable to recover it. 00:31:57.044 [2024-04-15 18:18:45.757317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.044 [2024-04-15 18:18:45.757514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.044 [2024-04-15 18:18:45.757573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.044 qpair failed and we were unable to recover it. 00:31:57.044 [2024-04-15 18:18:45.757900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.045 [2024-04-15 18:18:45.758142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.045 [2024-04-15 18:18:45.758165] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.045 qpair failed and we were unable to recover it. 
00:31:57.045 [2024-04-15 18:18:45.758391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.045 [2024-04-15 18:18:45.758648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.045 [2024-04-15 18:18:45.758696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.045 qpair failed and we were unable to recover it. 00:31:57.045 [2024-04-15 18:18:45.758873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.045 [2024-04-15 18:18:45.759130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.045 [2024-04-15 18:18:45.759155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.045 qpair failed and we were unable to recover it. 00:31:57.045 [2024-04-15 18:18:45.759443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.045 [2024-04-15 18:18:45.759662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.045 [2024-04-15 18:18:45.759714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.045 qpair failed and we were unable to recover it. 00:31:57.045 [2024-04-15 18:18:45.759936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.045 [2024-04-15 18:18:45.760130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.045 [2024-04-15 18:18:45.760154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.045 qpair failed and we were unable to recover it. 00:31:57.045 [2024-04-15 18:18:45.760448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.045 [2024-04-15 18:18:45.760660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.045 [2024-04-15 18:18:45.760713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.045 qpair failed and we were unable to recover it. 00:31:57.045 [2024-04-15 18:18:45.760992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.045 [2024-04-15 18:18:45.761214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.045 [2024-04-15 18:18:45.761238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.045 qpair failed and we were unable to recover it. 00:31:57.045 [2024-04-15 18:18:45.761473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.045 [2024-04-15 18:18:45.761744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.045 [2024-04-15 18:18:45.761794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.045 qpair failed and we were unable to recover it. 
00:31:57.045 [2024-04-15 18:18:45.762034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.045 [2024-04-15 18:18:45.762210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.045 [2024-04-15 18:18:45.762235] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.045 qpair failed and we were unable to recover it. 00:31:57.045 [2024-04-15 18:18:45.762451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.045 [2024-04-15 18:18:45.762669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.045 [2024-04-15 18:18:45.762721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.045 qpair failed and we were unable to recover it. 00:31:57.045 [2024-04-15 18:18:45.762903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.045 [2024-04-15 18:18:45.763141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.045 [2024-04-15 18:18:45.763167] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.045 qpair failed and we were unable to recover it. 00:31:57.045 [2024-04-15 18:18:45.763492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.045 [2024-04-15 18:18:45.763787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.045 [2024-04-15 18:18:45.763835] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.045 qpair failed and we were unable to recover it. 00:31:57.045 [2024-04-15 18:18:45.764155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.045 [2024-04-15 18:18:45.764448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.045 [2024-04-15 18:18:45.764498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.045 qpair failed and we were unable to recover it. 00:31:57.045 [2024-04-15 18:18:45.764779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.045 [2024-04-15 18:18:45.764970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.045 [2024-04-15 18:18:45.764993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.045 qpair failed and we were unable to recover it. 00:31:57.045 [2024-04-15 18:18:45.765287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.045 [2024-04-15 18:18:45.765558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.045 [2024-04-15 18:18:45.765609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.045 qpair failed and we were unable to recover it. 
00:31:57.045 [2024-04-15 18:18:45.765879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.045 [2024-04-15 18:18:45.766201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.045 [2024-04-15 18:18:45.766226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.045 qpair failed and we were unable to recover it. 00:31:57.045 [2024-04-15 18:18:45.766473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.045 [2024-04-15 18:18:45.766717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.045 [2024-04-15 18:18:45.766766] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.045 qpair failed and we were unable to recover it. 00:31:57.045 [2024-04-15 18:18:45.767107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.045 [2024-04-15 18:18:45.767414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.045 [2024-04-15 18:18:45.767438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.045 qpair failed and we were unable to recover it. 00:31:57.045 [2024-04-15 18:18:45.767758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.045 [2024-04-15 18:18:45.768051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.045 [2024-04-15 18:18:45.768112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.045 qpair failed and we were unable to recover it. 00:31:57.045 [2024-04-15 18:18:45.768296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.045 [2024-04-15 18:18:45.768463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.045 [2024-04-15 18:18:45.768506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.045 qpair failed and we were unable to recover it. 00:31:57.045 [2024-04-15 18:18:45.768660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.045 [2024-04-15 18:18:45.768893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.045 [2024-04-15 18:18:45.768934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.045 qpair failed and we were unable to recover it. 00:31:57.045 [2024-04-15 18:18:45.769154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.045 [2024-04-15 18:18:45.769328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.045 [2024-04-15 18:18:45.769381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.045 qpair failed and we were unable to recover it. 
00:31:57.045 [2024-04-15 18:18:45.769690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.045 [2024-04-15 18:18:45.769960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.045 [2024-04-15 18:18:45.770011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:57.045 qpair failed and we were unable to recover it.
00:31:57.045-00:31:57.051 [2024-04-15 18:18:45.770272 .. 18:18:45.831923] (the same four-record sequence repeats for every subsequent reconnect attempt: two posix_sock_create connect() failures with errno = 111, the nvme_tcp_qpair_connect_sock error for tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.")
00:31:57.051 [2024-04-15 18:18:45.832091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.051 [2024-04-15 18:18:45.832237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.051 [2024-04-15 18:18:45.832279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.051 qpair failed and we were unable to recover it. 00:31:57.051 [2024-04-15 18:18:45.832475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.051 [2024-04-15 18:18:45.832703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.051 [2024-04-15 18:18:45.832745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.051 qpair failed and we were unable to recover it. 00:31:57.051 [2024-04-15 18:18:45.832913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.051 [2024-04-15 18:18:45.833110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.051 [2024-04-15 18:18:45.833136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.051 qpair failed and we were unable to recover it. 00:31:57.051 [2024-04-15 18:18:45.833319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.051 [2024-04-15 18:18:45.833510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.051 [2024-04-15 18:18:45.833555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.051 qpair failed and we were unable to recover it. 00:31:57.051 [2024-04-15 18:18:45.833742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.051 [2024-04-15 18:18:45.833966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.051 [2024-04-15 18:18:45.833989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.051 qpair failed and we were unable to recover it. 00:31:57.051 [2024-04-15 18:18:45.834174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.051 [2024-04-15 18:18:45.834357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.051 [2024-04-15 18:18:45.834400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.051 qpair failed and we were unable to recover it. 00:31:57.051 [2024-04-15 18:18:45.834543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.051 [2024-04-15 18:18:45.834753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.051 [2024-04-15 18:18:45.834794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.051 qpair failed and we were unable to recover it. 
00:31:57.051 [2024-04-15 18:18:45.834957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.051 [2024-04-15 18:18:45.835131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.051 [2024-04-15 18:18:45.835164] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.051 qpair failed and we were unable to recover it. 00:31:57.051 [2024-04-15 18:18:45.835342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.051 [2024-04-15 18:18:45.835514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.051 [2024-04-15 18:18:45.835538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.051 qpair failed and we were unable to recover it. 00:31:57.051 [2024-04-15 18:18:45.835723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.051 [2024-04-15 18:18:45.835899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.051 [2024-04-15 18:18:45.835922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.051 qpair failed and we were unable to recover it. 00:31:57.051 [2024-04-15 18:18:45.836110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.051 [2024-04-15 18:18:45.836295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.051 [2024-04-15 18:18:45.836337] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.051 qpair failed and we were unable to recover it. 00:31:57.051 [2024-04-15 18:18:45.836476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.051 [2024-04-15 18:18:45.836659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.051 [2024-04-15 18:18:45.836688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.051 qpair failed and we were unable to recover it. 00:31:57.051 [2024-04-15 18:18:45.836861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.051 [2024-04-15 18:18:45.837069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.051 [2024-04-15 18:18:45.837093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.051 qpair failed and we were unable to recover it. 00:31:57.051 [2024-04-15 18:18:45.837281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.051 [2024-04-15 18:18:45.837505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.051 [2024-04-15 18:18:45.837548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.051 qpair failed and we were unable to recover it. 
00:31:57.051 [2024-04-15 18:18:45.837738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.051 [2024-04-15 18:18:45.837925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.051 [2024-04-15 18:18:45.837949] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.051 qpair failed and we were unable to recover it. 00:31:57.051 [2024-04-15 18:18:45.838134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.051 [2024-04-15 18:18:45.838330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.051 [2024-04-15 18:18:45.838374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.051 qpair failed and we were unable to recover it. 00:31:57.051 [2024-04-15 18:18:45.838574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.051 [2024-04-15 18:18:45.838764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.051 [2024-04-15 18:18:45.838806] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.051 qpair failed and we were unable to recover it. 00:31:57.051 [2024-04-15 18:18:45.838977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.051 [2024-04-15 18:18:45.839158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.051 [2024-04-15 18:18:45.839188] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.051 qpair failed and we were unable to recover it. 00:31:57.051 [2024-04-15 18:18:45.839368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.051 [2024-04-15 18:18:45.839543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.051 [2024-04-15 18:18:45.839573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.051 qpair failed and we were unable to recover it. 00:31:57.051 [2024-04-15 18:18:45.839754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.051 [2024-04-15 18:18:45.839969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.051 [2024-04-15 18:18:45.839992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.051 qpair failed and we were unable to recover it. 00:31:57.051 [2024-04-15 18:18:45.840190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.051 [2024-04-15 18:18:45.840377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.051 [2024-04-15 18:18:45.840419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.051 qpair failed and we were unable to recover it. 
00:31:57.051 [2024-04-15 18:18:45.840609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.052 [2024-04-15 18:18:45.840781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.052 [2024-04-15 18:18:45.840810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.052 qpair failed and we were unable to recover it. 00:31:57.052 [2024-04-15 18:18:45.840960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.052 [2024-04-15 18:18:45.841181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.052 [2024-04-15 18:18:45.841224] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.052 qpair failed and we were unable to recover it. 00:31:57.052 [2024-04-15 18:18:45.841461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.052 [2024-04-15 18:18:45.841673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.052 [2024-04-15 18:18:45.841716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.052 qpair failed and we were unable to recover it. 00:31:57.052 [2024-04-15 18:18:45.841891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.052 [2024-04-15 18:18:45.842085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.052 [2024-04-15 18:18:45.842109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.052 qpair failed and we were unable to recover it. 00:31:57.052 [2024-04-15 18:18:45.842276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.052 [2024-04-15 18:18:45.842438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.052 [2024-04-15 18:18:45.842480] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.052 qpair failed and we were unable to recover it. 00:31:57.052 [2024-04-15 18:18:45.842683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.052 [2024-04-15 18:18:45.842835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.052 [2024-04-15 18:18:45.842877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.052 qpair failed and we were unable to recover it. 00:31:57.052 [2024-04-15 18:18:45.843025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.052 [2024-04-15 18:18:45.843220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.052 [2024-04-15 18:18:45.843246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.052 qpair failed and we were unable to recover it. 
00:31:57.052 [2024-04-15 18:18:45.843431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.052 [2024-04-15 18:18:45.843573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.052 [2024-04-15 18:18:45.843616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.052 qpair failed and we were unable to recover it. 00:31:57.052 [2024-04-15 18:18:45.843783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.052 [2024-04-15 18:18:45.843983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.052 [2024-04-15 18:18:45.844006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.052 qpair failed and we were unable to recover it. 00:31:57.052 [2024-04-15 18:18:45.844230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.052 [2024-04-15 18:18:45.844465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.052 [2024-04-15 18:18:45.844507] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.052 qpair failed and we were unable to recover it. 00:31:57.052 [2024-04-15 18:18:45.844675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.052 [2024-04-15 18:18:45.844850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.052 [2024-04-15 18:18:45.844892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.052 qpair failed and we were unable to recover it. 00:31:57.052 [2024-04-15 18:18:45.845093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.052 [2024-04-15 18:18:45.845233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.052 [2024-04-15 18:18:45.845276] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.052 qpair failed and we were unable to recover it. 00:31:57.052 [2024-04-15 18:18:45.845466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.052 [2024-04-15 18:18:45.845690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.052 [2024-04-15 18:18:45.845732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.052 qpair failed and we were unable to recover it. 00:31:57.052 [2024-04-15 18:18:45.845905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.052 [2024-04-15 18:18:45.846078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.052 [2024-04-15 18:18:45.846103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.052 qpair failed and we were unable to recover it. 
00:31:57.052 [2024-04-15 18:18:45.846307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.052 [2024-04-15 18:18:45.846513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.052 [2024-04-15 18:18:45.846555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.052 qpair failed and we were unable to recover it. 00:31:57.052 [2024-04-15 18:18:45.846767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.052 [2024-04-15 18:18:45.846980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.052 [2024-04-15 18:18:45.847010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.052 qpair failed and we were unable to recover it. 00:31:57.052 [2024-04-15 18:18:45.847186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.052 [2024-04-15 18:18:45.847362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.052 [2024-04-15 18:18:45.847404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.052 qpair failed and we were unable to recover it. 00:31:57.052 [2024-04-15 18:18:45.847572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.052 [2024-04-15 18:18:45.847798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.052 [2024-04-15 18:18:45.847840] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.052 qpair failed and we were unable to recover it. 00:31:57.052 [2024-04-15 18:18:45.847987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.052 [2024-04-15 18:18:45.848139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.052 [2024-04-15 18:18:45.848180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.052 qpair failed and we were unable to recover it. 00:31:57.052 [2024-04-15 18:18:45.848402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.052 [2024-04-15 18:18:45.848582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.052 [2024-04-15 18:18:45.848625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.052 qpair failed and we were unable to recover it. 00:31:57.052 [2024-04-15 18:18:45.848812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.052 [2024-04-15 18:18:45.849000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.052 [2024-04-15 18:18:45.849023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.052 qpair failed and we were unable to recover it. 
00:31:57.052 [2024-04-15 18:18:45.849196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.052 [2024-04-15 18:18:45.849408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.052 [2024-04-15 18:18:45.849450] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.052 qpair failed and we were unable to recover it. 00:31:57.052 [2024-04-15 18:18:45.849654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.052 [2024-04-15 18:18:45.849815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.052 [2024-04-15 18:18:45.849861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.052 qpair failed and we were unable to recover it. 00:31:57.052 [2024-04-15 18:18:45.850001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.052 [2024-04-15 18:18:45.850141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.052 [2024-04-15 18:18:45.850185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.052 qpair failed and we were unable to recover it. 00:31:57.052 [2024-04-15 18:18:45.850342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.052 [2024-04-15 18:18:45.850525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.052 [2024-04-15 18:18:45.850566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.052 qpair failed and we were unable to recover it. 00:31:57.052 [2024-04-15 18:18:45.850708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.052 [2024-04-15 18:18:45.850881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.052 [2024-04-15 18:18:45.850904] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.052 qpair failed and we were unable to recover it. 00:31:57.052 [2024-04-15 18:18:45.851096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.052 [2024-04-15 18:18:45.851279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.052 [2024-04-15 18:18:45.851322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.052 qpair failed and we were unable to recover it. 00:31:57.052 [2024-04-15 18:18:45.851478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.052 [2024-04-15 18:18:45.851670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.052 [2024-04-15 18:18:45.851712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.052 qpair failed and we were unable to recover it. 
00:31:57.052 [2024-04-15 18:18:45.851843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.052 [2024-04-15 18:18:45.852001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.052 [2024-04-15 18:18:45.852038] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.052 qpair failed and we were unable to recover it. 00:31:57.052 [2024-04-15 18:18:45.852230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.053 [2024-04-15 18:18:45.852454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.053 [2024-04-15 18:18:45.852497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.053 qpair failed and we were unable to recover it. 00:31:57.053 [2024-04-15 18:18:45.852633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.053 [2024-04-15 18:18:45.852784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.053 [2024-04-15 18:18:45.852807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.053 qpair failed and we were unable to recover it. 00:31:57.053 [2024-04-15 18:18:45.852967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.053 [2024-04-15 18:18:45.853148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.053 [2024-04-15 18:18:45.853173] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.053 qpair failed and we were unable to recover it. 00:31:57.053 [2024-04-15 18:18:45.853351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.053 [2024-04-15 18:18:45.853522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.053 [2024-04-15 18:18:45.853546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.053 qpair failed and we were unable to recover it. 00:31:57.053 [2024-04-15 18:18:45.853718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.053 [2024-04-15 18:18:45.853867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.053 [2024-04-15 18:18:45.853904] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.053 qpair failed and we were unable to recover it. 00:31:57.053 [2024-04-15 18:18:45.854083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.053 [2024-04-15 18:18:45.854228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.053 [2024-04-15 18:18:45.854270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.053 qpair failed and we were unable to recover it. 
00:31:57.053 [2024-04-15 18:18:45.854418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.053 [2024-04-15 18:18:45.854578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.053 [2024-04-15 18:18:45.854622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.053 qpair failed and we were unable to recover it. 00:31:57.053 [2024-04-15 18:18:45.854811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.053 [2024-04-15 18:18:45.854972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.053 [2024-04-15 18:18:45.854996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.053 qpair failed and we were unable to recover it. 00:31:57.053 [2024-04-15 18:18:45.855130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.053 [2024-04-15 18:18:45.855290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.053 [2024-04-15 18:18:45.855337] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.053 qpair failed and we were unable to recover it. 00:31:57.053 [2024-04-15 18:18:45.855518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.053 [2024-04-15 18:18:45.855707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.053 [2024-04-15 18:18:45.855750] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.053 qpair failed and we were unable to recover it. 00:31:57.053 [2024-04-15 18:18:45.855890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.053 [2024-04-15 18:18:45.856046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.053 [2024-04-15 18:18:45.856091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.053 qpair failed and we were unable to recover it. 00:31:57.053 [2024-04-15 18:18:45.856277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.053 [2024-04-15 18:18:45.856443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.053 [2024-04-15 18:18:45.856486] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.053 qpair failed and we were unable to recover it. 00:31:57.053 [2024-04-15 18:18:45.856645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.053 [2024-04-15 18:18:45.856825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.053 [2024-04-15 18:18:45.856866] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.053 qpair failed and we were unable to recover it. 
00:31:57.053 [2024-04-15 18:18:45.857047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.053 [2024-04-15 18:18:45.857194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.053 [2024-04-15 18:18:45.857238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.053 qpair failed and we were unable to recover it. 00:31:57.053 [2024-04-15 18:18:45.857417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.053 [2024-04-15 18:18:45.857637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.053 [2024-04-15 18:18:45.857678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.053 qpair failed and we were unable to recover it. 00:31:57.053 [2024-04-15 18:18:45.857831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.053 [2024-04-15 18:18:45.858013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.053 [2024-04-15 18:18:45.858037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.053 qpair failed and we were unable to recover it. 00:31:57.053 [2024-04-15 18:18:45.858203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.053 [2024-04-15 18:18:45.858392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.053 [2024-04-15 18:18:45.858435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.053 qpair failed and we were unable to recover it. 00:31:57.053 [2024-04-15 18:18:45.858577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.053 [2024-04-15 18:18:45.858804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.053 [2024-04-15 18:18:45.858847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.053 qpair failed and we were unable to recover it. 00:31:57.053 [2024-04-15 18:18:45.859031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.053 [2024-04-15 18:18:45.859235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.053 [2024-04-15 18:18:45.859260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.053 qpair failed and we were unable to recover it. 00:31:57.053 [2024-04-15 18:18:45.859440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.053 [2024-04-15 18:18:45.859667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.053 [2024-04-15 18:18:45.859709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.053 qpair failed and we were unable to recover it. 
00:31:57.053 [2024-04-15 18:18:45.859854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.053 [2024-04-15 18:18:45.860040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.053 [2024-04-15 18:18:45.860089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.053 qpair failed and we were unable to recover it. 00:31:57.053 [2024-04-15 18:18:45.860267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.053 [2024-04-15 18:18:45.860422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.053 [2024-04-15 18:18:45.860451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.053 qpair failed and we were unable to recover it. 00:31:57.053 [2024-04-15 18:18:45.860638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.053 [2024-04-15 18:18:45.860814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.053 [2024-04-15 18:18:45.860855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.053 qpair failed and we were unable to recover it. 00:31:57.053 [2024-04-15 18:18:45.861023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.053 [2024-04-15 18:18:45.861232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.053 [2024-04-15 18:18:45.861275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.053 qpair failed and we were unable to recover it. 00:31:57.053 [2024-04-15 18:18:45.861450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.053 [2024-04-15 18:18:45.861634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.053 [2024-04-15 18:18:45.861677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.054 qpair failed and we were unable to recover it. 00:31:57.054 [2024-04-15 18:18:45.861813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.054 [2024-04-15 18:18:45.861993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.054 [2024-04-15 18:18:45.862031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.054 qpair failed and we were unable to recover it. 00:31:57.054 [2024-04-15 18:18:45.862221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.054 [2024-04-15 18:18:45.862402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.054 [2024-04-15 18:18:45.862445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.054 qpair failed and we were unable to recover it. 
00:31:57.054 [2024-04-15 18:18:45.862608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.054 [2024-04-15 18:18:45.862821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.054 [2024-04-15 18:18:45.862863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.054 qpair failed and we were unable to recover it. 00:31:57.054 [2024-04-15 18:18:45.863069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.054 [2024-04-15 18:18:45.863265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.054 [2024-04-15 18:18:45.863308] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.054 qpair failed and we were unable to recover it. 00:31:57.054 [2024-04-15 18:18:45.863536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.054 [2024-04-15 18:18:45.863718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.054 [2024-04-15 18:18:45.863747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.054 qpair failed and we were unable to recover it. 00:31:57.054 [2024-04-15 18:18:45.863950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.054 [2024-04-15 18:18:45.864113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.054 [2024-04-15 18:18:45.864138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.054 qpair failed and we were unable to recover it. 00:31:57.054 [2024-04-15 18:18:45.864344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.054 [2024-04-15 18:18:45.864530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.054 [2024-04-15 18:18:45.864571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.054 qpair failed and we were unable to recover it. 00:31:57.054 [2024-04-15 18:18:45.864730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.054 [2024-04-15 18:18:45.864909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.054 [2024-04-15 18:18:45.864948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.054 qpair failed and we were unable to recover it. 00:31:57.054 [2024-04-15 18:18:45.865139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.054 [2024-04-15 18:18:45.865347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.054 [2024-04-15 18:18:45.865390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.054 qpair failed and we were unable to recover it. 
00:31:57.054 [2024-04-15 18:18:45.865594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.054 [2024-04-15 18:18:45.865773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.054 [2024-04-15 18:18:45.865815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.054 qpair failed and we were unable to recover it. 00:31:57.054 [2024-04-15 18:18:45.865995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.054 [2024-04-15 18:18:45.866119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.054 [2024-04-15 18:18:45.866163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.054 qpair failed and we were unable to recover it. 00:31:57.054 [2024-04-15 18:18:45.866320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.054 [2024-04-15 18:18:45.866483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.054 [2024-04-15 18:18:45.866534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.054 qpair failed and we were unable to recover it. 00:31:57.054 [2024-04-15 18:18:45.866680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.054 [2024-04-15 18:18:45.866838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.054 [2024-04-15 18:18:45.866862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.054 qpair failed and we were unable to recover it. 00:31:57.054 [2024-04-15 18:18:45.866985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.054 [2024-04-15 18:18:45.867182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.054 [2024-04-15 18:18:45.867224] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.054 qpair failed and we were unable to recover it. 00:31:57.054 [2024-04-15 18:18:45.867372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.054 [2024-04-15 18:18:45.867553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.054 [2024-04-15 18:18:45.867596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.054 qpair failed and we were unable to recover it. 00:31:57.054 [2024-04-15 18:18:45.867759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.054 [2024-04-15 18:18:45.867934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.054 [2024-04-15 18:18:45.867972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.054 qpair failed and we were unable to recover it. 
00:31:57.054 [2024-04-15 18:18:45.868120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.054 [2024-04-15 18:18:45.868349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.054 [2024-04-15 18:18:45.868392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.054 qpair failed and we were unable to recover it. 00:31:57.054 [2024-04-15 18:18:45.868529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.054 [2024-04-15 18:18:45.868736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.054 [2024-04-15 18:18:45.868777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.054 qpair failed and we were unable to recover it. 00:31:57.054 [2024-04-15 18:18:45.868949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.054 [2024-04-15 18:18:45.869124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.054 [2024-04-15 18:18:45.869166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.054 qpair failed and we were unable to recover it. 00:31:57.054 [2024-04-15 18:18:45.869341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.054 [2024-04-15 18:18:45.869560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.054 [2024-04-15 18:18:45.869602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.054 qpair failed and we were unable to recover it. 00:31:57.054 [2024-04-15 18:18:45.869794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.054 [2024-04-15 18:18:45.869984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.054 [2024-04-15 18:18:45.870007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.054 qpair failed and we were unable to recover it. 00:31:57.054 [2024-04-15 18:18:45.870242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.054 [2024-04-15 18:18:45.870400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.054 [2024-04-15 18:18:45.870443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.054 qpair failed and we were unable to recover it. 00:31:57.054 [2024-04-15 18:18:45.870639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.054 [2024-04-15 18:18:45.870855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.054 [2024-04-15 18:18:45.870900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.054 qpair failed and we were unable to recover it. 
00:31:57.054 [2024-04-15 18:18:45.871072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.054 [2024-04-15 18:18:45.871262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.054 [2024-04-15 18:18:45.871305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:57.054 qpair failed and we were unable to recover it.
00:31:57.054 [... the identical four-line sequence repeats for every retry from 18:18:45.871473 through 18:18:45.934609 (runner timestamps 00:31:57.054-00:31:57.060): two connect() failures with errno = 111, one nvme_tcp_qpair_connect_sock error against tqpair=0x7f72f0000b90 (addr=10.0.0.2, port=4420), then "qpair failed and we were unable to recover it." No attempt in this span succeeds. ...]
00:31:57.060 [2024-04-15 18:18:45.934787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.060 [2024-04-15 18:18:45.934956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.060 [2024-04-15 18:18:45.934985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.060 qpair failed and we were unable to recover it. 00:31:57.060 [2024-04-15 18:18:45.935168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.060 [2024-04-15 18:18:45.935342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.060 [2024-04-15 18:18:45.935371] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.060 qpair failed and we were unable to recover it. 00:31:57.060 [2024-04-15 18:18:45.935555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.060 [2024-04-15 18:18:45.935756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.060 [2024-04-15 18:18:45.935786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.060 qpair failed and we were unable to recover it. 00:31:57.060 [2024-04-15 18:18:45.935956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.060 [2024-04-15 18:18:45.936156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.060 [2024-04-15 18:18:45.936187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.060 qpair failed and we were unable to recover it. 00:31:57.060 [2024-04-15 18:18:45.936325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.060 [2024-04-15 18:18:45.936498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.060 [2024-04-15 18:18:45.936527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.060 qpair failed and we were unable to recover it. 00:31:57.060 [2024-04-15 18:18:45.936724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.060 [2024-04-15 18:18:45.936905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.060 [2024-04-15 18:18:45.936935] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.060 qpair failed and we were unable to recover it. 00:31:57.060 [2024-04-15 18:18:45.937106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.060 [2024-04-15 18:18:45.937284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.060 [2024-04-15 18:18:45.937314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.060 qpair failed and we were unable to recover it. 
00:31:57.060 [2024-04-15 18:18:45.937492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.060 [2024-04-15 18:18:45.937639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.060 [2024-04-15 18:18:45.937668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.060 qpair failed and we were unable to recover it. 00:31:57.060 [2024-04-15 18:18:45.937835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.060 [2024-04-15 18:18:45.937977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.060 [2024-04-15 18:18:45.938005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.060 qpair failed and we were unable to recover it. 00:31:57.060 [2024-04-15 18:18:45.938180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.060 [2024-04-15 18:18:45.938338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.060 [2024-04-15 18:18:45.938368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.060 qpair failed and we were unable to recover it. 00:31:57.060 [2024-04-15 18:18:45.938532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.060 [2024-04-15 18:18:45.938695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.060 [2024-04-15 18:18:45.938726] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.060 qpair failed and we were unable to recover it. 00:31:57.060 [2024-04-15 18:18:45.938901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.060 [2024-04-15 18:18:45.939106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.060 [2024-04-15 18:18:45.939136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.060 qpair failed and we were unable to recover it. 00:31:57.060 [2024-04-15 18:18:45.939286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.060 [2024-04-15 18:18:45.939447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.060 [2024-04-15 18:18:45.939481] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.060 qpair failed and we were unable to recover it. 00:31:57.060 [2024-04-15 18:18:45.939651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.060 [2024-04-15 18:18:45.939860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.060 [2024-04-15 18:18:45.939889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.060 qpair failed and we were unable to recover it. 
00:31:57.060 [2024-04-15 18:18:45.940055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.060 [2024-04-15 18:18:45.940246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.060 [2024-04-15 18:18:45.940275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.060 qpair failed and we were unable to recover it. 00:31:57.060 [2024-04-15 18:18:45.940420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.060 [2024-04-15 18:18:45.940627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.060 [2024-04-15 18:18:45.940657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.060 qpair failed and we were unable to recover it. 00:31:57.060 [2024-04-15 18:18:45.940833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.060 [2024-04-15 18:18:45.941012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.060 [2024-04-15 18:18:45.941042] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.060 qpair failed and we were unable to recover it. 00:31:57.060 [2024-04-15 18:18:45.941237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.060 [2024-04-15 18:18:45.941409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.060 [2024-04-15 18:18:45.941438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.060 qpair failed and we were unable to recover it. 00:31:57.060 [2024-04-15 18:18:45.941606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.060 [2024-04-15 18:18:45.941766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.060 [2024-04-15 18:18:45.941807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.060 qpair failed and we were unable to recover it. 00:31:57.060 [2024-04-15 18:18:45.941980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.060 [2024-04-15 18:18:45.942165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.060 [2024-04-15 18:18:45.942195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.060 qpair failed and we were unable to recover it. 00:31:57.060 [2024-04-15 18:18:45.942376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.060 [2024-04-15 18:18:45.942553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.060 [2024-04-15 18:18:45.942583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.060 qpair failed and we were unable to recover it. 
00:31:57.060 [2024-04-15 18:18:45.942737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.061 [2024-04-15 18:18:45.942917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.061 [2024-04-15 18:18:45.942949] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.061 qpair failed and we were unable to recover it. 00:31:57.061 [2024-04-15 18:18:45.943126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.061 [2024-04-15 18:18:45.943291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.061 [2024-04-15 18:18:45.943325] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.061 qpair failed and we were unable to recover it. 00:31:57.061 [2024-04-15 18:18:45.943532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.061 [2024-04-15 18:18:45.943669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.061 [2024-04-15 18:18:45.943700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.061 qpair failed and we were unable to recover it. 00:31:57.061 [2024-04-15 18:18:45.943858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.061 [2024-04-15 18:18:45.944040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.061 [2024-04-15 18:18:45.944084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.061 qpair failed and we were unable to recover it. 00:31:57.061 [2024-04-15 18:18:45.944291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.061 [2024-04-15 18:18:45.944465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.061 [2024-04-15 18:18:45.944495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.061 qpair failed and we were unable to recover it. 00:31:57.061 [2024-04-15 18:18:45.944658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.061 [2024-04-15 18:18:45.944829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.061 [2024-04-15 18:18:45.944858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.061 qpair failed and we were unable to recover it. 00:31:57.061 [2024-04-15 18:18:45.945051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.061 [2024-04-15 18:18:45.945243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.061 [2024-04-15 18:18:45.945272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.061 qpair failed and we were unable to recover it. 
00:31:57.061 [2024-04-15 18:18:45.945485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.061 [2024-04-15 18:18:45.945623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.061 [2024-04-15 18:18:45.945652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.061 qpair failed and we were unable to recover it. 00:31:57.061 [2024-04-15 18:18:45.945792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.061 [2024-04-15 18:18:45.945925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.061 [2024-04-15 18:18:45.945955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.061 qpair failed and we were unable to recover it. 00:31:57.061 [2024-04-15 18:18:45.946133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.061 [2024-04-15 18:18:45.946308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.061 [2024-04-15 18:18:45.946337] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.061 qpair failed and we were unable to recover it. 00:31:57.061 [2024-04-15 18:18:45.946502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.061 [2024-04-15 18:18:45.946719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.061 [2024-04-15 18:18:45.946748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.061 qpair failed and we were unable to recover it. 00:31:57.061 [2024-04-15 18:18:45.946909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.061 [2024-04-15 18:18:45.947117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.061 [2024-04-15 18:18:45.947152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.061 qpair failed and we were unable to recover it. 00:31:57.061 [2024-04-15 18:18:45.947338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.061 [2024-04-15 18:18:45.947547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.061 [2024-04-15 18:18:45.947579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.061 qpair failed and we were unable to recover it. 00:31:57.061 [2024-04-15 18:18:45.947718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.061 [2024-04-15 18:18:45.947918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.061 [2024-04-15 18:18:45.947947] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.061 qpair failed and we were unable to recover it. 
00:31:57.061 [2024-04-15 18:18:45.948158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.061 [2024-04-15 18:18:45.948351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.061 [2024-04-15 18:18:45.948382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.061 qpair failed and we were unable to recover it. 00:31:57.061 [2024-04-15 18:18:45.948597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.061 [2024-04-15 18:18:45.948735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.061 [2024-04-15 18:18:45.948764] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.061 qpair failed and we were unable to recover it. 00:31:57.061 [2024-04-15 18:18:45.948927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.061 [2024-04-15 18:18:45.949080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.061 [2024-04-15 18:18:45.949110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.061 qpair failed and we were unable to recover it. 00:31:57.061 [2024-04-15 18:18:45.949283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.061 [2024-04-15 18:18:45.949422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.061 [2024-04-15 18:18:45.949451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.061 qpair failed and we were unable to recover it. 00:31:57.061 [2024-04-15 18:18:45.949593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.061 [2024-04-15 18:18:45.949773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.061 [2024-04-15 18:18:45.949802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.061 qpair failed and we were unable to recover it. 00:31:57.061 [2024-04-15 18:18:45.949989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.061 [2024-04-15 18:18:45.950152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.061 [2024-04-15 18:18:45.950182] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.061 qpair failed and we were unable to recover it. 00:31:57.061 [2024-04-15 18:18:45.950357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.061 [2024-04-15 18:18:45.950544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.061 [2024-04-15 18:18:45.950573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.061 qpair failed and we were unable to recover it. 
00:31:57.061 [2024-04-15 18:18:45.950726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.061 [2024-04-15 18:18:45.950897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.061 [2024-04-15 18:18:45.950931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.061 qpair failed and we were unable to recover it. 00:31:57.061 [2024-04-15 18:18:45.951074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.061 [2024-04-15 18:18:45.951236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.061 [2024-04-15 18:18:45.951265] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.061 qpair failed and we were unable to recover it. 00:31:57.061 [2024-04-15 18:18:45.951439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.061 [2024-04-15 18:18:45.951614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.061 [2024-04-15 18:18:45.951643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.061 qpair failed and we were unable to recover it. 00:31:57.061 [2024-04-15 18:18:45.951829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.061 [2024-04-15 18:18:45.952028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.061 [2024-04-15 18:18:45.952056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.061 qpair failed and we were unable to recover it. 00:31:57.061 [2024-04-15 18:18:45.952278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.061 [2024-04-15 18:18:45.952477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.061 [2024-04-15 18:18:45.952506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.061 qpair failed and we were unable to recover it. 00:31:57.061 [2024-04-15 18:18:45.952702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.061 [2024-04-15 18:18:45.952846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.061 [2024-04-15 18:18:45.952875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.061 qpair failed and we were unable to recover it. 00:31:57.061 [2024-04-15 18:18:45.953074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.061 [2024-04-15 18:18:45.953221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.061 [2024-04-15 18:18:45.953250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.061 qpair failed and we were unable to recover it. 
00:31:57.061 [2024-04-15 18:18:45.953396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.061 [2024-04-15 18:18:45.953594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.061 [2024-04-15 18:18:45.953624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.061 qpair failed and we were unable to recover it. 00:31:57.061 [2024-04-15 18:18:45.953788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.062 [2024-04-15 18:18:45.953955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.062 [2024-04-15 18:18:45.953984] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.062 qpair failed and we were unable to recover it. 00:31:57.062 [2024-04-15 18:18:45.954168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.062 [2024-04-15 18:18:45.954381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.062 [2024-04-15 18:18:45.954411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.062 qpair failed and we were unable to recover it. 00:31:57.062 [2024-04-15 18:18:45.954578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.062 [2024-04-15 18:18:45.954774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.062 [2024-04-15 18:18:45.954804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.062 qpair failed and we were unable to recover it. 00:31:57.062 [2024-04-15 18:18:45.955007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.062 [2024-04-15 18:18:45.955178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.062 [2024-04-15 18:18:45.955208] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.062 qpair failed and we were unable to recover it. 00:31:57.062 [2024-04-15 18:18:45.955349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.062 [2024-04-15 18:18:45.955538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.062 [2024-04-15 18:18:45.955568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.062 qpair failed and we were unable to recover it. 00:31:57.062 [2024-04-15 18:18:45.955736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.062 [2024-04-15 18:18:45.955929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.062 [2024-04-15 18:18:45.955958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.062 qpair failed and we were unable to recover it. 
00:31:57.062 [2024-04-15 18:18:45.956106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.062 [2024-04-15 18:18:45.956312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.062 [2024-04-15 18:18:45.956341] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.062 qpair failed and we were unable to recover it. 00:31:57.062 [2024-04-15 18:18:45.956556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.062 [2024-04-15 18:18:45.956697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.062 [2024-04-15 18:18:45.956726] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.062 qpair failed and we were unable to recover it. 00:31:57.062 [2024-04-15 18:18:45.956876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.062 [2024-04-15 18:18:45.957043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.062 [2024-04-15 18:18:45.957081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.062 qpair failed and we were unable to recover it. 00:31:57.062 [2024-04-15 18:18:45.957234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.062 [2024-04-15 18:18:45.957430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.062 [2024-04-15 18:18:45.957460] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.062 qpair failed and we were unable to recover it. 00:31:57.062 [2024-04-15 18:18:45.957634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.062 [2024-04-15 18:18:45.957830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.062 [2024-04-15 18:18:45.957859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.062 qpair failed and we were unable to recover it. 00:31:57.062 [2024-04-15 18:18:45.958088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.062 [2024-04-15 18:18:45.958226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.062 [2024-04-15 18:18:45.958256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.062 qpair failed and we were unable to recover it. 00:31:57.062 [2024-04-15 18:18:45.958432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.062 [2024-04-15 18:18:45.958564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.062 [2024-04-15 18:18:45.958593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.062 qpair failed and we were unable to recover it. 
00:31:57.062 [2024-04-15 18:18:45.958735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.062 [2024-04-15 18:18:45.958871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.062 [2024-04-15 18:18:45.958900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.062 qpair failed and we were unable to recover it. 00:31:57.062 [2024-04-15 18:18:45.959077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.062 [2024-04-15 18:18:45.959224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.062 [2024-04-15 18:18:45.959253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.062 qpair failed and we were unable to recover it. 00:31:57.062 [2024-04-15 18:18:45.959460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.062 [2024-04-15 18:18:45.959602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.062 [2024-04-15 18:18:45.959630] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.062 qpair failed and we were unable to recover it. 00:31:57.062 [2024-04-15 18:18:45.959829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.062 [2024-04-15 18:18:45.959998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.062 [2024-04-15 18:18:45.960028] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.062 qpair failed and we were unable to recover it. 00:31:57.062 [2024-04-15 18:18:45.960237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.062 [2024-04-15 18:18:45.960404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.062 [2024-04-15 18:18:45.960433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.062 qpair failed and we were unable to recover it. 00:31:57.062 [2024-04-15 18:18:45.960590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.062 [2024-04-15 18:18:45.960801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.062 [2024-04-15 18:18:45.960831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.062 qpair failed and we were unable to recover it. 00:31:57.062 [2024-04-15 18:18:45.961005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.062 [2024-04-15 18:18:45.961172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.062 [2024-04-15 18:18:45.961205] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.062 qpair failed and we were unable to recover it. 
00:31:57.062 [2024-04-15 18:18:45.961391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.062 [2024-04-15 18:18:45.961558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.062 [2024-04-15 18:18:45.961588] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.062 qpair failed and we were unable to recover it. 00:31:57.062 [2024-04-15 18:18:45.961772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.062 [2024-04-15 18:18:45.961937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.062 [2024-04-15 18:18:45.961966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.062 qpair failed and we were unable to recover it. 00:31:57.062 [2024-04-15 18:18:45.962160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.062 [2024-04-15 18:18:45.962325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.062 [2024-04-15 18:18:45.962354] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.062 qpair failed and we were unable to recover it. 00:31:57.062 [2024-04-15 18:18:45.962530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.062 [2024-04-15 18:18:45.962692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.062 [2024-04-15 18:18:45.962721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.062 qpair failed and we were unable to recover it. 00:31:57.062 [2024-04-15 18:18:45.962862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.062 [2024-04-15 18:18:45.963029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.062 [2024-04-15 18:18:45.963065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.062 qpair failed and we were unable to recover it. 00:31:57.062 [2024-04-15 18:18:45.963236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.063 [2024-04-15 18:18:45.963403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.063 [2024-04-15 18:18:45.963433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.063 qpair failed and we were unable to recover it. 00:31:57.063 [2024-04-15 18:18:45.963615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.063 [2024-04-15 18:18:45.963781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.063 [2024-04-15 18:18:45.963810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.063 qpair failed and we were unable to recover it. 
00:31:57.063 [2024-04-15 18:18:45.963954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.063 [2024-04-15 18:18:45.964120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.063 [2024-04-15 18:18:45.964150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.063 qpair failed and we were unable to recover it. 00:31:57.063 [2024-04-15 18:18:45.964320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.063 [2024-04-15 18:18:45.964486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.063 [2024-04-15 18:18:45.964516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.063 qpair failed and we were unable to recover it. 00:31:57.063 [2024-04-15 18:18:45.964690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.063 [2024-04-15 18:18:45.964871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.063 [2024-04-15 18:18:45.964903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.063 qpair failed and we were unable to recover it. 00:31:57.063 [2024-04-15 18:18:45.965119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.063 [2024-04-15 18:18:45.965302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.063 [2024-04-15 18:18:45.965332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.063 qpair failed and we were unable to recover it. 00:31:57.063 [2024-04-15 18:18:45.965471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.063 [2024-04-15 18:18:45.965613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.063 [2024-04-15 18:18:45.965642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.063 qpair failed and we were unable to recover it. 00:31:57.063 [2024-04-15 18:18:45.965811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.063 [2024-04-15 18:18:45.965994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.063 [2024-04-15 18:18:45.966023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.063 qpair failed and we were unable to recover it. 00:31:57.063 [2024-04-15 18:18:45.966238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.063 [2024-04-15 18:18:45.966379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.063 [2024-04-15 18:18:45.966409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.063 qpair failed and we were unable to recover it. 
00:31:57.063 [2024-04-15 18:18:45.966587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.063 [2024-04-15 18:18:45.966755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.063 [2024-04-15 18:18:45.966785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.063 qpair failed and we were unable to recover it. 00:31:57.063 [2024-04-15 18:18:45.966954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.063 [2024-04-15 18:18:45.967136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.063 [2024-04-15 18:18:45.967166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.063 qpair failed and we were unable to recover it. 00:31:57.063 [2024-04-15 18:18:45.967361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.063 [2024-04-15 18:18:45.967500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.063 [2024-04-15 18:18:45.967529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.063 qpair failed and we were unable to recover it. 00:31:57.063 [2024-04-15 18:18:45.967725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.063 [2024-04-15 18:18:45.967899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.063 [2024-04-15 18:18:45.967928] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.063 qpair failed and we were unable to recover it. 00:31:57.063 [2024-04-15 18:18:45.968095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.063 [2024-04-15 18:18:45.968242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.063 [2024-04-15 18:18:45.968273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.063 qpair failed and we were unable to recover it. 00:31:57.063 [2024-04-15 18:18:45.968469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.063 [2024-04-15 18:18:45.968675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.063 [2024-04-15 18:18:45.968704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.063 qpair failed and we were unable to recover it. 00:31:57.063 [2024-04-15 18:18:45.968901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.063 [2024-04-15 18:18:45.969090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.063 [2024-04-15 18:18:45.969120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.063 qpair failed and we were unable to recover it. 
00:31:57.063 [2024-04-15 18:18:45.969263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.063 [2024-04-15 18:18:45.969441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.063 [2024-04-15 18:18:45.969470] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.063 qpair failed and we were unable to recover it. 00:31:57.063 [2024-04-15 18:18:45.969630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.063 [2024-04-15 18:18:45.969805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.063 [2024-04-15 18:18:45.969837] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.063 qpair failed and we were unable to recover it. 00:31:57.063 [2024-04-15 18:18:45.969980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.063 [2024-04-15 18:18:45.970109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.063 [2024-04-15 18:18:45.970139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.063 qpair failed and we were unable to recover it. 00:31:57.063 [2024-04-15 18:18:45.970319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.063 [2024-04-15 18:18:45.970526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.063 [2024-04-15 18:18:45.970556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.063 qpair failed and we were unable to recover it. 00:31:57.063 [2024-04-15 18:18:45.970743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.063 [2024-04-15 18:18:45.970948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.063 [2024-04-15 18:18:45.970977] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.063 qpair failed and we were unable to recover it. 00:31:57.063 [2024-04-15 18:18:45.971151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.063 [2024-04-15 18:18:45.971321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.063 [2024-04-15 18:18:45.971350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.063 qpair failed and we were unable to recover it. 00:31:57.063 [2024-04-15 18:18:45.971516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.063 [2024-04-15 18:18:45.971653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.063 [2024-04-15 18:18:45.971682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.063 qpair failed and we were unable to recover it. 
00:31:57.063 [2024-04-15 18:18:45.971863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.063 [2024-04-15 18:18:45.972008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.063 [2024-04-15 18:18:45.972037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.063 qpair failed and we were unable to recover it. 00:31:57.063 [2024-04-15 18:18:45.972226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.063 [2024-04-15 18:18:45.972396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.063 [2024-04-15 18:18:45.972426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.063 qpair failed and we were unable to recover it. 00:31:57.063 [2024-04-15 18:18:45.972595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.063 [2024-04-15 18:18:45.972800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.063 [2024-04-15 18:18:45.972829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.063 qpair failed and we were unable to recover it. 00:31:57.063 [2024-04-15 18:18:45.973025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.063 [2024-04-15 18:18:45.973250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.063 [2024-04-15 18:18:45.973279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.063 qpair failed and we were unable to recover it. 00:31:57.063 [2024-04-15 18:18:45.973475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.063 [2024-04-15 18:18:45.973628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.063 [2024-04-15 18:18:45.973658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.063 qpair failed and we were unable to recover it. 00:31:57.063 [2024-04-15 18:18:45.973856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.063 [2024-04-15 18:18:45.973998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.063 [2024-04-15 18:18:45.974027] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.063 qpair failed and we were unable to recover it. 00:31:57.063 [2024-04-15 18:18:45.974180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.064 [2024-04-15 18:18:45.974309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.064 [2024-04-15 18:18:45.974338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.064 qpair failed and we were unable to recover it. 
[... the same four-line connect()/qpair-failure sequence repeats for ~146 further attempts (18:18:45.974535 through 18:18:46.031057), every connect() failing with errno = 111 against tqpair=0x7f72f0000b90, addr=10.0.0.2, port=4420 ...]
00:31:57.345 [2024-04-15 18:18:46.031244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.345 [2024-04-15 18:18:46.031490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.345 [2024-04-15 18:18:46.031519] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:57.345 qpair failed and we were unable to recover it.
00:31:57.345 [2024-04-15 18:18:46.031731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.345 [2024-04-15 18:18:46.031915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.345 [2024-04-15 18:18:46.031945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.345 qpair failed and we were unable to recover it. 00:31:57.345 [2024-04-15 18:18:46.032138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.345 [2024-04-15 18:18:46.032353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.345 [2024-04-15 18:18:46.032382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.345 qpair failed and we were unable to recover it. 00:31:57.345 [2024-04-15 18:18:46.032665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.345 [2024-04-15 18:18:46.032942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.345 [2024-04-15 18:18:46.032971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.345 qpair failed and we were unable to recover it. 00:31:57.345 [2024-04-15 18:18:46.033161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.345 [2024-04-15 18:18:46.033352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.345 [2024-04-15 18:18:46.033382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.345 qpair failed and we were unable to recover it. 00:31:57.345 [2024-04-15 18:18:46.033640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.345 [2024-04-15 18:18:46.033893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.345 [2024-04-15 18:18:46.033922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.345 qpair failed and we were unable to recover it. 00:31:57.345 [2024-04-15 18:18:46.034122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.345 [2024-04-15 18:18:46.034284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.345 [2024-04-15 18:18:46.034322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.345 qpair failed and we were unable to recover it. 00:31:57.345 [2024-04-15 18:18:46.034532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.345 [2024-04-15 18:18:46.034722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.345 [2024-04-15 18:18:46.034763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.345 qpair failed and we were unable to recover it. 
00:31:57.345 [2024-04-15 18:18:46.035048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.345 [2024-04-15 18:18:46.035230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.345 [2024-04-15 18:18:46.035259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.345 qpair failed and we were unable to recover it. 00:31:57.345 [2024-04-15 18:18:46.035516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.345 [2024-04-15 18:18:46.035742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.345 [2024-04-15 18:18:46.035771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.345 qpair failed and we were unable to recover it. 00:31:57.346 [2024-04-15 18:18:46.036056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.346 [2024-04-15 18:18:46.036276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.346 [2024-04-15 18:18:46.036311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.346 qpair failed and we were unable to recover it. 00:31:57.346 [2024-04-15 18:18:46.036481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.346 [2024-04-15 18:18:46.036720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.346 [2024-04-15 18:18:46.036749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.346 qpair failed and we were unable to recover it. 00:31:57.346 [2024-04-15 18:18:46.037009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.346 [2024-04-15 18:18:46.037233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.346 [2024-04-15 18:18:46.037262] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.346 qpair failed and we were unable to recover it. 00:31:57.346 [2024-04-15 18:18:46.037498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.346 [2024-04-15 18:18:46.037769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.346 [2024-04-15 18:18:46.037798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.346 qpair failed and we were unable to recover it. 00:31:57.346 [2024-04-15 18:18:46.038082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.346 [2024-04-15 18:18:46.038263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.346 [2024-04-15 18:18:46.038293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.346 qpair failed and we were unable to recover it. 
00:31:57.346 [2024-04-15 18:18:46.038548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.346 [2024-04-15 18:18:46.038801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.346 [2024-04-15 18:18:46.038830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.346 qpair failed and we were unable to recover it. 00:31:57.346 [2024-04-15 18:18:46.039023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.346 [2024-04-15 18:18:46.039197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.346 [2024-04-15 18:18:46.039226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.346 qpair failed and we were unable to recover it. 00:31:57.346 [2024-04-15 18:18:46.039500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.346 [2024-04-15 18:18:46.039694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.346 [2024-04-15 18:18:46.039723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.346 qpair failed and we were unable to recover it. 00:31:57.346 [2024-04-15 18:18:46.039890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.346 [2024-04-15 18:18:46.040085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.346 [2024-04-15 18:18:46.040115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.346 qpair failed and we were unable to recover it. 00:31:57.346 [2024-04-15 18:18:46.040253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.346 [2024-04-15 18:18:46.040439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.346 [2024-04-15 18:18:46.040468] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.346 qpair failed and we were unable to recover it. 00:31:57.346 [2024-04-15 18:18:46.040747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.346 [2024-04-15 18:18:46.041045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.346 [2024-04-15 18:18:46.041082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.346 qpair failed and we were unable to recover it. 00:31:57.346 [2024-04-15 18:18:46.041285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.346 [2024-04-15 18:18:46.041451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.346 [2024-04-15 18:18:46.041480] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.346 qpair failed and we were unable to recover it. 
00:31:57.346 [2024-04-15 18:18:46.041664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.346 [2024-04-15 18:18:46.041884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.346 [2024-04-15 18:18:46.041925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.346 qpair failed and we were unable to recover it. 00:31:57.346 [2024-04-15 18:18:46.042182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.346 [2024-04-15 18:18:46.042391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.346 [2024-04-15 18:18:46.042430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.346 qpair failed and we were unable to recover it. 00:31:57.346 [2024-04-15 18:18:46.042676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.346 [2024-04-15 18:18:46.042885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.346 [2024-04-15 18:18:46.042923] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.346 qpair failed and we were unable to recover it. 00:31:57.346 [2024-04-15 18:18:46.043163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.346 [2024-04-15 18:18:46.043372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.346 [2024-04-15 18:18:46.043401] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.346 qpair failed and we were unable to recover it. 00:31:57.346 [2024-04-15 18:18:46.043631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.346 [2024-04-15 18:18:46.043894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.346 [2024-04-15 18:18:46.043924] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.346 qpair failed and we were unable to recover it. 00:31:57.346 [2024-04-15 18:18:46.044155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.346 [2024-04-15 18:18:46.044345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.346 [2024-04-15 18:18:46.044374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.346 qpair failed and we were unable to recover it. 00:31:57.346 [2024-04-15 18:18:46.044616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.346 [2024-04-15 18:18:46.044882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.346 [2024-04-15 18:18:46.044912] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.346 qpair failed and we were unable to recover it. 
00:31:57.346 [2024-04-15 18:18:46.045163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.346 [2024-04-15 18:18:46.045336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.346 [2024-04-15 18:18:46.045365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.346 qpair failed and we were unable to recover it. 00:31:57.346 [2024-04-15 18:18:46.045618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.346 [2024-04-15 18:18:46.045891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.347 [2024-04-15 18:18:46.045920] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.347 qpair failed and we were unable to recover it. 00:31:57.347 [2024-04-15 18:18:46.046118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.347 [2024-04-15 18:18:46.046304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.347 [2024-04-15 18:18:46.046333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.347 qpair failed and we were unable to recover it. 00:31:57.347 [2024-04-15 18:18:46.046590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.347 [2024-04-15 18:18:46.046772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.347 [2024-04-15 18:18:46.046800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.347 qpair failed and we were unable to recover it. 00:31:57.347 [2024-04-15 18:18:46.047075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.347 [2024-04-15 18:18:46.047241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.347 [2024-04-15 18:18:46.047270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.347 qpair failed and we were unable to recover it. 00:31:57.347 [2024-04-15 18:18:46.047441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.347 [2024-04-15 18:18:46.047626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.347 [2024-04-15 18:18:46.047655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.347 qpair failed and we were unable to recover it. 00:31:57.347 [2024-04-15 18:18:46.047836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.347 [2024-04-15 18:18:46.048025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.347 [2024-04-15 18:18:46.048054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.347 qpair failed and we were unable to recover it. 
00:31:57.347 [2024-04-15 18:18:46.048250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.347 [2024-04-15 18:18:46.048510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.347 [2024-04-15 18:18:46.048539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.347 qpair failed and we were unable to recover it. 00:31:57.347 [2024-04-15 18:18:46.048835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.347 [2024-04-15 18:18:46.049114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.347 [2024-04-15 18:18:46.049144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.347 qpair failed and we were unable to recover it. 00:31:57.347 [2024-04-15 18:18:46.049322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.347 [2024-04-15 18:18:46.049542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.347 [2024-04-15 18:18:46.049571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.347 qpair failed and we were unable to recover it. 00:31:57.347 [2024-04-15 18:18:46.049765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.347 [2024-04-15 18:18:46.049934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.347 [2024-04-15 18:18:46.049963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.347 qpair failed and we were unable to recover it. 00:31:57.347 [2024-04-15 18:18:46.050150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.347 [2024-04-15 18:18:46.050321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.347 [2024-04-15 18:18:46.050350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.347 qpair failed and we were unable to recover it. 00:31:57.347 [2024-04-15 18:18:46.050618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.347 [2024-04-15 18:18:46.050857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.347 [2024-04-15 18:18:46.050885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.347 qpair failed and we were unable to recover it. 00:31:57.347 [2024-04-15 18:18:46.051097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.347 [2024-04-15 18:18:46.051389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.347 [2024-04-15 18:18:46.051419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.347 qpair failed and we were unable to recover it. 
00:31:57.347 [2024-04-15 18:18:46.051712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.347 [2024-04-15 18:18:46.051997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.347 [2024-04-15 18:18:46.052026] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.347 qpair failed and we were unable to recover it. 00:31:57.347 [2024-04-15 18:18:46.052224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.347 [2024-04-15 18:18:46.052418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.347 [2024-04-15 18:18:46.052447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.347 qpair failed and we were unable to recover it. 00:31:57.347 [2024-04-15 18:18:46.052646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.347 [2024-04-15 18:18:46.052844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.347 [2024-04-15 18:18:46.052873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.347 qpair failed and we were unable to recover it. 00:31:57.347 [2024-04-15 18:18:46.053092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.347 [2024-04-15 18:18:46.053297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.347 [2024-04-15 18:18:46.053326] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.347 qpair failed and we were unable to recover it. 00:31:57.347 [2024-04-15 18:18:46.053655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.347 [2024-04-15 18:18:46.053814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.347 [2024-04-15 18:18:46.053844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.347 qpair failed and we were unable to recover it. 00:31:57.347 [2024-04-15 18:18:46.054142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.347 [2024-04-15 18:18:46.054364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.347 [2024-04-15 18:18:46.054394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.347 qpair failed and we were unable to recover it. 00:31:57.347 [2024-04-15 18:18:46.054538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.347 [2024-04-15 18:18:46.054744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.347 [2024-04-15 18:18:46.054774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.347 qpair failed and we were unable to recover it. 
00:31:57.347 [2024-04-15 18:18:46.054949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.347 [2024-04-15 18:18:46.055188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.347 [2024-04-15 18:18:46.055218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.347 qpair failed and we were unable to recover it. 00:31:57.348 [2024-04-15 18:18:46.055380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.348 [2024-04-15 18:18:46.055552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.348 [2024-04-15 18:18:46.055592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.348 qpair failed and we were unable to recover it. 00:31:57.348 [2024-04-15 18:18:46.055846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.348 [2024-04-15 18:18:46.056093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.348 [2024-04-15 18:18:46.056123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.348 qpair failed and we were unable to recover it. 00:31:57.348 [2024-04-15 18:18:46.056342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.348 [2024-04-15 18:18:46.056524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.348 [2024-04-15 18:18:46.056554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.348 qpair failed and we were unable to recover it. 00:31:57.348 [2024-04-15 18:18:46.056762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.348 [2024-04-15 18:18:46.056983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.348 [2024-04-15 18:18:46.057012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.348 qpair failed and we were unable to recover it. 00:31:57.348 [2024-04-15 18:18:46.057228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.348 [2024-04-15 18:18:46.057478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.348 [2024-04-15 18:18:46.057509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.348 qpair failed and we were unable to recover it. 00:31:57.348 [2024-04-15 18:18:46.057790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.348 [2024-04-15 18:18:46.057994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.348 [2024-04-15 18:18:46.058023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.348 qpair failed and we were unable to recover it. 
00:31:57.348 [2024-04-15 18:18:46.058176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.348 [2024-04-15 18:18:46.058388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.348 [2024-04-15 18:18:46.058418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.348 qpair failed and we were unable to recover it. 00:31:57.348 [2024-04-15 18:18:46.058681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.348 [2024-04-15 18:18:46.058857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.348 [2024-04-15 18:18:46.058887] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.348 qpair failed and we were unable to recover it. 00:31:57.348 [2024-04-15 18:18:46.059105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.348 [2024-04-15 18:18:46.059330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.348 [2024-04-15 18:18:46.059360] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.348 qpair failed and we were unable to recover it. 00:31:57.348 [2024-04-15 18:18:46.059642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.348 [2024-04-15 18:18:46.059811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.348 [2024-04-15 18:18:46.059840] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.348 qpair failed and we were unable to recover it. 00:31:57.348 [2024-04-15 18:18:46.060043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.348 [2024-04-15 18:18:46.060221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.348 [2024-04-15 18:18:46.060251] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.348 qpair failed and we were unable to recover it. 00:31:57.348 [2024-04-15 18:18:46.060413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.348 [2024-04-15 18:18:46.060631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.348 [2024-04-15 18:18:46.060660] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.348 qpair failed and we were unable to recover it. 00:31:57.348 [2024-04-15 18:18:46.060846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.348 [2024-04-15 18:18:46.061112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.348 [2024-04-15 18:18:46.061143] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.348 qpair failed and we were unable to recover it. 
00:31:57.348 [2024-04-15 18:18:46.061421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.348 [2024-04-15 18:18:46.061597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.348 [2024-04-15 18:18:46.061626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.348 qpair failed and we were unable to recover it. 00:31:57.348 [2024-04-15 18:18:46.061802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.348 [2024-04-15 18:18:46.062013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.348 [2024-04-15 18:18:46.062042] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.348 qpair failed and we were unable to recover it. 00:31:57.348 [2024-04-15 18:18:46.062315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.348 [2024-04-15 18:18:46.062511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.348 [2024-04-15 18:18:46.062540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.348 qpair failed and we were unable to recover it. 00:31:57.348 [2024-04-15 18:18:46.062707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.348 [2024-04-15 18:18:46.062874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.348 [2024-04-15 18:18:46.062903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.348 qpair failed and we were unable to recover it. 00:31:57.348 [2024-04-15 18:18:46.063185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.348 [2024-04-15 18:18:46.063446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.348 [2024-04-15 18:18:46.063476] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.348 qpair failed and we were unable to recover it. 00:31:57.348 [2024-04-15 18:18:46.063751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.348 [2024-04-15 18:18:46.063951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.348 [2024-04-15 18:18:46.063980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.348 qpair failed and we were unable to recover it. 00:31:57.348 [2024-04-15 18:18:46.064228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.348 [2024-04-15 18:18:46.064458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.348 [2024-04-15 18:18:46.064488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.348 qpair failed and we were unable to recover it. 
00:31:57.348 [2024-04-15 18:18:46.064801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.348 [2024-04-15 18:18:46.065123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.349 [2024-04-15 18:18:46.065153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.349 qpair failed and we were unable to recover it. 00:31:57.349 [2024-04-15 18:18:46.065402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.349 [2024-04-15 18:18:46.065606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.349 [2024-04-15 18:18:46.065635] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.349 qpair failed and we were unable to recover it. 00:31:57.349 [2024-04-15 18:18:46.065893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.349 [2024-04-15 18:18:46.066090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.349 [2024-04-15 18:18:46.066121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.349 qpair failed and we were unable to recover it. 00:31:57.349 [2024-04-15 18:18:46.066346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.349 [2024-04-15 18:18:46.066527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.349 [2024-04-15 18:18:46.066556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.349 qpair failed and we were unable to recover it. 00:31:57.349 [2024-04-15 18:18:46.066797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.349 [2024-04-15 18:18:46.067057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.349 [2024-04-15 18:18:46.067095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.349 qpair failed and we were unable to recover it. 00:31:57.349 [2024-04-15 18:18:46.067463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.349 [2024-04-15 18:18:46.067719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.349 [2024-04-15 18:18:46.067770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.349 qpair failed and we were unable to recover it. 00:31:57.349 [2024-04-15 18:18:46.068121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.349 [2024-04-15 18:18:46.068421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.349 [2024-04-15 18:18:46.068467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.349 qpair failed and we were unable to recover it. 
00:31:57.349 [2024-04-15 18:18:46.068669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.349 [2024-04-15 18:18:46.068908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.349 [2024-04-15 18:18:46.068957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.349 qpair failed and we were unable to recover it. 00:31:57.349 [2024-04-15 18:18:46.069182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.349 [2024-04-15 18:18:46.069449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.349 [2024-04-15 18:18:46.069500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.349 qpair failed and we were unable to recover it. 00:31:57.349 [2024-04-15 18:18:46.069824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.349 [2024-04-15 18:18:46.070018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.349 [2024-04-15 18:18:46.070047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.349 qpair failed and we were unable to recover it. 00:31:57.349 [2024-04-15 18:18:46.070366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.349 [2024-04-15 18:18:46.070664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.349 [2024-04-15 18:18:46.070719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.349 qpair failed and we were unable to recover it. 00:31:57.349 [2024-04-15 18:18:46.070865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.349 [2024-04-15 18:18:46.071044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.349 [2024-04-15 18:18:46.071083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.349 qpair failed and we were unable to recover it. 00:31:57.349 [2024-04-15 18:18:46.071309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.349 [2024-04-15 18:18:46.071542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.349 [2024-04-15 18:18:46.071600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.349 qpair failed and we were unable to recover it. 00:31:57.349 [2024-04-15 18:18:46.071935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.349 [2024-04-15 18:18:46.072340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.349 [2024-04-15 18:18:46.072386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.349 qpair failed and we were unable to recover it. 
00:31:57.349 [2024-04-15 18:18:46.072669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.349 [2024-04-15 18:18:46.072916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.349 [2024-04-15 18:18:46.072968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.349 qpair failed and we were unable to recover it. 00:31:57.349 [2024-04-15 18:18:46.073308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.349 [2024-04-15 18:18:46.073614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.349 [2024-04-15 18:18:46.073668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.349 qpair failed and we were unable to recover it. 00:31:57.349 [2024-04-15 18:18:46.073885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.349 [2024-04-15 18:18:46.074169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.349 [2024-04-15 18:18:46.074199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.349 qpair failed and we were unable to recover it. 00:31:57.349 [2024-04-15 18:18:46.074464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.349 [2024-04-15 18:18:46.074717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.349 [2024-04-15 18:18:46.074765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.349 qpair failed and we were unable to recover it. 00:31:57.349 [2024-04-15 18:18:46.074958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.349 [2024-04-15 18:18:46.075138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.349 [2024-04-15 18:18:46.075168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.349 qpair failed and we were unable to recover it. 00:31:57.349 [2024-04-15 18:18:46.075355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.349 [2024-04-15 18:18:46.075591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.349 [2024-04-15 18:18:46.075643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.349 qpair failed and we were unable to recover it. 00:31:57.349 [2024-04-15 18:18:46.075878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.349 [2024-04-15 18:18:46.076237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.349 [2024-04-15 18:18:46.076284] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.349 qpair failed and we were unable to recover it. 
00:31:57.349 [2024-04-15 18:18:46.076520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.350 [2024-04-15 18:18:46.076747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.350 [2024-04-15 18:18:46.076799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.350 qpair failed and we were unable to recover it. 00:31:57.350 [2024-04-15 18:18:46.076994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.350 [2024-04-15 18:18:46.077218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.350 [2024-04-15 18:18:46.077256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.350 qpair failed and we were unable to recover it. 00:31:57.350 [2024-04-15 18:18:46.077587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.350 [2024-04-15 18:18:46.077913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.350 [2024-04-15 18:18:46.077961] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.350 qpair failed and we were unable to recover it. 00:31:57.350 [2024-04-15 18:18:46.078241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.350 [2024-04-15 18:18:46.078532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.350 [2024-04-15 18:18:46.078580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.350 qpair failed and we were unable to recover it. 00:31:57.350 [2024-04-15 18:18:46.078758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.350 [2024-04-15 18:18:46.078965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.350 [2024-04-15 18:18:46.079005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.350 qpair failed and we were unable to recover it. 00:31:57.350 [2024-04-15 18:18:46.079296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.350 [2024-04-15 18:18:46.079505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.350 [2024-04-15 18:18:46.079558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.350 qpair failed and we were unable to recover it. 00:31:57.350 [2024-04-15 18:18:46.079736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.350 [2024-04-15 18:18:46.079929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.350 [2024-04-15 18:18:46.079958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.350 qpair failed and we were unable to recover it. 
00:31:57.350 [2024-04-15 18:18:46.080215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.350 [2024-04-15 18:18:46.080458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.350 [2024-04-15 18:18:46.080509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:57.350 qpair failed and we were unable to recover it.
[... the same three-message group (two posix_sock_create connect() failures with errno = 111, one nvme_tcp_qpair_connect_sock sock connection error for tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it.") repeats continuously from 18:18:46.080 through 18:18:46.167; the duplicate groups are elided here ...]
00:31:57.358 [2024-04-15 18:18:46.166670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.358 [2024-04-15 18:18:46.166977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.358 [2024-04-15 18:18:46.167025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:57.358 qpair failed and we were unable to recover it.
00:31:57.358 [2024-04-15 18:18:46.167277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.358 [2024-04-15 18:18:46.167490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.358 [2024-04-15 18:18:46.167542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.358 qpair failed and we were unable to recover it. 00:31:57.358 [2024-04-15 18:18:46.167765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.358 [2024-04-15 18:18:46.168034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.358 [2024-04-15 18:18:46.168099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.358 qpair failed and we were unable to recover it. 00:31:57.358 [2024-04-15 18:18:46.168332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.358 [2024-04-15 18:18:46.168605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.358 [2024-04-15 18:18:46.168653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.358 qpair failed and we were unable to recover it. 00:31:57.358 [2024-04-15 18:18:46.168819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.358 [2024-04-15 18:18:46.169016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.358 [2024-04-15 18:18:46.169045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.358 qpair failed and we were unable to recover it. 00:31:57.358 [2024-04-15 18:18:46.169213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.358 [2024-04-15 18:18:46.169400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.358 [2024-04-15 18:18:46.169450] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.358 qpair failed and we were unable to recover it. 00:31:57.358 [2024-04-15 18:18:46.169675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.358 [2024-04-15 18:18:46.169879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.358 [2024-04-15 18:18:46.169928] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.358 qpair failed and we were unable to recover it. 00:31:57.358 [2024-04-15 18:18:46.170219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.358 [2024-04-15 18:18:46.170457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.358 [2024-04-15 18:18:46.170508] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.358 qpair failed and we were unable to recover it. 
00:31:57.358 [2024-04-15 18:18:46.170731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.358 [2024-04-15 18:18:46.171017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.358 [2024-04-15 18:18:46.171047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.358 qpair failed and we were unable to recover it. 00:31:57.358 [2024-04-15 18:18:46.171260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.358 [2024-04-15 18:18:46.171483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.358 [2024-04-15 18:18:46.171533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.358 qpair failed and we were unable to recover it. 00:31:57.358 [2024-04-15 18:18:46.171783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.358 [2024-04-15 18:18:46.171945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.358 [2024-04-15 18:18:46.171974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.358 qpair failed and we were unable to recover it. 00:31:57.358 [2024-04-15 18:18:46.172139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.358 [2024-04-15 18:18:46.172276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.358 [2024-04-15 18:18:46.172305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.358 qpair failed and we were unable to recover it. 00:31:57.358 [2024-04-15 18:18:46.172502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.358 [2024-04-15 18:18:46.172690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.358 [2024-04-15 18:18:46.172753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.358 qpair failed and we were unable to recover it. 00:31:57.358 [2024-04-15 18:18:46.172937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.358 [2024-04-15 18:18:46.173088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.358 [2024-04-15 18:18:46.173118] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.358 qpair failed and we were unable to recover it. 00:31:57.358 [2024-04-15 18:18:46.173309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.358 [2024-04-15 18:18:46.173528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.358 [2024-04-15 18:18:46.173581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.358 qpair failed and we were unable to recover it. 
00:31:57.358 [2024-04-15 18:18:46.173772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.358 [2024-04-15 18:18:46.174031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.358 [2024-04-15 18:18:46.174070] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.358 qpair failed and we were unable to recover it. 00:31:57.358 [2024-04-15 18:18:46.174292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.358 [2024-04-15 18:18:46.174480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.358 [2024-04-15 18:18:46.174532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.358 qpair failed and we were unable to recover it. 00:31:57.358 [2024-04-15 18:18:46.174753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.358 [2024-04-15 18:18:46.174923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.359 [2024-04-15 18:18:46.174951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.359 qpair failed and we were unable to recover it. 00:31:57.359 [2024-04-15 18:18:46.175274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.359 [2024-04-15 18:18:46.175525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.359 [2024-04-15 18:18:46.175574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.359 qpair failed and we were unable to recover it. 00:31:57.359 [2024-04-15 18:18:46.175826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.359 [2024-04-15 18:18:46.176054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.359 [2024-04-15 18:18:46.176090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.359 qpair failed and we were unable to recover it. 00:31:57.359 [2024-04-15 18:18:46.176278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.359 [2024-04-15 18:18:46.176481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.359 [2024-04-15 18:18:46.176510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.359 qpair failed and we were unable to recover it. 00:31:57.359 [2024-04-15 18:18:46.176790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.359 [2024-04-15 18:18:46.177055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.359 [2024-04-15 18:18:46.177098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.359 qpair failed and we were unable to recover it. 
00:31:57.359 [2024-04-15 18:18:46.177345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.359 [2024-04-15 18:18:46.177604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.359 [2024-04-15 18:18:46.177651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.359 qpair failed and we were unable to recover it. 00:31:57.359 [2024-04-15 18:18:46.177939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.359 [2024-04-15 18:18:46.178227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.359 [2024-04-15 18:18:46.178258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.359 qpair failed and we were unable to recover it. 00:31:57.359 [2024-04-15 18:18:46.178474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.359 [2024-04-15 18:18:46.178751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.359 [2024-04-15 18:18:46.178803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.359 qpair failed and we were unable to recover it. 00:31:57.359 [2024-04-15 18:18:46.179086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.359 [2024-04-15 18:18:46.179383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.359 [2024-04-15 18:18:46.179428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.359 qpair failed and we were unable to recover it. 00:31:57.359 [2024-04-15 18:18:46.179712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.359 [2024-04-15 18:18:46.179982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.359 [2024-04-15 18:18:46.180034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.359 qpair failed and we were unable to recover it. 00:31:57.359 [2024-04-15 18:18:46.180353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.359 [2024-04-15 18:18:46.180616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.359 [2024-04-15 18:18:46.180669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.359 qpair failed and we were unable to recover it. 00:31:57.359 [2024-04-15 18:18:46.180940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.359 [2024-04-15 18:18:46.181159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.359 [2024-04-15 18:18:46.181200] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.359 qpair failed and we were unable to recover it. 
00:31:57.359 [2024-04-15 18:18:46.181431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.359 [2024-04-15 18:18:46.181618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.359 [2024-04-15 18:18:46.181679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.359 qpair failed and we were unable to recover it. 00:31:57.359 [2024-04-15 18:18:46.181853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.359 [2024-04-15 18:18:46.182038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.359 [2024-04-15 18:18:46.182078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.359 qpair failed and we were unable to recover it. 00:31:57.359 [2024-04-15 18:18:46.182449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.359 [2024-04-15 18:18:46.182741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.359 [2024-04-15 18:18:46.182796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.359 qpair failed and we were unable to recover it. 00:31:57.359 [2024-04-15 18:18:46.183089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.359 [2024-04-15 18:18:46.183412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.359 [2024-04-15 18:18:46.183458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.359 qpair failed and we were unable to recover it. 00:31:57.359 [2024-04-15 18:18:46.183758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.359 [2024-04-15 18:18:46.183988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.359 [2024-04-15 18:18:46.184018] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.359 qpair failed and we were unable to recover it. 00:31:57.359 [2024-04-15 18:18:46.184326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.359 [2024-04-15 18:18:46.184613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.359 [2024-04-15 18:18:46.184666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.359 qpair failed and we were unable to recover it. 00:31:57.359 [2024-04-15 18:18:46.184947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.359 [2024-04-15 18:18:46.185130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.359 [2024-04-15 18:18:46.185160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.359 qpair failed and we were unable to recover it. 
00:31:57.359 [2024-04-15 18:18:46.185351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.359 [2024-04-15 18:18:46.185578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.359 [2024-04-15 18:18:46.185628] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.359 qpair failed and we were unable to recover it. 00:31:57.359 [2024-04-15 18:18:46.185846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.359 [2024-04-15 18:18:46.185987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.359 [2024-04-15 18:18:46.186017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.360 qpair failed and we were unable to recover it. 00:31:57.360 [2024-04-15 18:18:46.186232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.360 [2024-04-15 18:18:46.186401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.360 [2024-04-15 18:18:46.186446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.360 qpair failed and we were unable to recover it. 00:31:57.360 [2024-04-15 18:18:46.186618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.360 [2024-04-15 18:18:46.186840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.360 [2024-04-15 18:18:46.186891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.360 qpair failed and we were unable to recover it. 00:31:57.360 [2024-04-15 18:18:46.187144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.360 [2024-04-15 18:18:46.187338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.360 [2024-04-15 18:18:46.187387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.360 qpair failed and we were unable to recover it. 00:31:57.360 [2024-04-15 18:18:46.187689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.360 [2024-04-15 18:18:46.188042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.360 [2024-04-15 18:18:46.188119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.360 qpair failed and we were unable to recover it. 00:31:57.360 [2024-04-15 18:18:46.188470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.360 [2024-04-15 18:18:46.188805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.360 [2024-04-15 18:18:46.188857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.360 qpair failed and we were unable to recover it. 
00:31:57.360 [2024-04-15 18:18:46.189183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.360 [2024-04-15 18:18:46.189562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.360 [2024-04-15 18:18:46.189608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.360 qpair failed and we were unable to recover it. 00:31:57.360 [2024-04-15 18:18:46.189920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.360 [2024-04-15 18:18:46.190231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.360 [2024-04-15 18:18:46.190292] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.360 qpair failed and we were unable to recover it. 00:31:57.360 [2024-04-15 18:18:46.190604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.360 [2024-04-15 18:18:46.190889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.360 [2024-04-15 18:18:46.190940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.360 qpair failed and we were unable to recover it. 00:31:57.360 [2024-04-15 18:18:46.191109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.360 [2024-04-15 18:18:46.191374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.360 [2024-04-15 18:18:46.191426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.360 qpair failed and we were unable to recover it. 00:31:57.360 [2024-04-15 18:18:46.191760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.360 [2024-04-15 18:18:46.192005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.360 [2024-04-15 18:18:46.192034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.360 qpair failed and we were unable to recover it. 00:31:57.360 [2024-04-15 18:18:46.192223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.360 [2024-04-15 18:18:46.192376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.360 [2024-04-15 18:18:46.192437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.360 qpair failed and we were unable to recover it. 00:31:57.360 [2024-04-15 18:18:46.192673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.360 [2024-04-15 18:18:46.192949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.360 [2024-04-15 18:18:46.192997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.360 qpair failed and we were unable to recover it. 
00:31:57.360 [2024-04-15 18:18:46.193301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.360 [2024-04-15 18:18:46.193588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.360 [2024-04-15 18:18:46.193639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.360 qpair failed and we were unable to recover it. 00:31:57.360 [2024-04-15 18:18:46.193825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.360 [2024-04-15 18:18:46.194164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.360 [2024-04-15 18:18:46.194195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.360 qpair failed and we were unable to recover it. 00:31:57.360 [2024-04-15 18:18:46.194541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.360 [2024-04-15 18:18:46.194878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.360 [2024-04-15 18:18:46.194929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.360 qpair failed and we were unable to recover it. 00:31:57.360 [2024-04-15 18:18:46.195262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.360 [2024-04-15 18:18:46.195508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.360 [2024-04-15 18:18:46.195559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.360 qpair failed and we were unable to recover it. 00:31:57.360 [2024-04-15 18:18:46.195764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.360 [2024-04-15 18:18:46.195974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.360 [2024-04-15 18:18:46.196003] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.360 qpair failed and we were unable to recover it. 00:31:57.360 [2024-04-15 18:18:46.196398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.360 [2024-04-15 18:18:46.196684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.360 [2024-04-15 18:18:46.196736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.360 qpair failed and we were unable to recover it. 00:31:57.360 [2024-04-15 18:18:46.196935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.360 [2024-04-15 18:18:46.197117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.360 [2024-04-15 18:18:46.197148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.360 qpair failed and we were unable to recover it. 
00:31:57.360 [2024-04-15 18:18:46.197434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.360 [2024-04-15 18:18:46.197653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.360 [2024-04-15 18:18:46.197701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.360 qpair failed and we were unable to recover it. 00:31:57.360 [2024-04-15 18:18:46.197889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.361 [2024-04-15 18:18:46.198047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.361 [2024-04-15 18:18:46.198087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.361 qpair failed and we were unable to recover it. 00:31:57.361 [2024-04-15 18:18:46.198242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.361 [2024-04-15 18:18:46.198448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.361 [2024-04-15 18:18:46.198477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.361 qpair failed and we were unable to recover it. 00:31:57.361 [2024-04-15 18:18:46.198635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.361 [2024-04-15 18:18:46.198805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.361 [2024-04-15 18:18:46.198835] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.361 qpair failed and we were unable to recover it. 00:31:57.361 [2024-04-15 18:18:46.199002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.361 [2024-04-15 18:18:46.199196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.361 [2024-04-15 18:18:46.199226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.361 qpair failed and we were unable to recover it. 00:31:57.361 [2024-04-15 18:18:46.199378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.361 [2024-04-15 18:18:46.199560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.361 [2024-04-15 18:18:46.199589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.361 qpair failed and we were unable to recover it. 00:31:57.361 [2024-04-15 18:18:46.199748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.361 [2024-04-15 18:18:46.199963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.361 [2024-04-15 18:18:46.199993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.361 qpair failed and we were unable to recover it. 
00:31:57.361 [2024-04-15 18:18:46.200165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.361 [2024-04-15 18:18:46.200345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.361 [2024-04-15 18:18:46.200375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.361 qpair failed and we were unable to recover it. 00:31:57.361 [2024-04-15 18:18:46.200594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.361 [2024-04-15 18:18:46.200884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.361 [2024-04-15 18:18:46.200913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.361 qpair failed and we were unable to recover it. 00:31:57.361 [2024-04-15 18:18:46.201120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.361 [2024-04-15 18:18:46.201277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.361 [2024-04-15 18:18:46.201306] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.361 qpair failed and we were unable to recover it. 00:31:57.361 [2024-04-15 18:18:46.201486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.361 [2024-04-15 18:18:46.201705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.361 [2024-04-15 18:18:46.201755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.361 qpair failed and we were unable to recover it. 00:31:57.361 [2024-04-15 18:18:46.202021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.361 [2024-04-15 18:18:46.202273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.361 [2024-04-15 18:18:46.202303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.361 qpair failed and we were unable to recover it. 00:31:57.361 [2024-04-15 18:18:46.202496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.361 [2024-04-15 18:18:46.202687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.361 [2024-04-15 18:18:46.202738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.361 qpair failed and we were unable to recover it. 00:31:57.361 [2024-04-15 18:18:46.202916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.361 [2024-04-15 18:18:46.203129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.361 [2024-04-15 18:18:46.203160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.361 qpair failed and we were unable to recover it. 
00:31:57.361 [2024-04-15 18:18:46.203349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.361 [2024-04-15 18:18:46.203538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.361 [2024-04-15 18:18:46.203588] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.361 qpair failed and we were unable to recover it. 00:31:57.361 [2024-04-15 18:18:46.203787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.361 [2024-04-15 18:18:46.203968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.361 [2024-04-15 18:18:46.203996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.361 qpair failed and we were unable to recover it. 00:31:57.361 [2024-04-15 18:18:46.204249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.361 [2024-04-15 18:18:46.204501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.361 [2024-04-15 18:18:46.204551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.361 qpair failed and we were unable to recover it. 00:31:57.361 [2024-04-15 18:18:46.204753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.361 [2024-04-15 18:18:46.204918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.361 [2024-04-15 18:18:46.204947] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.361 qpair failed and we were unable to recover it. 00:31:57.361 [2024-04-15 18:18:46.205120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.361 [2024-04-15 18:18:46.205259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.361 [2024-04-15 18:18:46.205300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.361 qpair failed and we were unable to recover it. 00:31:57.361 [2024-04-15 18:18:46.205499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.361 [2024-04-15 18:18:46.205703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.361 [2024-04-15 18:18:46.205756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.361 qpair failed and we were unable to recover it. 00:31:57.361 [2024-04-15 18:18:46.206001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.361 [2024-04-15 18:18:46.206194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.361 [2024-04-15 18:18:46.206224] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.361 qpair failed and we were unable to recover it. 
00:31:57.361 [2024-04-15 18:18:46.206381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.361 [2024-04-15 18:18:46.206571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.361 [2024-04-15 18:18:46.206622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.362 qpair failed and we were unable to recover it. 00:31:57.362 [2024-04-15 18:18:46.206913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.362 [2024-04-15 18:18:46.207126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.362 [2024-04-15 18:18:46.207156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.362 qpair failed and we were unable to recover it. 00:31:57.362 [2024-04-15 18:18:46.207312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.362 [2024-04-15 18:18:46.207536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.362 [2024-04-15 18:18:46.207564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.362 qpair failed and we were unable to recover it. 00:31:57.362 [2024-04-15 18:18:46.207721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.362 [2024-04-15 18:18:46.207915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.362 [2024-04-15 18:18:46.207944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.362 qpair failed and we were unable to recover it. 00:31:57.362 [2024-04-15 18:18:46.208132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.362 [2024-04-15 18:18:46.208303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.362 [2024-04-15 18:18:46.208332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.362 qpair failed and we were unable to recover it. 00:31:57.362 [2024-04-15 18:18:46.208566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.362 [2024-04-15 18:18:46.208737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.362 [2024-04-15 18:18:46.208789] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.362 qpair failed and we were unable to recover it. 00:31:57.362 [2024-04-15 18:18:46.208972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.362 [2024-04-15 18:18:46.209190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.362 [2024-04-15 18:18:46.209221] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.362 qpair failed and we were unable to recover it. 
00:31:57.362 [2024-04-15 18:18:46.209416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.362 [2024-04-15 18:18:46.209579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.362 [2024-04-15 18:18:46.209631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.362 qpair failed and we were unable to recover it. 00:31:57.362 [2024-04-15 18:18:46.209786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.362 [2024-04-15 18:18:46.209918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.362 [2024-04-15 18:18:46.209947] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.362 qpair failed and we were unable to recover it. 00:31:57.362 [2024-04-15 18:18:46.210137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.362 [2024-04-15 18:18:46.210286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.362 [2024-04-15 18:18:46.210315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.362 qpair failed and we were unable to recover it. 00:31:57.362 [2024-04-15 18:18:46.210503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.362 [2024-04-15 18:18:46.210684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.362 [2024-04-15 18:18:46.210713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.362 qpair failed and we were unable to recover it. 00:31:57.362 [2024-04-15 18:18:46.210930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.362 [2024-04-15 18:18:46.211131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.362 [2024-04-15 18:18:46.211161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.362 qpair failed and we were unable to recover it. 00:31:57.362 [2024-04-15 18:18:46.211325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.362 [2024-04-15 18:18:46.211588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.362 [2024-04-15 18:18:46.211638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.362 qpair failed and we were unable to recover it. 00:31:57.362 [2024-04-15 18:18:46.211840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.362 [2024-04-15 18:18:46.212049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.362 [2024-04-15 18:18:46.212086] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.362 qpair failed and we were unable to recover it. 
00:31:57.362 [2024-04-15 18:18:46.212259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.362 [2024-04-15 18:18:46.212489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.362 [2024-04-15 18:18:46.212540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.362 qpair failed and we were unable to recover it. 00:31:57.362 [2024-04-15 18:18:46.212771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.362 [2024-04-15 18:18:46.212993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.362 [2024-04-15 18:18:46.213023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.362 qpair failed and we were unable to recover it. 00:31:57.362 [2024-04-15 18:18:46.213199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.362 [2024-04-15 18:18:46.213374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.362 [2024-04-15 18:18:46.213434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.362 qpair failed and we were unable to recover it. 00:31:57.362 [2024-04-15 18:18:46.213592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.362 [2024-04-15 18:18:46.213781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.362 [2024-04-15 18:18:46.213834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.362 qpair failed and we were unable to recover it. 00:31:57.362 [2024-04-15 18:18:46.214038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.362 [2024-04-15 18:18:46.214256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.362 [2024-04-15 18:18:46.214302] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.362 qpair failed and we were unable to recover it. 00:31:57.362 [2024-04-15 18:18:46.214505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.362 [2024-04-15 18:18:46.214752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.362 [2024-04-15 18:18:46.214803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.362 qpair failed and we were unable to recover it. 00:31:57.362 [2024-04-15 18:18:46.215012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.362 [2024-04-15 18:18:46.215243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.362 [2024-04-15 18:18:46.215293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.362 qpair failed and we were unable to recover it. 
00:31:57.362 [2024-04-15 18:18:46.215537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.362 [2024-04-15 18:18:46.215825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.363 [2024-04-15 18:18:46.215875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:57.363 qpair failed and we were unable to recover it.
00:31:57.363 - 00:31:57.647 [2024-04-15 18:18:46.216032 - 18:18:46.286937] (the same four-record sequence - two "posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111" records, one "nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420" record, and "qpair failed and we were unable to recover it." - repeats identically for every subsequent connect attempt in this window)
00:31:57.647 [2024-04-15 18:18:46.287087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.647 [2024-04-15 18:18:46.287224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.647 [2024-04-15 18:18:46.287254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.647 qpair failed and we were unable to recover it. 00:31:57.647 [2024-04-15 18:18:46.287427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.647 [2024-04-15 18:18:46.287590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.647 [2024-04-15 18:18:46.287620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.647 qpair failed and we were unable to recover it. 00:31:57.647 [2024-04-15 18:18:46.287783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.647 [2024-04-15 18:18:46.287946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.647 [2024-04-15 18:18:46.287975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.647 qpair failed and we were unable to recover it. 00:31:57.647 [2024-04-15 18:18:46.288178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.647 [2024-04-15 18:18:46.288326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.647 [2024-04-15 18:18:46.288356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.647 qpair failed and we were unable to recover it. 00:31:57.647 [2024-04-15 18:18:46.288501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.647 [2024-04-15 18:18:46.288671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.647 [2024-04-15 18:18:46.288701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.647 qpair failed and we were unable to recover it. 00:31:57.647 [2024-04-15 18:18:46.288866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.647 [2024-04-15 18:18:46.289029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.647 [2024-04-15 18:18:46.289071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.647 qpair failed and we were unable to recover it. 00:31:57.647 [2024-04-15 18:18:46.289209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.647 [2024-04-15 18:18:46.289365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.647 [2024-04-15 18:18:46.289394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.647 qpair failed and we were unable to recover it. 
00:31:57.647 [2024-04-15 18:18:46.289586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.647 [2024-04-15 18:18:46.289729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.647 [2024-04-15 18:18:46.289761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.647 qpair failed and we were unable to recover it. 00:31:57.647 [2024-04-15 18:18:46.289956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.647 [2024-04-15 18:18:46.290143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.647 [2024-04-15 18:18:46.290174] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.647 qpair failed and we were unable to recover it. 00:31:57.647 [2024-04-15 18:18:46.290321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.647 [2024-04-15 18:18:46.290482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.647 [2024-04-15 18:18:46.290522] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.647 qpair failed and we were unable to recover it. 00:31:57.647 [2024-04-15 18:18:46.290704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.647 [2024-04-15 18:18:46.290844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.647 [2024-04-15 18:18:46.290873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.647 qpair failed and we were unable to recover it. 00:31:57.647 [2024-04-15 18:18:46.291007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.647 [2024-04-15 18:18:46.291190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.647 [2024-04-15 18:18:46.291220] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.647 qpair failed and we were unable to recover it. 00:31:57.647 [2024-04-15 18:18:46.291387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.647 [2024-04-15 18:18:46.291556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.647 [2024-04-15 18:18:46.291588] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.647 qpair failed and we were unable to recover it. 00:31:57.647 [2024-04-15 18:18:46.291766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.647 [2024-04-15 18:18:46.291934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.647 [2024-04-15 18:18:46.291963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.648 qpair failed and we were unable to recover it. 
00:31:57.648 [2024-04-15 18:18:46.292125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.648 [2024-04-15 18:18:46.292292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.648 [2024-04-15 18:18:46.292322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.648 qpair failed and we were unable to recover it. 00:31:57.648 [2024-04-15 18:18:46.292512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.648 [2024-04-15 18:18:46.292665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.648 [2024-04-15 18:18:46.292694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.648 qpair failed and we were unable to recover it. 00:31:57.648 [2024-04-15 18:18:46.292848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.648 [2024-04-15 18:18:46.293042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.648 [2024-04-15 18:18:46.293080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.648 qpair failed and we were unable to recover it. 00:31:57.648 [2024-04-15 18:18:46.293227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.648 [2024-04-15 18:18:46.293367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.648 [2024-04-15 18:18:46.293396] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.648 qpair failed and we were unable to recover it. 00:31:57.648 [2024-04-15 18:18:46.293566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.648 [2024-04-15 18:18:46.293705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.648 [2024-04-15 18:18:46.293734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.648 qpair failed and we were unable to recover it. 00:31:57.648 [2024-04-15 18:18:46.293926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.648 [2024-04-15 18:18:46.294086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.648 [2024-04-15 18:18:46.294117] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.648 qpair failed and we were unable to recover it. 00:31:57.648 [2024-04-15 18:18:46.294284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.648 [2024-04-15 18:18:46.294452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.648 [2024-04-15 18:18:46.294482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.648 qpair failed and we were unable to recover it. 
00:31:57.648 [2024-04-15 18:18:46.294679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.648 [2024-04-15 18:18:46.294830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.648 [2024-04-15 18:18:46.294860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.648 qpair failed and we were unable to recover it. 00:31:57.648 [2024-04-15 18:18:46.295051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.648 [2024-04-15 18:18:46.295243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.648 [2024-04-15 18:18:46.295273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.648 qpair failed and we were unable to recover it. 00:31:57.648 [2024-04-15 18:18:46.295437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.648 [2024-04-15 18:18:46.295607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.648 [2024-04-15 18:18:46.295637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.648 qpair failed and we were unable to recover it. 00:31:57.648 [2024-04-15 18:18:46.295799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.648 [2024-04-15 18:18:46.295966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.648 [2024-04-15 18:18:46.295996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.648 qpair failed and we were unable to recover it. 00:31:57.648 [2024-04-15 18:18:46.296189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.648 [2024-04-15 18:18:46.296353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.648 [2024-04-15 18:18:46.296382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.648 qpair failed and we were unable to recover it. 00:31:57.648 [2024-04-15 18:18:46.296544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.648 [2024-04-15 18:18:46.296710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.648 [2024-04-15 18:18:46.296739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.648 qpair failed and we were unable to recover it. 00:31:57.648 [2024-04-15 18:18:46.296929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.648 [2024-04-15 18:18:46.297073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.648 [2024-04-15 18:18:46.297103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.648 qpair failed and we were unable to recover it. 
00:31:57.648 [2024-04-15 18:18:46.297270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.648 [2024-04-15 18:18:46.297430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.648 [2024-04-15 18:18:46.297459] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.648 qpair failed and we were unable to recover it. 00:31:57.648 [2024-04-15 18:18:46.297635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.648 [2024-04-15 18:18:46.297813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.648 [2024-04-15 18:18:46.297845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.648 qpair failed and we were unable to recover it. 00:31:57.648 [2024-04-15 18:18:46.298014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.648 [2024-04-15 18:18:46.301184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.648 [2024-04-15 18:18:46.301222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.648 qpair failed and we were unable to recover it. 00:31:57.648 [2024-04-15 18:18:46.301387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.648 [2024-04-15 18:18:46.301596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.648 [2024-04-15 18:18:46.301627] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.648 qpair failed and we were unable to recover it. 00:31:57.648 [2024-04-15 18:18:46.301794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.648 [2024-04-15 18:18:46.301976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.648 [2024-04-15 18:18:46.302006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.648 qpair failed and we were unable to recover it. 00:31:57.648 [2024-04-15 18:18:46.302164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.648 [2024-04-15 18:18:46.302341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.648 [2024-04-15 18:18:46.302371] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.648 qpair failed and we were unable to recover it. 00:31:57.648 [2024-04-15 18:18:46.302535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.648 [2024-04-15 18:18:46.302706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.648 [2024-04-15 18:18:46.302736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.648 qpair failed and we were unable to recover it. 
00:31:57.648 [2024-04-15 18:18:46.302906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.648 [2024-04-15 18:18:46.303045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.648 [2024-04-15 18:18:46.303086] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.648 qpair failed and we were unable to recover it. 00:31:57.648 [2024-04-15 18:18:46.303252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.648 [2024-04-15 18:18:46.303421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.648 [2024-04-15 18:18:46.303451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.648 qpair failed and we were unable to recover it. 00:31:57.648 [2024-04-15 18:18:46.303613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.648 [2024-04-15 18:18:46.303808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.648 [2024-04-15 18:18:46.303837] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.648 qpair failed and we were unable to recover it. 00:31:57.648 [2024-04-15 18:18:46.303984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.648 [2024-04-15 18:18:46.304152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.648 [2024-04-15 18:18:46.304183] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.648 qpair failed and we were unable to recover it. 00:31:57.648 [2024-04-15 18:18:46.304355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.648 [2024-04-15 18:18:46.304548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.648 [2024-04-15 18:18:46.304580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.648 qpair failed and we were unable to recover it. 00:31:57.648 [2024-04-15 18:18:46.304753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.648 [2024-04-15 18:18:46.304927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.648 [2024-04-15 18:18:46.304959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.648 qpair failed and we were unable to recover it. 00:31:57.648 [2024-04-15 18:18:46.305127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.648 [2024-04-15 18:18:46.305295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.648 [2024-04-15 18:18:46.305325] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.648 qpair failed and we were unable to recover it. 
00:31:57.648 [2024-04-15 18:18:46.305496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.648 [2024-04-15 18:18:46.305630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.649 [2024-04-15 18:18:46.305658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.649 qpair failed and we were unable to recover it. 00:31:57.649 [2024-04-15 18:18:46.305902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.649 [2024-04-15 18:18:46.306077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.649 [2024-04-15 18:18:46.306107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.649 qpair failed and we were unable to recover it. 00:31:57.649 [2024-04-15 18:18:46.306263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.649 [2024-04-15 18:18:46.306451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.649 [2024-04-15 18:18:46.306482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.649 qpair failed and we were unable to recover it. 00:31:57.649 [2024-04-15 18:18:46.306649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.649 [2024-04-15 18:18:46.306794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.649 [2024-04-15 18:18:46.306824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.649 qpair failed and we were unable to recover it. 00:31:57.649 [2024-04-15 18:18:46.306985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.649 [2024-04-15 18:18:46.307154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.649 [2024-04-15 18:18:46.307185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.649 qpair failed and we were unable to recover it. 00:31:57.649 [2024-04-15 18:18:46.307342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.649 [2024-04-15 18:18:46.307513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.649 [2024-04-15 18:18:46.307545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.649 qpair failed and we were unable to recover it. 00:31:57.649 [2024-04-15 18:18:46.307708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.649 [2024-04-15 18:18:46.307879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.649 [2024-04-15 18:18:46.307909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.649 qpair failed and we were unable to recover it. 
00:31:57.649 [2024-04-15 18:18:46.308101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.649 [2024-04-15 18:18:46.308265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.649 [2024-04-15 18:18:46.308295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.649 qpair failed and we were unable to recover it. 00:31:57.649 [2024-04-15 18:18:46.308537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.649 [2024-04-15 18:18:46.308679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.649 [2024-04-15 18:18:46.308709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.649 qpair failed and we were unable to recover it. 00:31:57.649 [2024-04-15 18:18:46.308874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.649 [2024-04-15 18:18:46.309036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.649 [2024-04-15 18:18:46.309093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.649 qpair failed and we were unable to recover it. 00:31:57.649 [2024-04-15 18:18:46.309239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.649 [2024-04-15 18:18:46.309434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.649 [2024-04-15 18:18:46.309465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.649 qpair failed and we were unable to recover it. 00:31:57.649 [2024-04-15 18:18:46.309651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.649 [2024-04-15 18:18:46.309840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.649 [2024-04-15 18:18:46.309870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.649 qpair failed and we were unable to recover it. 00:31:57.649 [2024-04-15 18:18:46.310007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.649 [2024-04-15 18:18:46.310166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.649 [2024-04-15 18:18:46.310196] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.649 qpair failed and we were unable to recover it. 00:31:57.649 [2024-04-15 18:18:46.310361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.649 [2024-04-15 18:18:46.310531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.649 [2024-04-15 18:18:46.310560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.649 qpair failed and we were unable to recover it. 
00:31:57.649 [2024-04-15 18:18:46.310758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.649 [2024-04-15 18:18:46.310944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.649 [2024-04-15 18:18:46.310974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.649 qpair failed and we were unable to recover it. 00:31:57.649 [2024-04-15 18:18:46.311134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.649 [2024-04-15 18:18:46.311303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.649 [2024-04-15 18:18:46.311332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.649 qpair failed and we were unable to recover it. 00:31:57.649 [2024-04-15 18:18:46.311469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.649 [2024-04-15 18:18:46.311648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.649 [2024-04-15 18:18:46.311677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.649 qpair failed and we were unable to recover it. 00:31:57.649 [2024-04-15 18:18:46.311859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.649 [2024-04-15 18:18:46.312050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.649 [2024-04-15 18:18:46.312090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.649 qpair failed and we were unable to recover it. 00:31:57.649 [2024-04-15 18:18:46.312232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.649 [2024-04-15 18:18:46.312406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.649 [2024-04-15 18:18:46.312438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.649 qpair failed and we were unable to recover it. 00:31:57.649 [2024-04-15 18:18:46.312632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.649 [2024-04-15 18:18:46.312811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.649 [2024-04-15 18:18:46.312846] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.649 qpair failed and we were unable to recover it. 00:31:57.649 [2024-04-15 18:18:46.313023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.649 [2024-04-15 18:18:46.313206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.649 [2024-04-15 18:18:46.313235] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.649 qpair failed and we were unable to recover it. 
00:31:57.649 [2024-04-15 18:18:46.313428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.649 [2024-04-15 18:18:46.313586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.649 [2024-04-15 18:18:46.313616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.649 qpair failed and we were unable to recover it. 00:31:57.649 [2024-04-15 18:18:46.313788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.649 [2024-04-15 18:18:46.313963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.649 [2024-04-15 18:18:46.313992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.649 qpair failed and we were unable to recover it. 00:31:57.649 [2024-04-15 18:18:46.314156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.649 [2024-04-15 18:18:46.314340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.649 [2024-04-15 18:18:46.314370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.649 qpair failed and we were unable to recover it. 00:31:57.649 [2024-04-15 18:18:46.314560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.649 [2024-04-15 18:18:46.314741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.649 [2024-04-15 18:18:46.314770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.649 qpair failed and we were unable to recover it. 00:31:57.649 [2024-04-15 18:18:46.314965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.649 [2024-04-15 18:18:46.315128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.649 [2024-04-15 18:18:46.315157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.649 qpair failed and we were unable to recover it. 00:31:57.649 [2024-04-15 18:18:46.315322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.649 [2024-04-15 18:18:46.315520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.649 [2024-04-15 18:18:46.315560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.649 qpair failed and we were unable to recover it. 00:31:57.649 [2024-04-15 18:18:46.315755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.649 [2024-04-15 18:18:46.315898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.649 [2024-04-15 18:18:46.315927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.649 qpair failed and we were unable to recover it. 
00:31:57.649 [2024-04-15 18:18:46.316067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.649 [2024-04-15 18:18:46.316238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.649 [2024-04-15 18:18:46.316267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.649 qpair failed and we were unable to recover it. 00:31:57.649 [2024-04-15 18:18:46.316437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.650 [2024-04-15 18:18:46.316630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.650 [2024-04-15 18:18:46.316663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.650 qpair failed and we were unable to recover it. 00:31:57.650 [2024-04-15 18:18:46.316844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.650 [2024-04-15 18:18:46.317008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.650 [2024-04-15 18:18:46.317037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.650 qpair failed and we were unable to recover it. 00:31:57.650 [2024-04-15 18:18:46.317228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.650 [2024-04-15 18:18:46.317370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.650 [2024-04-15 18:18:46.317399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.650 qpair failed and we were unable to recover it. 00:31:57.650 [2024-04-15 18:18:46.317594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.650 [2024-04-15 18:18:46.317805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.650 [2024-04-15 18:18:46.317834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.650 qpair failed and we were unable to recover it. 00:31:57.650 [2024-04-15 18:18:46.318016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.650 [2024-04-15 18:18:46.318233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.650 [2024-04-15 18:18:46.318266] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.650 qpair failed and we were unable to recover it. 00:31:57.650 [2024-04-15 18:18:46.318493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.650 [2024-04-15 18:18:46.318684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.650 [2024-04-15 18:18:46.318712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.650 qpair failed and we were unable to recover it. 
00:31:57.650 [2024-04-15 18:18:46.318859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.650 [2024-04-15 18:18:46.319053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.650 [2024-04-15 18:18:46.319092] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.650 qpair failed and we were unable to recover it. 00:31:57.650 [2024-04-15 18:18:46.319258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.650 [2024-04-15 18:18:46.319424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.650 [2024-04-15 18:18:46.319453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.650 qpair failed and we were unable to recover it. 00:31:57.650 [2024-04-15 18:18:46.319668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.650 [2024-04-15 18:18:46.319848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.650 [2024-04-15 18:18:46.319880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.650 qpair failed and we were unable to recover it. 00:31:57.650 [2024-04-15 18:18:46.320036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.650 [2024-04-15 18:18:46.320211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.650 [2024-04-15 18:18:46.320241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.650 qpair failed and we were unable to recover it. 00:31:57.650 [2024-04-15 18:18:46.320462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.650 [2024-04-15 18:18:46.320637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.650 [2024-04-15 18:18:46.320667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.650 qpair failed and we were unable to recover it. 00:31:57.650 [2024-04-15 18:18:46.320828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.650 [2024-04-15 18:18:46.321036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.650 [2024-04-15 18:18:46.321104] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.650 qpair failed and we were unable to recover it. 00:31:57.650 [2024-04-15 18:18:46.321255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.650 [2024-04-15 18:18:46.321400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.650 [2024-04-15 18:18:46.321430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.650 qpair failed and we were unable to recover it. 
00:31:57.650 [2024-04-15 18:18:46.321639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.650 [2024-04-15 18:18:46.321806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.650 [2024-04-15 18:18:46.321836] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.650 qpair failed and we were unable to recover it. 00:31:57.650 [2024-04-15 18:18:46.321982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.650 [2024-04-15 18:18:46.322145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.650 [2024-04-15 18:18:46.322176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.650 qpair failed and we were unable to recover it. 00:31:57.650 [2024-04-15 18:18:46.322347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.650 [2024-04-15 18:18:46.322508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.650 [2024-04-15 18:18:46.322537] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.650 qpair failed and we were unable to recover it. 00:31:57.650 [2024-04-15 18:18:46.322729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.650 [2024-04-15 18:18:46.322921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.650 [2024-04-15 18:18:46.322949] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.650 qpair failed and we were unable to recover it. 00:31:57.650 [2024-04-15 18:18:46.323133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.650 [2024-04-15 18:18:46.323274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.650 [2024-04-15 18:18:46.323303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.650 qpair failed and we were unable to recover it. 00:31:57.650 [2024-04-15 18:18:46.323474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.650 [2024-04-15 18:18:46.323657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.650 [2024-04-15 18:18:46.323686] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.650 qpair failed and we were unable to recover it. 00:31:57.650 [2024-04-15 18:18:46.323857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.650 [2024-04-15 18:18:46.324049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.650 [2024-04-15 18:18:46.324087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.650 qpair failed and we were unable to recover it. 
00:31:57.650 [2024-04-15 18:18:46.324294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.650 [2024-04-15 18:18:46.324479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.650 [2024-04-15 18:18:46.324508] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.650 qpair failed and we were unable to recover it. 00:31:57.650 [2024-04-15 18:18:46.324709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.650 [2024-04-15 18:18:46.324892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.650 [2024-04-15 18:18:46.324922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.650 qpair failed and we were unable to recover it. 00:31:57.650 [2024-04-15 18:18:46.325094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.650 [2024-04-15 18:18:46.325241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.650 [2024-04-15 18:18:46.325271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.650 qpair failed and we were unable to recover it. 00:31:57.650 [2024-04-15 18:18:46.325414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.650 [2024-04-15 18:18:46.325552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.650 [2024-04-15 18:18:46.325581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.651 qpair failed and we were unable to recover it. 00:31:57.651 [2024-04-15 18:18:46.325745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.651 [2024-04-15 18:18:46.325925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.651 [2024-04-15 18:18:46.325954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.651 qpair failed and we were unable to recover it. 00:31:57.651 [2024-04-15 18:18:46.326115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.651 [2024-04-15 18:18:46.326282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.651 [2024-04-15 18:18:46.326312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.651 qpair failed and we were unable to recover it. 00:31:57.651 [2024-04-15 18:18:46.326445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.651 [2024-04-15 18:18:46.326679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.651 [2024-04-15 18:18:46.326709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.651 qpair failed and we were unable to recover it. 
00:31:57.651 [2024-04-15 18:18:46.326936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.651 [2024-04-15 18:18:46.327117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.651 [2024-04-15 18:18:46.327147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.651 qpair failed and we were unable to recover it. 00:31:57.651 [2024-04-15 18:18:46.327315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.651 [2024-04-15 18:18:46.327512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.651 [2024-04-15 18:18:46.327541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.651 qpair failed and we were unable to recover it. 00:31:57.651 [2024-04-15 18:18:46.327749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.651 [2024-04-15 18:18:46.327908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.651 [2024-04-15 18:18:46.327937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.651 qpair failed and we were unable to recover it. 00:31:57.651 [2024-04-15 18:18:46.328146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.651 [2024-04-15 18:18:46.328294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.651 [2024-04-15 18:18:46.328325] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.651 qpair failed and we were unable to recover it. 00:31:57.651 [2024-04-15 18:18:46.328474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.651 [2024-04-15 18:18:46.328643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.651 [2024-04-15 18:18:46.328673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.651 qpair failed and we were unable to recover it. 00:31:57.651 [2024-04-15 18:18:46.328859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.651 [2024-04-15 18:18:46.329028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.651 [2024-04-15 18:18:46.329077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.651 qpair failed and we were unable to recover it. 00:31:57.651 [2024-04-15 18:18:46.329231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.651 [2024-04-15 18:18:46.329396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.651 [2024-04-15 18:18:46.329425] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.651 qpair failed and we were unable to recover it. 
00:31:57.651 [2024-04-15 18:18:46.329582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.651 [2024-04-15 18:18:46.329750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.651 [2024-04-15 18:18:46.329778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.651 qpair failed and we were unable to recover it. 00:31:57.651 [2024-04-15 18:18:46.329929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.651 [2024-04-15 18:18:46.330093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.651 [2024-04-15 18:18:46.330123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.651 qpair failed and we were unable to recover it. 00:31:57.651 [2024-04-15 18:18:46.330263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.651 [2024-04-15 18:18:46.330433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.651 [2024-04-15 18:18:46.330465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.651 qpair failed and we were unable to recover it. 00:31:57.651 [2024-04-15 18:18:46.330638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.651 [2024-04-15 18:18:46.330832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.651 [2024-04-15 18:18:46.330871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.651 qpair failed and we were unable to recover it. 00:31:57.651 [2024-04-15 18:18:46.331051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.651 [2024-04-15 18:18:46.331238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.651 [2024-04-15 18:18:46.331267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.651 qpair failed and we were unable to recover it. 00:31:57.651 [2024-04-15 18:18:46.331459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.651 [2024-04-15 18:18:46.331619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.651 [2024-04-15 18:18:46.331656] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.651 qpair failed and we were unable to recover it. 00:31:57.651 [2024-04-15 18:18:46.331866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.651 [2024-04-15 18:18:46.332046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.651 [2024-04-15 18:18:46.332084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.651 qpair failed and we were unable to recover it. 
00:31:57.651 [2024-04-15 18:18:46.332260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.651 [2024-04-15 18:18:46.332435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.651 [2024-04-15 18:18:46.332464] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.651 qpair failed and we were unable to recover it. 00:31:57.651 [2024-04-15 18:18:46.332661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.651 [2024-04-15 18:18:46.332830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.651 [2024-04-15 18:18:46.332859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.651 qpair failed and we were unable to recover it. 00:31:57.651 [2024-04-15 18:18:46.333055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.651 [2024-04-15 18:18:46.333228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.651 [2024-04-15 18:18:46.333257] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.651 qpair failed and we were unable to recover it. 00:31:57.651 [2024-04-15 18:18:46.333391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.651 [2024-04-15 18:18:46.333520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.651 [2024-04-15 18:18:46.333549] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.651 qpair failed and we were unable to recover it. 00:31:57.651 [2024-04-15 18:18:46.333713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.651 [2024-04-15 18:18:46.333883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.651 [2024-04-15 18:18:46.333912] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.651 qpair failed and we were unable to recover it. 00:31:57.651 [2024-04-15 18:18:46.334101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.651 [2024-04-15 18:18:46.334235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.651 [2024-04-15 18:18:46.334265] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.651 qpair failed and we were unable to recover it. 00:31:57.651 [2024-04-15 18:18:46.334409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.651 [2024-04-15 18:18:46.334603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.651 [2024-04-15 18:18:46.334632] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.651 qpair failed and we were unable to recover it. 
00:31:57.651 [2024-04-15 18:18:46.334809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.651 [2024-04-15 18:18:46.334994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.651 [2024-04-15 18:18:46.335023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.651 qpair failed and we were unable to recover it. 00:31:57.651 [2024-04-15 18:18:46.335226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.651 [2024-04-15 18:18:46.335395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.651 [2024-04-15 18:18:46.335428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.651 qpair failed and we were unable to recover it. 00:31:57.651 [2024-04-15 18:18:46.335569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.651 [2024-04-15 18:18:46.335715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.651 [2024-04-15 18:18:46.335744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.651 qpair failed and we were unable to recover it. 00:31:57.651 [2024-04-15 18:18:46.335892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.651 [2024-04-15 18:18:46.336034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.651 [2024-04-15 18:18:46.336071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.651 qpair failed and we were unable to recover it. 00:31:57.651 [2024-04-15 18:18:46.336266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.651 [2024-04-15 18:18:46.336425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.652 [2024-04-15 18:18:46.336455] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.652 qpair failed and we were unable to recover it. 00:31:57.652 [2024-04-15 18:18:46.336584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.652 [2024-04-15 18:18:46.336746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.652 [2024-04-15 18:18:46.336786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.652 qpair failed and we were unable to recover it. 00:31:57.652 [2024-04-15 18:18:46.336985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.652 [2024-04-15 18:18:46.337142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.652 [2024-04-15 18:18:46.337172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.652 qpair failed and we were unable to recover it. 
00:31:57.652 [2024-04-15 18:18:46.337317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.652 [2024-04-15 18:18:46.337481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.652 [2024-04-15 18:18:46.337510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.652 qpair failed and we were unable to recover it. 00:31:57.652 [2024-04-15 18:18:46.337652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.652 [2024-04-15 18:18:46.337825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.652 [2024-04-15 18:18:46.337854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.652 qpair failed and we were unable to recover it. 00:31:57.652 [2024-04-15 18:18:46.337996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.652 [2024-04-15 18:18:46.338174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.652 [2024-04-15 18:18:46.338204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.652 qpair failed and we were unable to recover it. 00:31:57.652 [2024-04-15 18:18:46.338379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.652 [2024-04-15 18:18:46.338527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.652 [2024-04-15 18:18:46.338557] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.652 qpair failed and we were unable to recover it. 00:31:57.652 [2024-04-15 18:18:46.338725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.652 [2024-04-15 18:18:46.338856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.652 [2024-04-15 18:18:46.338885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.652 qpair failed and we were unable to recover it. 00:31:57.652 [2024-04-15 18:18:46.339072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.652 [2024-04-15 18:18:46.339254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.652 [2024-04-15 18:18:46.339283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.652 qpair failed and we were unable to recover it. 00:31:57.652 [2024-04-15 18:18:46.339462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.652 [2024-04-15 18:18:46.339658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.652 [2024-04-15 18:18:46.339687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.652 qpair failed and we were unable to recover it. 
00:31:57.652 [2024-04-15 18:18:46.339836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.652 [2024-04-15 18:18:46.340053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.652 [2024-04-15 18:18:46.340095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.652 qpair failed and we were unable to recover it. 00:31:57.652 [2024-04-15 18:18:46.340243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.652 [2024-04-15 18:18:46.340384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.652 [2024-04-15 18:18:46.340413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.652 qpair failed and we were unable to recover it. 00:31:57.652 [2024-04-15 18:18:46.340576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.652 [2024-04-15 18:18:46.340801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.652 [2024-04-15 18:18:46.340842] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.652 qpair failed and we were unable to recover it. 00:31:57.652 [2024-04-15 18:18:46.341054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.652 [2024-04-15 18:18:46.341240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.652 [2024-04-15 18:18:46.341269] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.652 qpair failed and we were unable to recover it. 00:31:57.652 [2024-04-15 18:18:46.341461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.652 [2024-04-15 18:18:46.341653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.652 [2024-04-15 18:18:46.341683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.652 qpair failed and we were unable to recover it. 00:31:57.652 [2024-04-15 18:18:46.341879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.652 [2024-04-15 18:18:46.342017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.652 [2024-04-15 18:18:46.342046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.652 qpair failed and we were unable to recover it. 00:31:57.652 [2024-04-15 18:18:46.342254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.652 [2024-04-15 18:18:46.342449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.652 [2024-04-15 18:18:46.342479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.652 qpair failed and we were unable to recover it. 
00:31:57.652 [2024-04-15 18:18:46.342642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.652 [2024-04-15 18:18:46.342809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.652 [2024-04-15 18:18:46.342837] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.652 qpair failed and we were unable to recover it. 00:31:57.652 [2024-04-15 18:18:46.342975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.652 [2024-04-15 18:18:46.343127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.652 [2024-04-15 18:18:46.343157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.652 qpair failed and we were unable to recover it. 00:31:57.652 [2024-04-15 18:18:46.343347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.652 [2024-04-15 18:18:46.343506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.652 [2024-04-15 18:18:46.343536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.652 qpair failed and we were unable to recover it. 00:31:57.652 [2024-04-15 18:18:46.343715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.652 [2024-04-15 18:18:46.343894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.652 [2024-04-15 18:18:46.343923] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.652 qpair failed and we were unable to recover it. 00:31:57.652 [2024-04-15 18:18:46.344122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.652 [2024-04-15 18:18:46.344288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.652 [2024-04-15 18:18:46.344317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.652 qpair failed and we were unable to recover it. 00:31:57.652 [2024-04-15 18:18:46.344539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.652 [2024-04-15 18:18:46.344731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.652 [2024-04-15 18:18:46.344760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.652 qpair failed and we were unable to recover it. 00:31:57.652 [2024-04-15 18:18:46.344938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.652 [2024-04-15 18:18:46.345144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.652 [2024-04-15 18:18:46.345174] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.652 qpair failed and we were unable to recover it. 
00:31:57.652 [2024-04-15 18:18:46.345342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.652 [2024-04-15 18:18:46.345485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.652 [2024-04-15 18:18:46.345514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.652 qpair failed and we were unable to recover it. 00:31:57.652 [2024-04-15 18:18:46.345681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.652 [2024-04-15 18:18:46.345857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.652 [2024-04-15 18:18:46.345886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.652 qpair failed and we were unable to recover it. 00:31:57.652 [2024-04-15 18:18:46.346019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.652 [2024-04-15 18:18:46.346226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.652 [2024-04-15 18:18:46.346256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.652 qpair failed and we were unable to recover it. 00:31:57.652 [2024-04-15 18:18:46.346430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.652 [2024-04-15 18:18:46.346601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.652 [2024-04-15 18:18:46.346630] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.652 qpair failed and we were unable to recover it. 00:31:57.652 [2024-04-15 18:18:46.346776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.652 [2024-04-15 18:18:46.346975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.652 [2024-04-15 18:18:46.347004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.652 qpair failed and we were unable to recover it. 00:31:57.653 [2024-04-15 18:18:46.347225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.653 [2024-04-15 18:18:46.347388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.653 [2024-04-15 18:18:46.347418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.653 qpair failed and we were unable to recover it. 00:31:57.653 [2024-04-15 18:18:46.347604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.653 [2024-04-15 18:18:46.347759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.653 [2024-04-15 18:18:46.347789] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.653 qpair failed and we were unable to recover it. 
00:31:57.653 [2024-04-15 18:18:46.347963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.653 [2024-04-15 18:18:46.348160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.653 [2024-04-15 18:18:46.348191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.653 qpair failed and we were unable to recover it. 00:31:57.653 [2024-04-15 18:18:46.348324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.653 [2024-04-15 18:18:46.348469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.653 [2024-04-15 18:18:46.348499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.653 qpair failed and we were unable to recover it. 00:31:57.653 [2024-04-15 18:18:46.348662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.653 [2024-04-15 18:18:46.348802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.653 [2024-04-15 18:18:46.348831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.653 qpair failed and we were unable to recover it. 00:31:57.653 [2024-04-15 18:18:46.349000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.653 [2024-04-15 18:18:46.349172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.653 [2024-04-15 18:18:46.349202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.653 qpair failed and we were unable to recover it. 00:31:57.653 [2024-04-15 18:18:46.349335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.653 [2024-04-15 18:18:46.349520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.653 [2024-04-15 18:18:46.349551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.653 qpair failed and we were unable to recover it. 00:31:57.653 [2024-04-15 18:18:46.349720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.653 [2024-04-15 18:18:46.349902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.653 [2024-04-15 18:18:46.349932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.653 qpair failed and we were unable to recover it. 00:31:57.653 [2024-04-15 18:18:46.350086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.653 [2024-04-15 18:18:46.350262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.653 [2024-04-15 18:18:46.350291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.653 qpair failed and we were unable to recover it. 
00:31:57.653 [2024-04-15 18:18:46.350483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.653 [2024-04-15 18:18:46.350642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.653 [2024-04-15 18:18:46.350671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.653 qpair failed and we were unable to recover it. 00:31:57.653 [2024-04-15 18:18:46.350868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.653 [2024-04-15 18:18:46.351041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.653 [2024-04-15 18:18:46.351081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.653 qpair failed and we were unable to recover it. 00:31:57.653 [2024-04-15 18:18:46.351226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.653 [2024-04-15 18:18:46.351416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.653 [2024-04-15 18:18:46.351445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.653 qpair failed and we were unable to recover it. 00:31:57.653 [2024-04-15 18:18:46.351584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.653 [2024-04-15 18:18:46.351754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.653 [2024-04-15 18:18:46.351784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.653 qpair failed and we were unable to recover it. 00:31:57.653 [2024-04-15 18:18:46.351921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.653 [2024-04-15 18:18:46.352124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.653 [2024-04-15 18:18:46.352154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.653 qpair failed and we were unable to recover it. 00:31:57.653 [2024-04-15 18:18:46.352290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.653 [2024-04-15 18:18:46.352459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.653 [2024-04-15 18:18:46.352487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.653 qpair failed and we were unable to recover it. 00:31:57.653 [2024-04-15 18:18:46.352628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.653 [2024-04-15 18:18:46.352801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.653 [2024-04-15 18:18:46.352830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.653 qpair failed and we were unable to recover it. 
00:31:57.653 [2024-04-15 18:18:46.353013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.653 [2024-04-15 18:18:46.353209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.653 [2024-04-15 18:18:46.353239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.653 qpair failed and we were unable to recover it. 00:31:57.653 [2024-04-15 18:18:46.353437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.653 [2024-04-15 18:18:46.353620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.653 [2024-04-15 18:18:46.353649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.653 qpair failed and we were unable to recover it. 00:31:57.653 [2024-04-15 18:18:46.353837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.653 [2024-04-15 18:18:46.354004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.653 [2024-04-15 18:18:46.354033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.653 qpair failed and we were unable to recover it. 00:31:57.653 [2024-04-15 18:18:46.354235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.653 [2024-04-15 18:18:46.354428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.653 [2024-04-15 18:18:46.354459] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.653 qpair failed and we were unable to recover it. 00:31:57.653 [2024-04-15 18:18:46.354595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.653 [2024-04-15 18:18:46.354740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.653 [2024-04-15 18:18:46.354770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.653 qpair failed and we were unable to recover it. 00:31:57.653 [2024-04-15 18:18:46.354941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.653 [2024-04-15 18:18:46.355076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.653 [2024-04-15 18:18:46.355106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.653 qpair failed and we were unable to recover it. 00:31:57.653 [2024-04-15 18:18:46.355272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.653 [2024-04-15 18:18:46.355441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.653 [2024-04-15 18:18:46.355470] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.653 qpair failed and we were unable to recover it. 
00:31:57.653 [2024-04-15 18:18:46.355667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.653 [2024-04-15 18:18:46.355806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.653 [2024-04-15 18:18:46.355834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.653 qpair failed and we were unable to recover it. 00:31:57.653 [2024-04-15 18:18:46.355995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.653 [2024-04-15 18:18:46.356172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.653 [2024-04-15 18:18:46.356205] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.653 qpair failed and we were unable to recover it. 00:31:57.653 [2024-04-15 18:18:46.356350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.653 [2024-04-15 18:18:46.356481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.653 [2024-04-15 18:18:46.356511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.653 qpair failed and we were unable to recover it. 00:31:57.653 [2024-04-15 18:18:46.356643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.653 [2024-04-15 18:18:46.356835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.653 [2024-04-15 18:18:46.356873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.653 qpair failed and we were unable to recover it. 00:31:57.653 [2024-04-15 18:18:46.357041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.653 [2024-04-15 18:18:46.357183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.653 [2024-04-15 18:18:46.357213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.653 qpair failed and we were unable to recover it. 00:31:57.653 [2024-04-15 18:18:46.357374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.653 [2024-04-15 18:18:46.357562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.653 [2024-04-15 18:18:46.357592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.654 qpair failed and we were unable to recover it. 00:31:57.654 [2024-04-15 18:18:46.357783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.654 [2024-04-15 18:18:46.357935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.654 [2024-04-15 18:18:46.357965] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.654 qpair failed and we were unable to recover it. 
00:31:57.654 [2024-04-15 18:18:46.358155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.654 [2024-04-15 18:18:46.358352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.654 [2024-04-15 18:18:46.358382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.654 qpair failed and we were unable to recover it. 00:31:57.654 [2024-04-15 18:18:46.358611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.654 [2024-04-15 18:18:46.358824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.654 [2024-04-15 18:18:46.358857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.654 qpair failed and we were unable to recover it. 00:31:57.654 [2024-04-15 18:18:46.359037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.654 [2024-04-15 18:18:46.359219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.654 [2024-04-15 18:18:46.359249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.654 qpair failed and we were unable to recover it. 00:31:57.654 [2024-04-15 18:18:46.359442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.654 [2024-04-15 18:18:46.359615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.654 [2024-04-15 18:18:46.359644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.654 qpair failed and we were unable to recover it. 00:31:57.654 [2024-04-15 18:18:46.359850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.654 [2024-04-15 18:18:46.360043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.654 [2024-04-15 18:18:46.360080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.654 qpair failed and we were unable to recover it. 00:31:57.654 [2024-04-15 18:18:46.360229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.654 [2024-04-15 18:18:46.360440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.654 [2024-04-15 18:18:46.360472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.654 qpair failed and we were unable to recover it. 00:31:57.654 [2024-04-15 18:18:46.360641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.654 [2024-04-15 18:18:46.360806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.654 [2024-04-15 18:18:46.360836] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.654 qpair failed and we were unable to recover it. 
00:31:57.654 [2024-04-15 18:18:46.361033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.654 [2024-04-15 18:18:46.361212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.654 [2024-04-15 18:18:46.361242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.654 qpair failed and we were unable to recover it. 00:31:57.654 [2024-04-15 18:18:46.361409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.654 [2024-04-15 18:18:46.361551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.654 [2024-04-15 18:18:46.361580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.654 qpair failed and we were unable to recover it. 00:31:57.654 [2024-04-15 18:18:46.361715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.654 [2024-04-15 18:18:46.361884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.654 [2024-04-15 18:18:46.361913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.654 qpair failed and we were unable to recover it. 00:31:57.654 [2024-04-15 18:18:46.362074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.654 [2024-04-15 18:18:46.362252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.654 [2024-04-15 18:18:46.362282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.654 qpair failed and we were unable to recover it. 00:31:57.654 [2024-04-15 18:18:46.362433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.654 [2024-04-15 18:18:46.362574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.654 [2024-04-15 18:18:46.362602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.654 qpair failed and we were unable to recover it. 00:31:57.654 [2024-04-15 18:18:46.362821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.654 [2024-04-15 18:18:46.362988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.654 [2024-04-15 18:18:46.363017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.654 qpair failed and we were unable to recover it. 00:31:57.654 [2024-04-15 18:18:46.363180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.654 [2024-04-15 18:18:46.363365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.654 [2024-04-15 18:18:46.363394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.654 qpair failed and we were unable to recover it. 
00:31:57.654 [2024-04-15 18:18:46.363602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.654 [2024-04-15 18:18:46.363776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.654 [2024-04-15 18:18:46.363805] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.654 qpair failed and we were unable to recover it. 00:31:57.654 [2024-04-15 18:18:46.363991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.654 [2024-04-15 18:18:46.364190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.654 [2024-04-15 18:18:46.364220] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.654 qpair failed and we were unable to recover it. 00:31:57.654 [2024-04-15 18:18:46.364416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.654 [2024-04-15 18:18:46.364611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.654 [2024-04-15 18:18:46.364641] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.654 qpair failed and we were unable to recover it. 00:31:57.654 [2024-04-15 18:18:46.364810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.654 [2024-04-15 18:18:46.364949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.654 [2024-04-15 18:18:46.364979] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.654 qpair failed and we were unable to recover it. 00:31:57.654 [2024-04-15 18:18:46.365164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.654 [2024-04-15 18:18:46.365358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.654 [2024-04-15 18:18:46.365387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.654 qpair failed and we were unable to recover it. 00:31:57.654 [2024-04-15 18:18:46.365529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.654 [2024-04-15 18:18:46.365700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.654 [2024-04-15 18:18:46.365730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.654 qpair failed and we were unable to recover it. 00:31:57.654 [2024-04-15 18:18:46.365900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.654 [2024-04-15 18:18:46.366098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.654 [2024-04-15 18:18:46.366132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.654 qpair failed and we were unable to recover it. 
00:31:57.654 [2024-04-15 18:18:46.366277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.654 [2024-04-15 18:18:46.366470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.654 [2024-04-15 18:18:46.366499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.654 qpair failed and we were unable to recover it. 00:31:57.654 [2024-04-15 18:18:46.366693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.654 [2024-04-15 18:18:46.366855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.654 [2024-04-15 18:18:46.366884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.654 qpair failed and we were unable to recover it. 00:31:57.654 [2024-04-15 18:18:46.367081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.654 [2024-04-15 18:18:46.367251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.654 [2024-04-15 18:18:46.367280] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.654 qpair failed and we were unable to recover it. 00:31:57.654 [2024-04-15 18:18:46.367458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.654 [2024-04-15 18:18:46.367611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.654 [2024-04-15 18:18:46.367640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.654 qpair failed and we were unable to recover it. 00:31:57.654 [2024-04-15 18:18:46.367824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.654 [2024-04-15 18:18:46.368007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.654 [2024-04-15 18:18:46.368036] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.654 qpair failed and we were unable to recover it. 00:31:57.654 [2024-04-15 18:18:46.368201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.654 [2024-04-15 18:18:46.368358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.654 [2024-04-15 18:18:46.368387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.654 qpair failed and we were unable to recover it. 00:31:57.654 [2024-04-15 18:18:46.368578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.654 [2024-04-15 18:18:46.368761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.654 [2024-04-15 18:18:46.368791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.654 qpair failed and we were unable to recover it. 
00:31:57.655 [2024-04-15 18:18:46.368960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.655 [2024-04-15 18:18:46.369104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.655 [2024-04-15 18:18:46.369134] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.655 qpair failed and we were unable to recover it. 00:31:57.655 [2024-04-15 18:18:46.369277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.655 [2024-04-15 18:18:46.369442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.655 [2024-04-15 18:18:46.369472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.655 qpair failed and we were unable to recover it. 00:31:57.655 [2024-04-15 18:18:46.369639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.655 [2024-04-15 18:18:46.369836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.655 [2024-04-15 18:18:46.369869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.655 qpair failed and we were unable to recover it. 00:31:57.655 [2024-04-15 18:18:46.370006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.655 [2024-04-15 18:18:46.370169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.655 [2024-04-15 18:18:46.370199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.655 qpair failed and we were unable to recover it. 00:31:57.655 [2024-04-15 18:18:46.370343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.655 [2024-04-15 18:18:46.370512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.655 [2024-04-15 18:18:46.370542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.655 qpair failed and we were unable to recover it. 00:31:57.655 [2024-04-15 18:18:46.370715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.655 [2024-04-15 18:18:46.370938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.655 [2024-04-15 18:18:46.370967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.655 qpair failed and we were unable to recover it. 00:31:57.655 [2024-04-15 18:18:46.371163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.655 [2024-04-15 18:18:46.371337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.655 [2024-04-15 18:18:46.371366] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.655 qpair failed and we were unable to recover it. 
00:31:57.655 [2024-04-15 18:18:46.371560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.655 [2024-04-15 18:18:46.371710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.655 [2024-04-15 18:18:46.371739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.655 qpair failed and we were unable to recover it. 00:31:57.655 [2024-04-15 18:18:46.371925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.655 [2024-04-15 18:18:46.372101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.655 [2024-04-15 18:18:46.372134] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.655 qpair failed and we were unable to recover it. 00:31:57.655 [2024-04-15 18:18:46.372275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.655 [2024-04-15 18:18:46.372419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.655 [2024-04-15 18:18:46.372449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.655 qpair failed and we were unable to recover it. 00:31:57.655 [2024-04-15 18:18:46.372614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.655 [2024-04-15 18:18:46.372778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.655 [2024-04-15 18:18:46.372807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.655 qpair failed and we were unable to recover it. 00:31:57.655 [2024-04-15 18:18:46.372972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.655 [2024-04-15 18:18:46.373115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.655 [2024-04-15 18:18:46.373145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.655 qpair failed and we were unable to recover it. 00:31:57.655 [2024-04-15 18:18:46.373302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.655 [2024-04-15 18:18:46.373499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.655 [2024-04-15 18:18:46.373533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.655 qpair failed and we were unable to recover it. 00:31:57.655 [2024-04-15 18:18:46.373708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.655 [2024-04-15 18:18:46.373891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.655 [2024-04-15 18:18:46.373932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.655 qpair failed and we were unable to recover it. 
00:31:57.655 [2024-04-15 18:18:46.374132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.655 [2024-04-15 18:18:46.374301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.655 [2024-04-15 18:18:46.374331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.655 qpair failed and we were unable to recover it. 00:31:57.655 [2024-04-15 18:18:46.374473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.655 [2024-04-15 18:18:46.374622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.655 [2024-04-15 18:18:46.374651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.655 qpair failed and we were unable to recover it. 00:31:57.655 [2024-04-15 18:18:46.374818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.655 [2024-04-15 18:18:46.374960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.655 [2024-04-15 18:18:46.374989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.655 qpair failed and we were unable to recover it. 00:31:57.655 [2024-04-15 18:18:46.375126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.655 [2024-04-15 18:18:46.375297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.655 [2024-04-15 18:18:46.375327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.655 qpair failed and we were unable to recover it. 00:31:57.655 [2024-04-15 18:18:46.375496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.655 [2024-04-15 18:18:46.375692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.655 [2024-04-15 18:18:46.375720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.655 qpair failed and we were unable to recover it. 00:31:57.655 [2024-04-15 18:18:46.375862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.655 [2024-04-15 18:18:46.376110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.655 [2024-04-15 18:18:46.376140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.655 qpair failed and we were unable to recover it. 00:31:57.655 [2024-04-15 18:18:46.376281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.655 [2024-04-15 18:18:46.376419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.655 [2024-04-15 18:18:46.376451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.655 qpair failed and we were unable to recover it. 
00:31:57.655 [2024-04-15 18:18:46.376617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.655 [2024-04-15 18:18:46.376784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.655 [2024-04-15 18:18:46.376816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.655 qpair failed and we were unable to recover it. 00:31:57.655 [2024-04-15 18:18:46.376984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.655 [2024-04-15 18:18:46.377183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.655 [2024-04-15 18:18:46.377218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.655 qpair failed and we were unable to recover it. 00:31:57.655 [2024-04-15 18:18:46.377402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.655 [2024-04-15 18:18:46.377573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.655 [2024-04-15 18:18:46.377603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.655 qpair failed and we were unable to recover it. 00:31:57.656 [2024-04-15 18:18:46.377775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.656 [2024-04-15 18:18:46.377924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.656 [2024-04-15 18:18:46.377954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.656 qpair failed and we were unable to recover it. 00:31:57.656 [2024-04-15 18:18:46.378124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.656 [2024-04-15 18:18:46.378265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.656 [2024-04-15 18:18:46.378295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.656 qpair failed and we were unable to recover it. 00:31:57.656 [2024-04-15 18:18:46.378440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.656 [2024-04-15 18:18:46.378608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.656 [2024-04-15 18:18:46.378637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.656 qpair failed and we were unable to recover it. 00:31:57.656 [2024-04-15 18:18:46.378774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.656 [2024-04-15 18:18:46.378918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.656 [2024-04-15 18:18:46.378947] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420 00:31:57.656 qpair failed and we were unable to recover it. 
00:31:57.656 [2024-04-15 18:18:46.379109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.656 [2024-04-15 18:18:46.379244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.656 [2024-04-15 18:18:46.379274] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:57.656 qpair failed and we were unable to recover it.
00:31:57.656 [2024-04-15 18:18:46.379446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.656 [2024-04-15 18:18:46.379607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.656 [2024-04-15 18:18:46.379648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:57.656 qpair failed and we were unable to recover it.
00:31:57.656 [2024-04-15 18:18:46.379856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.656 [2024-04-15 18:18:46.380066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.656 [2024-04-15 18:18:46.380107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:57.656 qpair failed and we were unable to recover it.
00:31:57.656 [2024-04-15 18:18:46.380282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.656 [2024-04-15 18:18:46.380416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.656 [2024-04-15 18:18:46.380451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:57.656 qpair failed and we were unable to recover it.
00:31:57.656 [2024-04-15 18:18:46.380675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.656 [2024-04-15 18:18:46.380806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.656 [2024-04-15 18:18:46.380834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:57.656 qpair failed and we were unable to recover it.
00:31:57.656 [2024-04-15 18:18:46.381027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.656 [2024-04-15 18:18:46.381193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.656 [2024-04-15 18:18:46.381223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:57.656 qpair failed and we were unable to recover it.
00:31:57.656 [2024-04-15 18:18:46.382151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.656 [2024-04-15 18:18:46.382309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.656 [2024-04-15 18:18:46.382340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:57.656 qpair failed and we were unable to recover it.
00:31:57.656 [2024-04-15 18:18:46.382506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.656 [2024-04-15 18:18:46.382644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.656 [2024-04-15 18:18:46.382674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:57.656 qpair failed and we were unable to recover it.
00:31:57.656 [2024-04-15 18:18:46.382843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.656 [2024-04-15 18:18:46.382987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.656 [2024-04-15 18:18:46.383017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:57.656 qpair failed and we were unable to recover it.
00:31:57.656 [2024-04-15 18:18:46.383173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.656 [2024-04-15 18:18:46.383312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.656 [2024-04-15 18:18:46.383342] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:57.656 qpair failed and we were unable to recover it.
00:31:57.656 [2024-04-15 18:18:46.383538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.656 [2024-04-15 18:18:46.383682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.656 [2024-04-15 18:18:46.383712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:57.656 qpair failed and we were unable to recover it.
00:31:57.656 [2024-04-15 18:18:46.383903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.656 [2024-04-15 18:18:46.384106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.656 [2024-04-15 18:18:46.384136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:57.656 qpair failed and we were unable to recover it.
00:31:57.656 [2024-04-15 18:18:46.384280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.656 [2024-04-15 18:18:46.384415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.656 [2024-04-15 18:18:46.384445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:57.656 qpair failed and we were unable to recover it.
00:31:57.656 [2024-04-15 18:18:46.384636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.656 [2024-04-15 18:18:46.384823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.656 [2024-04-15 18:18:46.384854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72f0000b90 with addr=10.0.0.2, port=4420
00:31:57.656 qpair failed and we were unable to recover it.
00:31:57.656 Read completed with error (sct=0, sc=8)
00:31:57.656 starting I/O failed
00:31:57.656 Read completed with error (sct=0, sc=8)
00:31:57.656 starting I/O failed
00:31:57.656 Read completed with error (sct=0, sc=8)
00:31:57.656 starting I/O failed
00:31:57.656 Read completed with error (sct=0, sc=8)
00:31:57.656 starting I/O failed
00:31:57.656 Read completed with error (sct=0, sc=8)
00:31:57.656 starting I/O failed
00:31:57.656 Read completed with error (sct=0, sc=8)
00:31:57.656 starting I/O failed
00:31:57.656 Read completed with error (sct=0, sc=8)
00:31:57.656 starting I/O failed
00:31:57.656 Read completed with error (sct=0, sc=8)
00:31:57.656 starting I/O failed
00:31:57.656 Read completed with error (sct=0, sc=8)
00:31:57.656 starting I/O failed
00:31:57.656 Write completed with error (sct=0, sc=8)
00:31:57.656 starting I/O failed
00:31:57.656 Read completed with error (sct=0, sc=8)
00:31:57.656 starting I/O failed
00:31:57.656 Write completed with error (sct=0, sc=8)
00:31:57.656 starting I/O failed
00:31:57.656 Read completed with error (sct=0, sc=8)
00:31:57.656 starting I/O failed
00:31:57.656 Read completed with error (sct=0, sc=8)
00:31:57.656 starting I/O failed
00:31:57.656 Read completed with error (sct=0, sc=8)
00:31:57.656 starting I/O failed
00:31:57.656 Read completed with error (sct=0, sc=8)
00:31:57.656 starting I/O failed
00:31:57.656 Read completed with error (sct=0, sc=8)
00:31:57.656 starting I/O failed
00:31:57.656 Read completed with error (sct=0, sc=8)
00:31:57.656 starting I/O failed
00:31:57.656 Write completed with error (sct=0, sc=8)
00:31:57.656 starting I/O failed
00:31:57.656 Read completed with error (sct=0, sc=8)
00:31:57.656 starting I/O failed
00:31:57.656 Read completed with error (sct=0, sc=8)
00:31:57.656 starting I/O failed
00:31:57.656 Read completed with error (sct=0, sc=8)
00:31:57.656 starting I/O failed
00:31:57.656 Read completed with error (sct=0, sc=8)
00:31:57.656 starting I/O failed
00:31:57.656 Read completed with error (sct=0, sc=8)
00:31:57.656 starting I/O failed
00:31:57.656 Write completed with error (sct=0, sc=8)
00:31:57.656 starting I/O failed
00:31:57.656 Read completed with error (sct=0, sc=8)
00:31:57.656 starting I/O failed
00:31:57.656 Read completed with error (sct=0, sc=8)
00:31:57.656 starting I/O failed
00:31:57.656 Write completed with error (sct=0, sc=8)
00:31:57.656 starting I/O failed
00:31:57.656 Read completed with error (sct=0, sc=8)
00:31:57.656 starting I/O failed
00:31:57.656 Read completed with error (sct=0, sc=8)
00:31:57.656 starting I/O failed
00:31:57.656 Read completed with error (sct=0, sc=8)
00:31:57.656 starting I/O failed
00:31:57.656 Read completed with error (sct=0, sc=8)
00:31:57.656 starting I/O failed
00:31:57.656 [2024-04-15 18:18:46.385382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:57.656 [2024-04-15 18:18:46.385551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.656 [2024-04-15 18:18:46.385753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.656 [2024-04-15 18:18:46.385785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.656 qpair failed and we were unable to recover it.
00:31:57.656 [2024-04-15 18:18:46.385982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.656 [2024-04-15 18:18:46.386175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.656 [2024-04-15 18:18:46.386206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.656 qpair failed and we were unable to recover it.
00:31:57.656 [2024-04-15 18:18:46.386351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.656 [2024-04-15 18:18:46.386518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.656 [2024-04-15 18:18:46.386547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.656 qpair failed and we were unable to recover it.
00:31:57.656 [2024-04-15 18:18:46.386708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.657 [2024-04-15 18:18:46.386855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.657 [2024-04-15 18:18:46.386884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.657 qpair failed and we were unable to recover it.
00:31:57.657 [2024-04-15 18:18:46.387029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.657 [2024-04-15 18:18:46.387225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.657 [2024-04-15 18:18:46.387255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.657 qpair failed and we were unable to recover it.
00:31:57.657 [2024-04-15 18:18:46.387416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.657 [2024-04-15 18:18:46.387580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.657 [2024-04-15 18:18:46.387609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.657 qpair failed and we were unable to recover it.
00:31:57.657 [2024-04-15 18:18:46.387786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.657 [2024-04-15 18:18:46.388028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.657 [2024-04-15 18:18:46.388064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.657 qpair failed and we were unable to recover it.
00:31:57.657 [2024-04-15 18:18:46.388206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.657 [2024-04-15 18:18:46.388351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.657 [2024-04-15 18:18:46.388380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.657 qpair failed and we were unable to recover it.
00:31:57.657 [2024-04-15 18:18:46.388582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.657 [2024-04-15 18:18:46.388780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.657 [2024-04-15 18:18:46.388810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.657 qpair failed and we were unable to recover it.
00:31:57.657 [2024-04-15 18:18:46.388991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.657 [2024-04-15 18:18:46.389152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.657 [2024-04-15 18:18:46.389183] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.657 qpair failed and we were unable to recover it.
00:31:57.657 [2024-04-15 18:18:46.389326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.657 [2024-04-15 18:18:46.389465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.657 [2024-04-15 18:18:46.389494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.657 qpair failed and we were unable to recover it.
00:31:57.657 [2024-04-15 18:18:46.389661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.657 [2024-04-15 18:18:46.389794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.657 [2024-04-15 18:18:46.389824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.657 qpair failed and we were unable to recover it.
00:31:57.657 [2024-04-15 18:18:46.390018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.657 [2024-04-15 18:18:46.390189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.657 [2024-04-15 18:18:46.390219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.657 qpair failed and we were unable to recover it.
00:31:57.657 [2024-04-15 18:18:46.390366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.657 [2024-04-15 18:18:46.390534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.657 [2024-04-15 18:18:46.390564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.657 qpair failed and we were unable to recover it.
00:31:57.657 [2024-04-15 18:18:46.391421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.657 [2024-04-15 18:18:46.391610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.657 [2024-04-15 18:18:46.391641] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.657 qpair failed and we were unable to recover it.
00:31:57.657 [2024-04-15 18:18:46.392491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.657 [2024-04-15 18:18:46.392709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.657 [2024-04-15 18:18:46.392749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.657 qpair failed and we were unable to recover it.
00:31:57.657 [2024-04-15 18:18:46.393517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.657 [2024-04-15 18:18:46.393692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.657 [2024-04-15 18:18:46.393723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.657 qpair failed and we were unable to recover it.
00:31:57.657 [2024-04-15 18:18:46.393899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.657 [2024-04-15 18:18:46.394075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.657 [2024-04-15 18:18:46.394106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.657 qpair failed and we were unable to recover it.
00:31:57.657 [2024-04-15 18:18:46.394245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.657 [2024-04-15 18:18:46.394393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.657 [2024-04-15 18:18:46.394422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.657 qpair failed and we were unable to recover it.
00:31:57.657 [2024-04-15 18:18:46.394655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.657 [2024-04-15 18:18:46.394804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.657 [2024-04-15 18:18:46.394834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.657 qpair failed and we were unable to recover it.
00:31:57.657 [2024-04-15 18:18:46.395013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.657 [2024-04-15 18:18:46.395204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.657 [2024-04-15 18:18:46.395234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.657 qpair failed and we were unable to recover it.
00:31:57.657 [2024-04-15 18:18:46.395407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.657 [2024-04-15 18:18:46.395623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.657 [2024-04-15 18:18:46.395663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.657 qpair failed and we were unable to recover it.
00:31:57.657 [2024-04-15 18:18:46.395827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.657 [2024-04-15 18:18:46.395975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.657 [2024-04-15 18:18:46.396004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.657 qpair failed and we were unable to recover it.
00:31:57.657 [2024-04-15 18:18:46.396169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.657 [2024-04-15 18:18:46.396972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.657 [2024-04-15 18:18:46.397006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.657 qpair failed and we were unable to recover it.
00:31:57.657 [2024-04-15 18:18:46.397167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.657 [2024-04-15 18:18:46.397927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.657 [2024-04-15 18:18:46.397960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.657 qpair failed and we were unable to recover it.
00:31:57.657 [2024-04-15 18:18:46.398109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.657 [2024-04-15 18:18:46.398284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.657 [2024-04-15 18:18:46.398314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.657 qpair failed and we were unable to recover it.
00:31:57.657 [2024-04-15 18:18:46.398458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.657 [2024-04-15 18:18:46.398651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.657 [2024-04-15 18:18:46.398680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.657 qpair failed and we were unable to recover it.
00:31:57.657 [2024-04-15 18:18:46.398844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.657 [2024-04-15 18:18:46.399024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.657 [2024-04-15 18:18:46.399053] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.657 qpair failed and we were unable to recover it.
00:31:57.657 [2024-04-15 18:18:46.399202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.657 [2024-04-15 18:18:46.399346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.657 [2024-04-15 18:18:46.399375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.657 qpair failed and we were unable to recover it.
00:31:57.657 [2024-04-15 18:18:46.399569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.657 [2024-04-15 18:18:46.399744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.657 [2024-04-15 18:18:46.399773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.657 qpair failed and we were unable to recover it.
00:31:57.657 [2024-04-15 18:18:46.399945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.657 [2024-04-15 18:18:46.400087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.657 [2024-04-15 18:18:46.400117] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.657 qpair failed and we were unable to recover it.
00:31:57.657 [2024-04-15 18:18:46.400263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.657 [2024-04-15 18:18:46.400442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.657 [2024-04-15 18:18:46.400471] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.657 qpair failed and we were unable to recover it.
00:31:57.658 [2024-04-15 18:18:46.400615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.658 [2024-04-15 18:18:46.400783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.658 [2024-04-15 18:18:46.400812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.658 qpair failed and we were unable to recover it.
00:31:57.658 [2024-04-15 18:18:46.400975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.658 [2024-04-15 18:18:46.401122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.658 [2024-04-15 18:18:46.401151] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.658 qpair failed and we were unable to recover it.
00:31:57.658 [2024-04-15 18:18:46.401298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.658 [2024-04-15 18:18:46.401468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.658 [2024-04-15 18:18:46.401497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.658 qpair failed and we were unable to recover it.
00:31:57.658 [2024-04-15 18:18:46.401686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.658 [2024-04-15 18:18:46.401853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.658 [2024-04-15 18:18:46.401882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.658 qpair failed and we were unable to recover it.
00:31:57.658 [2024-04-15 18:18:46.402064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.658 [2024-04-15 18:18:46.402206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.658 [2024-04-15 18:18:46.402235] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.658 qpair failed and we were unable to recover it.
00:31:57.658 [2024-04-15 18:18:46.402403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.658 [2024-04-15 18:18:46.402571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.658 [2024-04-15 18:18:46.402600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.658 qpair failed and we were unable to recover it.
00:31:57.658 [2024-04-15 18:18:46.402741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.658 [2024-04-15 18:18:46.402909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.658 [2024-04-15 18:18:46.402938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.658 qpair failed and we were unable to recover it.
00:31:57.658 [2024-04-15 18:18:46.403124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.658 [2024-04-15 18:18:46.403289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.658 [2024-04-15 18:18:46.403318] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.658 qpair failed and we were unable to recover it.
00:31:57.658 [2024-04-15 18:18:46.403470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.658 [2024-04-15 18:18:46.403642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.658 [2024-04-15 18:18:46.403671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.658 qpair failed and we were unable to recover it.
00:31:57.658 [2024-04-15 18:18:46.403844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.658 [2024-04-15 18:18:46.404010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.658 [2024-04-15 18:18:46.404039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.658 qpair failed and we were unable to recover it.
00:31:57.658 [2024-04-15 18:18:46.404195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.658 [2024-04-15 18:18:46.404360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.658 [2024-04-15 18:18:46.404390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.658 qpair failed and we were unable to recover it.
00:31:57.658 [2024-04-15 18:18:46.404530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.658 [2024-04-15 18:18:46.404667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.658 [2024-04-15 18:18:46.404696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.658 qpair failed and we were unable to recover it.
00:31:57.658 [2024-04-15 18:18:46.404860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.658 [2024-04-15 18:18:46.405028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.658 [2024-04-15 18:18:46.405057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.658 qpair failed and we were unable to recover it.
00:31:57.658 [2024-04-15 18:18:46.405210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.658 [2024-04-15 18:18:46.405384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.658 [2024-04-15 18:18:46.405414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.658 qpair failed and we were unable to recover it.
00:31:57.658 [2024-04-15 18:18:46.405562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.658 [2024-04-15 18:18:46.405706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.658 [2024-04-15 18:18:46.405735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.658 qpair failed and we were unable to recover it.
00:31:57.658 [2024-04-15 18:18:46.405899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.658 [2024-04-15 18:18:46.406087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.658 [2024-04-15 18:18:46.406118] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.658 qpair failed and we were unable to recover it.
00:31:57.658 [2024-04-15 18:18:46.406288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.658 [2024-04-15 18:18:46.406451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.658 [2024-04-15 18:18:46.406481] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.658 qpair failed and we were unable to recover it.
00:31:57.658 [2024-04-15 18:18:46.406624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.658 [2024-04-15 18:18:46.406818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.658 [2024-04-15 18:18:46.406847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.658 qpair failed and we were unable to recover it.
00:31:57.658 [2024-04-15 18:18:46.406986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.658 [2024-04-15 18:18:46.407157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.658 [2024-04-15 18:18:46.407187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.658 qpair failed and we were unable to recover it.
00:31:57.658 [2024-04-15 18:18:46.407331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.658 [2024-04-15 18:18:46.407478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.658 [2024-04-15 18:18:46.407508] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.658 qpair failed and we were unable to recover it.
00:31:57.658 [2024-04-15 18:18:46.407672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.658 [2024-04-15 18:18:46.407841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.658 [2024-04-15 18:18:46.407870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.658 qpair failed and we were unable to recover it.
00:31:57.658 [2024-04-15 18:18:46.408039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.658 [2024-04-15 18:18:46.408232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.658 [2024-04-15 18:18:46.408263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.658 qpair failed and we were unable to recover it.
00:31:57.658 [2024-04-15 18:18:46.408439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.658 [2024-04-15 18:18:46.408580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.658 [2024-04-15 18:18:46.408609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.658 qpair failed and we were unable to recover it.
00:31:57.658 [2024-04-15 18:18:46.408781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.658 [2024-04-15 18:18:46.408924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.658 [2024-04-15 18:18:46.408954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.658 qpair failed and we were unable to recover it.
00:31:57.658 [2024-04-15 18:18:46.409126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.658 [2024-04-15 18:18:46.410141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.658 [2024-04-15 18:18:46.410176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.658 qpair failed and we were unable to recover it.
00:31:57.658 [2024-04-15 18:18:46.410340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.658 [2024-04-15 18:18:46.410555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.658 [2024-04-15 18:18:46.410584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.658 qpair failed and we were unable to recover it.
00:31:57.658 [2024-04-15 18:18:46.410791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.658 [2024-04-15 18:18:46.410967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.658 [2024-04-15 18:18:46.410996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.658 qpair failed and we were unable to recover it.
00:31:57.658 [2024-04-15 18:18:46.411180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.658 [2024-04-15 18:18:46.411324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.658 [2024-04-15 18:18:46.411353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.658 qpair failed and we were unable to recover it.
00:31:57.658 [2024-04-15 18:18:46.411545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.658 [2024-04-15 18:18:46.411702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.658 [2024-04-15 18:18:46.411731] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.658 qpair failed and we were unable to recover it.
00:31:57.659 [2024-04-15 18:18:46.411959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.659 [2024-04-15 18:18:46.412133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.659 [2024-04-15 18:18:46.412163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.659 qpair failed and we were unable to recover it.
00:31:57.659 [2024-04-15 18:18:46.412297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.659 [2024-04-15 18:18:46.412465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.659 [2024-04-15 18:18:46.412495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.659 qpair failed and we were unable to recover it.
00:31:57.659 [2024-04-15 18:18:46.412663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.659 [2024-04-15 18:18:46.412835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.659 [2024-04-15 18:18:46.412864] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.659 qpair failed and we were unable to recover it.
00:31:57.659 [2024-04-15 18:18:46.413028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.659 [2024-04-15 18:18:46.413171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.659 [2024-04-15 18:18:46.413201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.659 qpair failed and we were unable to recover it.
00:31:57.659 [2024-04-15 18:18:46.413375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.659 [2024-04-15 18:18:46.413537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.659 [2024-04-15 18:18:46.413566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.659 qpair failed and we were unable to recover it.
00:31:57.659 [2024-04-15 18:18:46.413728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.659 [2024-04-15 18:18:46.413886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.659 [2024-04-15 18:18:46.413915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.659 qpair failed and we were unable to recover it.
00:31:57.659 [2024-04-15 18:18:46.414146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.659 [2024-04-15 18:18:46.414280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.659 [2024-04-15 18:18:46.414320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.659 qpair failed and we were unable to recover it.
00:31:57.659 [2024-04-15 18:18:46.414491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.659 [2024-04-15 18:18:46.414724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.659 [2024-04-15 18:18:46.414753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.659 qpair failed and we were unable to recover it.
00:31:57.659 [2024-04-15 18:18:46.414932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.659 [2024-04-15 18:18:46.415149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.659 [2024-04-15 18:18:46.415178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.659 qpair failed and we were unable to recover it.
00:31:57.659 [2024-04-15 18:18:46.415316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.659 [2024-04-15 18:18:46.415454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.659 [2024-04-15 18:18:46.415483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.659 qpair failed and we were unable to recover it.
00:31:57.659 [2024-04-15 18:18:46.415619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.659 [2024-04-15 18:18:46.415767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.659 [2024-04-15 18:18:46.415797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.659 qpair failed and we were unable to recover it.
00:31:57.659 [2024-04-15 18:18:46.415940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.659 [2024-04-15 18:18:46.416107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.659 [2024-04-15 18:18:46.416138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.659 qpair failed and we were unable to recover it.
00:31:57.659 [2024-04-15 18:18:46.416282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.659 [2024-04-15 18:18:46.416424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.659 [2024-04-15 18:18:46.416453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.659 qpair failed and we were unable to recover it.
00:31:57.659 [2024-04-15 18:18:46.416623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.659 [2024-04-15 18:18:46.416820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.659 [2024-04-15 18:18:46.416849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.659 qpair failed and we were unable to recover it.
00:31:57.659 [2024-04-15 18:18:46.417007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.659 [2024-04-15 18:18:46.417178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.659 [2024-04-15 18:18:46.417209] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.659 qpair failed and we were unable to recover it.
00:31:57.659 [2024-04-15 18:18:46.417360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.659 [2024-04-15 18:18:46.417556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.659 [2024-04-15 18:18:46.417585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.659 qpair failed and we were unable to recover it.
00:31:57.659 [2024-04-15 18:18:46.417750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.659 [2024-04-15 18:18:46.417918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.659 [2024-04-15 18:18:46.417947] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.659 qpair failed and we were unable to recover it.
00:31:57.659 [2024-04-15 18:18:46.418117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.659 [2024-04-15 18:18:46.418268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.659 [2024-04-15 18:18:46.418297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.659 qpair failed and we were unable to recover it.
00:31:57.659 [2024-04-15 18:18:46.418496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.659 [2024-04-15 18:18:46.418689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.659 [2024-04-15 18:18:46.418718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.659 qpair failed and we were unable to recover it.
00:31:57.659 [2024-04-15 18:18:46.418881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.659 [2024-04-15 18:18:46.419019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.659 [2024-04-15 18:18:46.419048] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.659 qpair failed and we were unable to recover it.
00:31:57.659 [2024-04-15 18:18:46.419202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.659 [2024-04-15 18:18:46.419386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.659 [2024-04-15 18:18:46.419415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.659 qpair failed and we were unable to recover it.
00:31:57.659 [2024-04-15 18:18:46.419581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.659 [2024-04-15 18:18:46.419777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.659 [2024-04-15 18:18:46.419806] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.659 qpair failed and we were unable to recover it.
00:31:57.659 [2024-04-15 18:18:46.419948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.659 [2024-04-15 18:18:46.420083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.659 [2024-04-15 18:18:46.420113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.659 qpair failed and we were unable to recover it.
00:31:57.659 [2024-04-15 18:18:46.420261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.659 [2024-04-15 18:18:46.420432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.659 [2024-04-15 18:18:46.420461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.659 qpair failed and we were unable to recover it.
00:31:57.659 [2024-04-15 18:18:46.420633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.659 [2024-04-15 18:18:46.420820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.659 [2024-04-15 18:18:46.420849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.659 qpair failed and we were unable to recover it.
00:31:57.659 [2024-04-15 18:18:46.421013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.659 [2024-04-15 18:18:46.421203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.659 [2024-04-15 18:18:46.421232] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.659 qpair failed and we were unable to recover it.
00:31:57.659 [2024-04-15 18:18:46.421399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.659 [2024-04-15 18:18:46.421595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.659 [2024-04-15 18:18:46.421643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.659 qpair failed and we were unable to recover it.
00:31:57.659 [2024-04-15 18:18:46.421815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.659 [2024-04-15 18:18:46.422006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.659 [2024-04-15 18:18:46.422036] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.659 qpair failed and we were unable to recover it.
00:31:57.659 [2024-04-15 18:18:46.422237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.659 [2024-04-15 18:18:46.422412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.660 [2024-04-15 18:18:46.422441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.660 qpair failed and we were unable to recover it.
00:31:57.660 [2024-04-15 18:18:46.422587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.660 [2024-04-15 18:18:46.422778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.660 [2024-04-15 18:18:46.422807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.660 qpair failed and we were unable to recover it.
00:31:57.660 [2024-04-15 18:18:46.422974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.660 [2024-04-15 18:18:46.423140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.660 [2024-04-15 18:18:46.423170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.660 qpair failed and we were unable to recover it.
00:31:57.660 [2024-04-15 18:18:46.423307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.660 [2024-04-15 18:18:46.423475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.660 [2024-04-15 18:18:46.423504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.660 qpair failed and we were unable to recover it.
00:31:57.660 [2024-04-15 18:18:46.423695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.660 [2024-04-15 18:18:46.423860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.660 [2024-04-15 18:18:46.423889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.660 qpair failed and we were unable to recover it.
00:31:57.660 [2024-04-15 18:18:46.424076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.660 [2024-04-15 18:18:46.424247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.660 [2024-04-15 18:18:46.424276] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.660 qpair failed and we were unable to recover it.
00:31:57.660 [2024-04-15 18:18:46.424474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.660 [2024-04-15 18:18:46.424641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.660 [2024-04-15 18:18:46.424670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.660 qpair failed and we were unable to recover it.
00:31:57.660 [2024-04-15 18:18:46.424841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.660 [2024-04-15 18:18:46.424988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.660 [2024-04-15 18:18:46.425018] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.660 qpair failed and we were unable to recover it.
00:31:57.660 [2024-04-15 18:18:46.425176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.660 [2024-04-15 18:18:46.425331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.660 [2024-04-15 18:18:46.425365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.660 qpair failed and we were unable to recover it.
00:31:57.660 [2024-04-15 18:18:46.425541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.660 [2024-04-15 18:18:46.425706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.660 [2024-04-15 18:18:46.425735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.660 qpair failed and we were unable to recover it.
00:31:57.660 [2024-04-15 18:18:46.425905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.660 [2024-04-15 18:18:46.426052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.660 [2024-04-15 18:18:46.426096] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.660 qpair failed and we were unable to recover it.
00:31:57.660 [2024-04-15 18:18:46.426250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.660 [2024-04-15 18:18:46.426442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.660 [2024-04-15 18:18:46.426471] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.660 qpair failed and we were unable to recover it.
00:31:57.660 [2024-04-15 18:18:46.426663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.660 [2024-04-15 18:18:46.426810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.660 [2024-04-15 18:18:46.426840] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.660 qpair failed and we were unable to recover it.
00:31:57.660 [2024-04-15 18:18:46.427008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.660 [2024-04-15 18:18:46.427185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.660 [2024-04-15 18:18:46.427215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.660 qpair failed and we were unable to recover it.
00:31:57.660 [2024-04-15 18:18:46.427417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.660 [2024-04-15 18:18:46.427584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.660 [2024-04-15 18:18:46.427613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.660 qpair failed and we were unable to recover it.
00:31:57.660 [2024-04-15 18:18:46.427785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.660 [2024-04-15 18:18:46.427924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.660 [2024-04-15 18:18:46.427953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.660 qpair failed and we were unable to recover it.
00:31:57.660 [2024-04-15 18:18:46.428115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.660 [2024-04-15 18:18:46.428265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.660 [2024-04-15 18:18:46.428294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.660 qpair failed and we were unable to recover it.
00:31:57.660 [2024-04-15 18:18:46.428491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.660 [2024-04-15 18:18:46.428646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.660 [2024-04-15 18:18:46.428693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.660 qpair failed and we were unable to recover it. 00:31:57.660 [2024-04-15 18:18:46.428838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.660 [2024-04-15 18:18:46.429009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.660 [2024-04-15 18:18:46.429038] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.660 qpair failed and we were unable to recover it. 00:31:57.660 [2024-04-15 18:18:46.429216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.660 [2024-04-15 18:18:46.429357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.660 [2024-04-15 18:18:46.429386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.660 qpair failed and we were unable to recover it. 00:31:57.660 [2024-04-15 18:18:46.429582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.660 [2024-04-15 18:18:46.429715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.660 [2024-04-15 18:18:46.429744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.660 qpair failed and we were unable to recover it. 00:31:57.660 [2024-04-15 18:18:46.429923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.660 [2024-04-15 18:18:46.430102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.660 [2024-04-15 18:18:46.430132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.660 qpair failed and we were unable to recover it. 00:31:57.660 [2024-04-15 18:18:46.430302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.660 [2024-04-15 18:18:46.430509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.660 [2024-04-15 18:18:46.430555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.660 qpair failed and we were unable to recover it. 00:31:57.660 [2024-04-15 18:18:46.430697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.660 [2024-04-15 18:18:46.430865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.660 [2024-04-15 18:18:46.430895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.660 qpair failed and we were unable to recover it. 
00:31:57.660 [2024-04-15 18:18:46.431070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.660 [2024-04-15 18:18:46.431215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.660 [2024-04-15 18:18:46.431244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.660 qpair failed and we were unable to recover it. 00:31:57.660 [2024-04-15 18:18:46.431411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.660 [2024-04-15 18:18:46.431574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.660 [2024-04-15 18:18:46.431603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.660 qpair failed and we were unable to recover it. 00:31:57.661 [2024-04-15 18:18:46.431768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.661 [2024-04-15 18:18:46.431909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.661 [2024-04-15 18:18:46.431937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.661 qpair failed and we were unable to recover it. 00:31:57.661 [2024-04-15 18:18:46.432109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.661 [2024-04-15 18:18:46.432280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.661 [2024-04-15 18:18:46.432309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.661 qpair failed and we were unable to recover it. 00:31:57.661 [2024-04-15 18:18:46.432457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.661 [2024-04-15 18:18:46.432641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.661 [2024-04-15 18:18:46.432670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.661 qpair failed and we were unable to recover it. 00:31:57.661 [2024-04-15 18:18:46.432839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.661 [2024-04-15 18:18:46.433011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.661 [2024-04-15 18:18:46.433041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.661 qpair failed and we were unable to recover it. 00:31:57.661 [2024-04-15 18:18:46.433187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.661 [2024-04-15 18:18:46.433336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.661 [2024-04-15 18:18:46.433366] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.661 qpair failed and we were unable to recover it. 
00:31:57.661 [2024-04-15 18:18:46.433531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.661 [2024-04-15 18:18:46.433703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.661 [2024-04-15 18:18:46.433732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.661 qpair failed and we were unable to recover it. 00:31:57.661 [2024-04-15 18:18:46.433899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.661 [2024-04-15 18:18:46.434070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.661 [2024-04-15 18:18:46.434100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.661 qpair failed and we were unable to recover it. 00:31:57.661 [2024-04-15 18:18:46.434239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.661 [2024-04-15 18:18:46.434416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.661 [2024-04-15 18:18:46.434446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.661 qpair failed and we were unable to recover it. 00:31:57.661 [2024-04-15 18:18:46.434616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.661 [2024-04-15 18:18:46.434755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.661 [2024-04-15 18:18:46.434784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.661 qpair failed and we were unable to recover it. 00:31:57.661 [2024-04-15 18:18:46.434966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.661 [2024-04-15 18:18:46.435129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.661 [2024-04-15 18:18:46.435159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.661 qpair failed and we were unable to recover it. 00:31:57.661 [2024-04-15 18:18:46.435329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.661 [2024-04-15 18:18:46.435483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.661 [2024-04-15 18:18:46.435512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.661 qpair failed and we were unable to recover it. 00:31:57.661 [2024-04-15 18:18:46.435726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.661 [2024-04-15 18:18:46.435938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.661 [2024-04-15 18:18:46.435971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.661 qpair failed and we were unable to recover it. 
00:31:57.661 [2024-04-15 18:18:46.436142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.661 [2024-04-15 18:18:46.436305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.661 [2024-04-15 18:18:46.436333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.661 qpair failed and we were unable to recover it. 00:31:57.661 [2024-04-15 18:18:46.436529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.661 [2024-04-15 18:18:46.436666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.661 [2024-04-15 18:18:46.436694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.661 qpair failed and we were unable to recover it. 00:31:57.661 [2024-04-15 18:18:46.436834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.661 [2024-04-15 18:18:46.437001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.661 [2024-04-15 18:18:46.437030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.661 qpair failed and we were unable to recover it. 00:31:57.661 [2024-04-15 18:18:46.437226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.661 [2024-04-15 18:18:46.437438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.661 [2024-04-15 18:18:46.437489] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.661 qpair failed and we were unable to recover it. 00:31:57.661 [2024-04-15 18:18:46.437673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.661 [2024-04-15 18:18:46.437869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.661 [2024-04-15 18:18:46.437898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.661 qpair failed and we were unable to recover it. 00:31:57.661 [2024-04-15 18:18:46.438096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.661 [2024-04-15 18:18:46.438248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.661 [2024-04-15 18:18:46.438278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.661 qpair failed and we were unable to recover it. 00:31:57.661 [2024-04-15 18:18:46.438465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.661 [2024-04-15 18:18:46.438629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.661 [2024-04-15 18:18:46.438658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.661 qpair failed and we were unable to recover it. 
00:31:57.661 [2024-04-15 18:18:46.438804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.661 [2024-04-15 18:18:46.438948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.661 [2024-04-15 18:18:46.438977] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.661 qpair failed and we were unable to recover it. 00:31:57.661 [2024-04-15 18:18:46.439153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.661 [2024-04-15 18:18:46.439298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.661 [2024-04-15 18:18:46.439328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.661 qpair failed and we were unable to recover it. 00:31:57.661 [2024-04-15 18:18:46.439523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.661 [2024-04-15 18:18:46.439687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.661 [2024-04-15 18:18:46.439721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.661 qpair failed and we were unable to recover it. 00:31:57.661 [2024-04-15 18:18:46.439887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.661 [2024-04-15 18:18:46.440051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.661 [2024-04-15 18:18:46.440098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.661 qpair failed and we were unable to recover it. 00:31:57.661 [2024-04-15 18:18:46.440295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.661 [2024-04-15 18:18:46.440488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.661 [2024-04-15 18:18:46.440518] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.661 qpair failed and we were unable to recover it. 00:31:57.661 [2024-04-15 18:18:46.440685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.661 [2024-04-15 18:18:46.440881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.661 [2024-04-15 18:18:46.440910] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.661 qpair failed and we were unable to recover it. 00:31:57.661 [2024-04-15 18:18:46.441084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.661 [2024-04-15 18:18:46.441255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.661 [2024-04-15 18:18:46.441284] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.661 qpair failed and we were unable to recover it. 
00:31:57.661 [2024-04-15 18:18:46.441417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.661 [2024-04-15 18:18:46.441578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.661 [2024-04-15 18:18:46.441608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.661 qpair failed and we were unable to recover it. 00:31:57.661 [2024-04-15 18:18:46.441779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.661 [2024-04-15 18:18:46.441946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.661 [2024-04-15 18:18:46.441975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.661 qpair failed and we were unable to recover it. 00:31:57.661 [2024-04-15 18:18:46.442140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.661 [2024-04-15 18:18:46.442306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.661 [2024-04-15 18:18:46.442336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.661 qpair failed and we were unable to recover it. 00:31:57.662 [2024-04-15 18:18:46.442496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.662 [2024-04-15 18:18:46.442659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.662 [2024-04-15 18:18:46.442688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.662 qpair failed and we were unable to recover it. 00:31:57.662 [2024-04-15 18:18:46.442820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.662 [2024-04-15 18:18:46.442955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.662 [2024-04-15 18:18:46.442984] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.662 qpair failed and we were unable to recover it. 00:31:57.662 [2024-04-15 18:18:46.443133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.662 [2024-04-15 18:18:46.443296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.662 [2024-04-15 18:18:46.443330] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.662 qpair failed and we were unable to recover it. 00:31:57.662 [2024-04-15 18:18:46.443493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.662 [2024-04-15 18:18:46.443660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.662 [2024-04-15 18:18:46.443689] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.662 qpair failed and we were unable to recover it. 
00:31:57.662 [2024-04-15 18:18:46.443884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.662 [2024-04-15 18:18:46.444078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.662 [2024-04-15 18:18:46.444108] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.662 qpair failed and we were unable to recover it. 00:31:57.662 [2024-04-15 18:18:46.444270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.662 [2024-04-15 18:18:46.444464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.662 [2024-04-15 18:18:46.444493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.662 qpair failed and we were unable to recover it. 00:31:57.662 [2024-04-15 18:18:46.444689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.662 [2024-04-15 18:18:46.444862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.662 [2024-04-15 18:18:46.444891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.662 qpair failed and we were unable to recover it. 00:31:57.662 [2024-04-15 18:18:46.445064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.662 [2024-04-15 18:18:46.445237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.662 [2024-04-15 18:18:46.445266] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.662 qpair failed and we were unable to recover it. 00:31:57.662 [2024-04-15 18:18:46.445436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.662 [2024-04-15 18:18:46.445602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.662 [2024-04-15 18:18:46.445631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.662 qpair failed and we were unable to recover it. 00:31:57.662 [2024-04-15 18:18:46.445812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.662 [2024-04-15 18:18:46.445955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.662 [2024-04-15 18:18:46.445984] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.662 qpair failed and we were unable to recover it. 00:31:57.662 [2024-04-15 18:18:46.446149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.662 [2024-04-15 18:18:46.446307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.662 [2024-04-15 18:18:46.446336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.662 qpair failed and we were unable to recover it. 
00:31:57.662 [2024-04-15 18:18:46.446536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.662 [2024-04-15 18:18:46.446702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.662 [2024-04-15 18:18:46.446732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.662 qpair failed and we were unable to recover it. 00:31:57.662 [2024-04-15 18:18:46.446898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.662 [2024-04-15 18:18:46.447091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.662 [2024-04-15 18:18:46.447125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.662 qpair failed and we were unable to recover it. 00:31:57.662 [2024-04-15 18:18:46.447268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.662 [2024-04-15 18:18:46.447432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.662 [2024-04-15 18:18:46.447461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.662 qpair failed and we were unable to recover it. 00:31:57.662 [2024-04-15 18:18:46.447652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.662 [2024-04-15 18:18:46.447830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.662 [2024-04-15 18:18:46.447860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.662 qpair failed and we were unable to recover it. 00:31:57.662 [2024-04-15 18:18:46.448026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.662 [2024-04-15 18:18:46.448205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.662 [2024-04-15 18:18:46.448235] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.662 qpair failed and we were unable to recover it. 00:31:57.662 [2024-04-15 18:18:46.448404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.662 [2024-04-15 18:18:46.448543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.662 [2024-04-15 18:18:46.448572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.662 qpair failed and we were unable to recover it. 00:31:57.662 [2024-04-15 18:18:46.448767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.662 [2024-04-15 18:18:46.448930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.662 [2024-04-15 18:18:46.448960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.662 qpair failed and we were unable to recover it. 
00:31:57.662 [2024-04-15 18:18:46.449123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.662 [2024-04-15 18:18:46.449289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.662 [2024-04-15 18:18:46.449319] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.662 qpair failed and we were unable to recover it. 00:31:57.662 [2024-04-15 18:18:46.449513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.662 [2024-04-15 18:18:46.449682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.662 [2024-04-15 18:18:46.449711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.662 qpair failed and we were unable to recover it. 00:31:57.662 [2024-04-15 18:18:46.449846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.662 [2024-04-15 18:18:46.450035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.662 [2024-04-15 18:18:46.450069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.662 qpair failed and we were unable to recover it. 00:31:57.662 [2024-04-15 18:18:46.450214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.662 [2024-04-15 18:18:46.450357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.662 [2024-04-15 18:18:46.450386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.662 qpair failed and we were unable to recover it. 00:31:57.662 [2024-04-15 18:18:46.450553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.662 [2024-04-15 18:18:46.450692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.662 [2024-04-15 18:18:46.450721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.662 qpair failed and we were unable to recover it. 00:31:57.662 [2024-04-15 18:18:46.450867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.662 [2024-04-15 18:18:46.451068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.662 [2024-04-15 18:18:46.451098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.662 qpair failed and we were unable to recover it. 00:31:57.662 [2024-04-15 18:18:46.451262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.662 [2024-04-15 18:18:46.451427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.662 [2024-04-15 18:18:46.451456] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.662 qpair failed and we were unable to recover it. 
00:31:57.662 [2024-04-15 18:18:46.451618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.662 [2024-04-15 18:18:46.451753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.662 [2024-04-15 18:18:46.451783] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.662 qpair failed and we were unable to recover it. 00:31:57.662 [2024-04-15 18:18:46.451919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.662 [2024-04-15 18:18:46.452091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.662 [2024-04-15 18:18:46.452121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.662 qpair failed and we were unable to recover it. 00:31:57.662 [2024-04-15 18:18:46.452292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.662 [2024-04-15 18:18:46.452489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.662 [2024-04-15 18:18:46.452518] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.662 qpair failed and we were unable to recover it. 00:31:57.662 [2024-04-15 18:18:46.452710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.662 [2024-04-15 18:18:46.452869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.662 [2024-04-15 18:18:46.452898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.662 qpair failed and we were unable to recover it. 00:31:57.663 [2024-04-15 18:18:46.453036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.663 [2024-04-15 18:18:46.453213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.663 [2024-04-15 18:18:46.453243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.663 qpair failed and we were unable to recover it. 00:31:57.663 [2024-04-15 18:18:46.453415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.663 [2024-04-15 18:18:46.453576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.663 [2024-04-15 18:18:46.453605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.663 qpair failed and we were unable to recover it. 00:31:57.663 [2024-04-15 18:18:46.453775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.663 [2024-04-15 18:18:46.453970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.663 [2024-04-15 18:18:46.453999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.663 qpair failed and we were unable to recover it. 
00:31:57.663 [2024-04-15 18:18:46.454177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.663 [2024-04-15 18:18:46.454318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.663 [2024-04-15 18:18:46.454347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.663 qpair failed and we were unable to recover it. 00:31:57.663 [2024-04-15 18:18:46.454562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.663 [2024-04-15 18:18:46.454717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.663 [2024-04-15 18:18:46.454746] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.663 qpair failed and we were unable to recover it. 00:31:57.663 [2024-04-15 18:18:46.454939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.663 [2024-04-15 18:18:46.455117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.663 [2024-04-15 18:18:46.455146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.663 qpair failed and we were unable to recover it. 00:31:57.663 [2024-04-15 18:18:46.455314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.663 [2024-04-15 18:18:46.455482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.663 [2024-04-15 18:18:46.455510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.663 qpair failed and we were unable to recover it. 00:31:57.663 [2024-04-15 18:18:46.455674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.663 [2024-04-15 18:18:46.455845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.663 [2024-04-15 18:18:46.455874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.663 qpair failed and we were unable to recover it. 00:31:57.663 [2024-04-15 18:18:46.456038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.663 [2024-04-15 18:18:46.456212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.663 [2024-04-15 18:18:46.456242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.663 qpair failed and we were unable to recover it. 00:31:57.663 [2024-04-15 18:18:46.456393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.663 [2024-04-15 18:18:46.456585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.663 [2024-04-15 18:18:46.456614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.663 qpair failed and we were unable to recover it. 
00:31:57.663 [2024-04-15 18:18:46.456781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.663 [2024-04-15 18:18:46.456915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.663 [2024-04-15 18:18:46.456944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.663 qpair failed and we were unable to recover it. 00:31:57.663 [2024-04-15 18:18:46.457118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.663 [2024-04-15 18:18:46.457309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.663 [2024-04-15 18:18:46.457339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.663 qpair failed and we were unable to recover it. 00:31:57.663 [2024-04-15 18:18:46.457502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.663 [2024-04-15 18:18:46.457670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.663 [2024-04-15 18:18:46.457699] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.663 qpair failed and we were unable to recover it. 00:31:57.663 [2024-04-15 18:18:46.457837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.663 [2024-04-15 18:18:46.457974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.663 [2024-04-15 18:18:46.458003] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.663 qpair failed and we were unable to recover it. 00:31:57.663 [2024-04-15 18:18:46.458181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.663 [2024-04-15 18:18:46.458333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.663 [2024-04-15 18:18:46.458362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.663 qpair failed and we were unable to recover it. 00:31:57.663 [2024-04-15 18:18:46.458522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.663 [2024-04-15 18:18:46.458717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.663 [2024-04-15 18:18:46.458747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.663 qpair failed and we were unable to recover it. 00:31:57.663 [2024-04-15 18:18:46.458886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.663 [2024-04-15 18:18:46.459044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.663 [2024-04-15 18:18:46.459079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.663 qpair failed and we were unable to recover it. 
00:31:57.663 [2024-04-15 18:18:46.459222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.663 [2024-04-15 18:18:46.459387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.663 [2024-04-15 18:18:46.459416] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.663 qpair failed and we were unable to recover it. 00:31:57.663 [2024-04-15 18:18:46.459582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.663 [2024-04-15 18:18:46.459752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.663 [2024-04-15 18:18:46.459781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.663 qpair failed and we were unable to recover it. 00:31:57.663 [2024-04-15 18:18:46.459945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.663 [2024-04-15 18:18:46.460110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.663 [2024-04-15 18:18:46.460140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.663 qpair failed and we were unable to recover it. 00:31:57.663 [2024-04-15 18:18:46.460309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.663 [2024-04-15 18:18:46.460501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.663 [2024-04-15 18:18:46.460530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.663 qpair failed and we were unable to recover it. 00:31:57.663 [2024-04-15 18:18:46.460718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.663 [2024-04-15 18:18:46.460884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.663 [2024-04-15 18:18:46.460913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.663 qpair failed and we were unable to recover it. 00:31:57.663 [2024-04-15 18:18:46.461114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.663 [2024-04-15 18:18:46.461287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.663 [2024-04-15 18:18:46.461316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.663 qpair failed and we were unable to recover it. 00:31:57.663 [2024-04-15 18:18:46.461521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.663 [2024-04-15 18:18:46.461683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.663 [2024-04-15 18:18:46.461711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.663 qpair failed and we were unable to recover it. 
00:31:57.663 [2024-04-15 18:18:46.461881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.663 [2024-04-15 18:18:46.462018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.663 [2024-04-15 18:18:46.462047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.663 qpair failed and we were unable to recover it. 00:31:57.663 [2024-04-15 18:18:46.462241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.663 [2024-04-15 18:18:46.462434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.663 [2024-04-15 18:18:46.462464] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.663 qpair failed and we were unable to recover it. 00:31:57.663 [2024-04-15 18:18:46.462606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.663 [2024-04-15 18:18:46.462764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.663 [2024-04-15 18:18:46.462793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.663 qpair failed and we were unable to recover it. 00:31:57.663 [2024-04-15 18:18:46.462930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.663 [2024-04-15 18:18:46.463106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.663 [2024-04-15 18:18:46.463136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.663 qpair failed and we were unable to recover it. 00:31:57.663 [2024-04-15 18:18:46.463276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.663 [2024-04-15 18:18:46.463476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.663 [2024-04-15 18:18:46.463505] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.663 qpair failed and we were unable to recover it. 00:31:57.663 [2024-04-15 18:18:46.463671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.663 [2024-04-15 18:18:46.463862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.663 [2024-04-15 18:18:46.463891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.663 qpair failed and we were unable to recover it. 00:31:57.663 [2024-04-15 18:18:46.464030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.663 [2024-04-15 18:18:46.464229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.664 [2024-04-15 18:18:46.464259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.664 qpair failed and we were unable to recover it. 
00:31:57.664 [2024-04-15 18:18:46.464402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.664 [2024-04-15 18:18:46.464568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.664 [2024-04-15 18:18:46.464597] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.664 qpair failed and we were unable to recover it. 00:31:57.664 [2024-04-15 18:18:46.464762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.664 [2024-04-15 18:18:46.464903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.664 [2024-04-15 18:18:46.464932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.664 qpair failed and we were unable to recover it. 00:31:57.664 [2024-04-15 18:18:46.465106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.664 [2024-04-15 18:18:46.465274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.664 [2024-04-15 18:18:46.465303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.664 qpair failed and we were unable to recover it. 00:31:57.664 [2024-04-15 18:18:46.465473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.664 [2024-04-15 18:18:46.465638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.664 [2024-04-15 18:18:46.465667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.664 qpair failed and we were unable to recover it. 00:31:57.664 [2024-04-15 18:18:46.465861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.664 [2024-04-15 18:18:46.466028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.664 [2024-04-15 18:18:46.466063] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.664 qpair failed and we were unable to recover it. 00:31:57.664 [2024-04-15 18:18:46.466232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.664 [2024-04-15 18:18:46.466462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.664 [2024-04-15 18:18:46.466491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.664 qpair failed and we were unable to recover it. 00:31:57.664 [2024-04-15 18:18:46.466685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.664 [2024-04-15 18:18:46.466877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.664 [2024-04-15 18:18:46.466906] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.664 qpair failed and we were unable to recover it. 
00:31:57.664 [2024-04-15 18:18:46.467070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.664 [2024-04-15 18:18:46.467241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.664 [2024-04-15 18:18:46.467271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.664 qpair failed and we were unable to recover it. 00:31:57.664 [2024-04-15 18:18:46.467427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.664 [2024-04-15 18:18:46.467591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.664 [2024-04-15 18:18:46.467620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.664 qpair failed and we were unable to recover it. 00:31:57.664 [2024-04-15 18:18:46.467792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.664 [2024-04-15 18:18:46.467959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.664 [2024-04-15 18:18:46.467988] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.664 qpair failed and we were unable to recover it. 00:31:57.664 [2024-04-15 18:18:46.468129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.664 [2024-04-15 18:18:46.468322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.664 [2024-04-15 18:18:46.468351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.664 qpair failed and we were unable to recover it. 00:31:57.664 [2024-04-15 18:18:46.468485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.664 [2024-04-15 18:18:46.468654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.664 [2024-04-15 18:18:46.468683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.664 qpair failed and we were unable to recover it. 00:31:57.664 [2024-04-15 18:18:46.468840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.664 [2024-04-15 18:18:46.469001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.664 [2024-04-15 18:18:46.469030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.664 qpair failed and we were unable to recover it. 00:31:57.664 [2024-04-15 18:18:46.469198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.664 [2024-04-15 18:18:46.469402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.664 [2024-04-15 18:18:46.469430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.664 qpair failed and we were unable to recover it. 
00:31:57.664 [2024-04-15 18:18:46.469598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.664 [2024-04-15 18:18:46.469797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.664 [2024-04-15 18:18:46.469825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.664 qpair failed and we were unable to recover it.
00:31:57.664 [... the same record repeats continuously from 18:18:46.470 through 18:18:46.513: two posix_sock_create connect() failures with errno = 111, one nvme_tcp_qpair_connect_sock error for tqpair=0x7f72e4000b90 (addr=10.0.0.2, port=4420), then "qpair failed and we were unable to recover it." ...]
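The errno = 111 in these records is ECONNREFUSED on Linux: posix_sock_create() on the host side keeps calling connect() toward 10.0.0.2:4420 while nothing is listening there, so every new qpair attempt in nvme_tcp_qpair_connect_sock() fails and is abandoned. As a rough illustration of that failure loop (a minimal standalone sketch, not SPDK's actual sock layer; the helper name, retry budget, and 100 ms back-off are invented for the example):

```c
/* Minimal sketch of a connect()-with-retry loop hitting ECONNREFUSED
 * (errno 111) while the listener at 10.0.0.2:4420 is down. Illustrative
 * only: connect_with_retry, the retry budget, and the back-off are
 * assumptions, not SPDK code. */
#include <arpa/inet.h>
#include <errno.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

static int connect_with_retry(const char *addr, uint16_t port, int max_retries)
{
    struct sockaddr_in sa = { .sin_family = AF_INET, .sin_port = htons(port) };

    if (inet_pton(AF_INET, addr, &sa.sin_addr) != 1)
        return -1;

    for (int attempt = 0; attempt < max_retries; attempt++) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;
        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) == 0)
            return fd;                      /* listener came back: success */
        int err = errno;
        close(fd);
        if (err != ECONNREFUSED)
            return -1;                      /* some other failure: give up */
        fprintf(stderr, "connect() failed, errno = %d\n", err);
        usleep(100 * 1000);                 /* wait 100 ms before retrying */
    }
    return -1;                              /* "unable to recover it" */
}

int main(void)
{
    int fd = connect_with_retry("10.0.0.2", 4420, 50);
    if (fd < 0)
        return 1;
    close(fd);
    return 0;
}
```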
00:31:57.668 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 44: 3462365 Killed "${NVMF_APP[@]}" "$@"
00:31:57.668 18:18:46 -- host/target_disconnect.sh@56 -- # disconnect_init 10.0.0.2
00:31:57.668 18:18:46 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:31:57.668 18:18:46 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:31:57.668 18:18:46 -- common/autotest_common.sh@710 -- # xtrace_disable
00:31:57.668 18:18:46 -- common/autotest_common.sh@10 -- # set +x
00:31:57.668 18:18:46 -- nvmf/common.sh@470 -- # nvmfpid=3462932
00:31:57.668 18:18:46 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:31:57.668 18:18:46 -- nvmf/common.sh@471 -- # waitforlisten 3462932
00:31:57.668 18:18:46 -- common/autotest_common.sh@817 -- # '[' -z 3462932 ']'
00:31:57.668 18:18:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:31:57.669 18:18:46 -- common/autotest_common.sh@822 -- # local max_retries=100
00:31:57.669 18:18:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:31:57.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:31:57.669 18:18:46 -- common/autotest_common.sh@826 -- # xtrace_disable
00:31:57.669 18:18:46 -- common/autotest_common.sh@10 -- # set +x
00:31:57.668 [2024-04-15 18:18:46.513714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 [... the connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock records for tqpair=0x7f72e4000b90 (addr=10.0.0.2, port=4420), each ending "qpair failed and we were unable to recover it.", continue interleaved with the shell trace above from 18:18:46.513 through 18:18:46.520 ...]
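Taken together, the xtrace lines explain the refused connects and the recovery path: target_disconnect.sh killed the running target (the "Killed" line for PID 3462365), disconnect_init/nvmfappstart relaunch nvmf_tgt inside the cvl_0_0_ns_spdk network namespace as PID 3462932, and waitforlisten then polls, up to max_retries=100 times, until the new process accepts on the RPC socket /var/tmp/spdk.sock. A hedged sketch of that polling idea in C (the real waitforlisten is a shell helper in autotest_common.sh; this standalone program only illustrates the wait loop):

```c
/* Illustrative wait-for-listen loop: try to connect to the UNIX-domain
 * RPC socket once per second until it accepts or the retry budget is
 * spent. The function name and 1 s interval are assumptions. */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

static int wait_for_listen(const char *path, int max_retries)
{
    struct sockaddr_un sa = { .sun_family = AF_UNIX };

    strncpy(sa.sun_path, path, sizeof(sa.sun_path) - 1);

    for (int attempt = 0; attempt < max_retries; attempt++) {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;
        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) == 0) {
            close(fd);
            return 0;      /* target is up and listening on the socket */
        }
        close(fd);         /* ENOENT or ECONNREFUSED until the app is ready */
        sleep(1);
    }
    return -1;
}

int main(void)
{
    puts("Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...");
    return wait_for_listen("/var/tmp/spdk.sock", 100) == 0 ? 0 : 1;
}
```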
00:31:57.672 [2024-04-15 18:18:46.564156] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization...
00:31:57.672 [2024-04-15 18:18:46.564232] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
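The two initialization lines above record how the nvmf_tgt flags from the launch trace map onto DPDK EAL arguments: -m 0xF0 becomes the EAL core mask -c 0xF0 (cores 4-7), and -i 0 becomes --file-prefix=spdk0 for the hugepage shared-memory files. A minimal sketch of the same launch outside the test harness, with build/bin and scripts/ paths assumed relative to an SPDK checkout:

    # Sketch: start nvmf_tgt with the core mask / instance ID seen above and
    # wait for framework initialization (paths are assumptions, not from the log).
    sudo build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
    tgt_pid=$!
    # framework_wait_init returns once EAL and the SPDK subsystems are ready.
    scripts/rpc.py framework_wait_init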
00:31:57.673 [2024-04-15 18:18:46.572787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.673 [2024-04-15 18:18:46.572986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.673 [2024-04-15 18:18:46.573015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.673 qpair failed and we were unable to recover it. 00:31:57.673 [2024-04-15 18:18:46.573211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.673 [2024-04-15 18:18:46.573440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.673 [2024-04-15 18:18:46.573468] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.673 qpair failed and we were unable to recover it. 00:31:57.673 [2024-04-15 18:18:46.573664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.673 [2024-04-15 18:18:46.573879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.673 [2024-04-15 18:18:46.573908] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.673 qpair failed and we were unable to recover it. 00:31:57.673 [2024-04-15 18:18:46.574054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.673 [2024-04-15 18:18:46.574226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.673 [2024-04-15 18:18:46.574255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.673 qpair failed and we were unable to recover it. 00:31:57.673 [2024-04-15 18:18:46.574417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.673 [2024-04-15 18:18:46.574581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.673 [2024-04-15 18:18:46.574609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.673 qpair failed and we were unable to recover it. 00:31:57.673 [2024-04-15 18:18:46.574774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.673 [2024-04-15 18:18:46.574968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.673 [2024-04-15 18:18:46.574996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.673 qpair failed and we were unable to recover it. 00:31:57.673 [2024-04-15 18:18:46.575221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.673 [2024-04-15 18:18:46.575410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.673 [2024-04-15 18:18:46.575438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.673 qpair failed and we were unable to recover it. 
00:31:57.673 [2024-04-15 18:18:46.575621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.673 [2024-04-15 18:18:46.575789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.673 [2024-04-15 18:18:46.575817] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.673 qpair failed and we were unable to recover it. 00:31:57.673 [2024-04-15 18:18:46.576009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.673 [2024-04-15 18:18:46.576211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.673 [2024-04-15 18:18:46.576241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.673 qpair failed and we were unable to recover it. 00:31:57.673 [2024-04-15 18:18:46.576494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.673 [2024-04-15 18:18:46.576723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.673 [2024-04-15 18:18:46.576752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.673 qpair failed and we were unable to recover it. 00:31:57.673 [2024-04-15 18:18:46.576969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.673 [2024-04-15 18:18:46.577112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.673 [2024-04-15 18:18:46.577141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.673 qpair failed and we were unable to recover it. 00:31:57.673 [2024-04-15 18:18:46.577289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.673 [2024-04-15 18:18:46.577429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.673 [2024-04-15 18:18:46.577457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.673 qpair failed and we were unable to recover it. 00:31:57.673 [2024-04-15 18:18:46.577616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.673 [2024-04-15 18:18:46.577818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.673 [2024-04-15 18:18:46.577846] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.673 qpair failed and we were unable to recover it. 00:31:57.673 [2024-04-15 18:18:46.578071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.673 [2024-04-15 18:18:46.578219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.673 [2024-04-15 18:18:46.578248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.673 qpair failed and we were unable to recover it. 
00:31:57.673 [2024-04-15 18:18:46.578441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.673 [2024-04-15 18:18:46.578611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.673 [2024-04-15 18:18:46.578639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.673 qpair failed and we were unable to recover it. 00:31:57.673 [2024-04-15 18:18:46.578830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.673 [2024-04-15 18:18:46.578963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.673 [2024-04-15 18:18:46.578991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.673 qpair failed and we were unable to recover it. 00:31:57.673 [2024-04-15 18:18:46.579122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.673 [2024-04-15 18:18:46.579317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.673 [2024-04-15 18:18:46.579345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.673 qpair failed and we were unable to recover it. 00:31:57.673 [2024-04-15 18:18:46.579546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.673 [2024-04-15 18:18:46.579724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.674 [2024-04-15 18:18:46.579753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.674 qpair failed and we were unable to recover it. 00:31:57.674 [2024-04-15 18:18:46.579935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.674 [2024-04-15 18:18:46.580097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.674 [2024-04-15 18:18:46.580126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.674 qpair failed and we were unable to recover it. 00:31:57.674 [2024-04-15 18:18:46.580321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.674 [2024-04-15 18:18:46.580453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.674 [2024-04-15 18:18:46.580481] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.674 qpair failed and we were unable to recover it. 00:31:57.674 [2024-04-15 18:18:46.580651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.674 [2024-04-15 18:18:46.580812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.674 [2024-04-15 18:18:46.580841] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.674 qpair failed and we were unable to recover it. 
00:31:57.674 [2024-04-15 18:18:46.581041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.674 [2024-04-15 18:18:46.581240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.674 [2024-04-15 18:18:46.581269] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.674 qpair failed and we were unable to recover it. 00:31:57.674 [2024-04-15 18:18:46.581490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.674 [2024-04-15 18:18:46.581696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.674 [2024-04-15 18:18:46.581724] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.674 qpair failed and we were unable to recover it. 00:31:57.674 [2024-04-15 18:18:46.581896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.674 [2024-04-15 18:18:46.582077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.674 [2024-04-15 18:18:46.582105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.674 qpair failed and we were unable to recover it. 00:31:57.674 [2024-04-15 18:18:46.582347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.674 [2024-04-15 18:18:46.582569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.674 [2024-04-15 18:18:46.582597] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.674 qpair failed and we were unable to recover it. 00:31:57.674 [2024-04-15 18:18:46.582910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.674 [2024-04-15 18:18:46.583216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.674 [2024-04-15 18:18:46.583245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.674 qpair failed and we were unable to recover it. 00:31:57.674 [2024-04-15 18:18:46.583508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.674 [2024-04-15 18:18:46.583716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.674 [2024-04-15 18:18:46.583745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.674 qpair failed and we were unable to recover it. 00:31:57.674 [2024-04-15 18:18:46.583951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.944 [2024-04-15 18:18:46.584115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.944 [2024-04-15 18:18:46.584145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.944 qpair failed and we were unable to recover it. 
00:31:57.944 [2024-04-15 18:18:46.584343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.944 [2024-04-15 18:18:46.584536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.944 [2024-04-15 18:18:46.584565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.944 qpair failed and we were unable to recover it. 00:31:57.944 [2024-04-15 18:18:46.584765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.944 [2024-04-15 18:18:46.584958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.944 [2024-04-15 18:18:46.584986] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.944 qpair failed and we were unable to recover it. 00:31:57.944 [2024-04-15 18:18:46.585129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.944 [2024-04-15 18:18:46.585305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.944 [2024-04-15 18:18:46.585333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.944 qpair failed and we were unable to recover it. 00:31:57.944 [2024-04-15 18:18:46.585504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.944 [2024-04-15 18:18:46.585665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.944 [2024-04-15 18:18:46.585693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.944 qpair failed and we were unable to recover it. 00:31:57.944 [2024-04-15 18:18:46.585889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.944 [2024-04-15 18:18:46.586050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.944 [2024-04-15 18:18:46.586087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.944 qpair failed and we were unable to recover it. 00:31:57.944 [2024-04-15 18:18:46.586280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.944 [2024-04-15 18:18:46.586443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.944 [2024-04-15 18:18:46.586472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.944 qpair failed and we were unable to recover it. 00:31:57.944 [2024-04-15 18:18:46.586638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.944 [2024-04-15 18:18:46.586804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.944 [2024-04-15 18:18:46.586833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.944 qpair failed and we were unable to recover it. 
00:31:57.944 [2024-04-15 18:18:46.587003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.944 [2024-04-15 18:18:46.587168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.944 [2024-04-15 18:18:46.587197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.945 qpair failed and we were unable to recover it. 00:31:57.945 [2024-04-15 18:18:46.587398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.945 [2024-04-15 18:18:46.587539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.945 [2024-04-15 18:18:46.587567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.945 qpair failed and we were unable to recover it. 00:31:57.945 [2024-04-15 18:18:46.587700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.945 [2024-04-15 18:18:46.587861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.945 [2024-04-15 18:18:46.587889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.945 qpair failed and we were unable to recover it. 00:31:57.945 [2024-04-15 18:18:46.588025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.945 [2024-04-15 18:18:46.588211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.945 [2024-04-15 18:18:46.588240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.945 qpair failed and we were unable to recover it. 00:31:57.945 [2024-04-15 18:18:46.588399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.945 [2024-04-15 18:18:46.588564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.945 [2024-04-15 18:18:46.588592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.945 qpair failed and we were unable to recover it. 00:31:57.945 [2024-04-15 18:18:46.588756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.945 [2024-04-15 18:18:46.588891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.945 [2024-04-15 18:18:46.588919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.945 qpair failed and we were unable to recover it. 00:31:57.945 [2024-04-15 18:18:46.589091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.945 [2024-04-15 18:18:46.589255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.945 [2024-04-15 18:18:46.589283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.945 qpair failed and we were unable to recover it. 
00:31:57.945 [2024-04-15 18:18:46.589423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.945 [2024-04-15 18:18:46.589562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.945 [2024-04-15 18:18:46.589591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.945 qpair failed and we were unable to recover it. 00:31:57.945 [2024-04-15 18:18:46.589783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.945 [2024-04-15 18:18:46.589972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.945 [2024-04-15 18:18:46.590001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.945 qpair failed and we were unable to recover it. 00:31:57.945 [2024-04-15 18:18:46.590298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.945 [2024-04-15 18:18:46.590507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.945 [2024-04-15 18:18:46.590536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.945 qpair failed and we were unable to recover it. 00:31:57.945 [2024-04-15 18:18:46.590768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.945 [2024-04-15 18:18:46.591087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.945 [2024-04-15 18:18:46.591116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.945 qpair failed and we were unable to recover it. 00:31:57.945 [2024-04-15 18:18:46.591311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.945 [2024-04-15 18:18:46.591619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.945 [2024-04-15 18:18:46.591647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.945 qpair failed and we were unable to recover it. 00:31:57.945 [2024-04-15 18:18:46.591925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.945 [2024-04-15 18:18:46.592085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.945 [2024-04-15 18:18:46.592114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.945 qpair failed and we were unable to recover it. 00:31:57.945 [2024-04-15 18:18:46.592303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.945 [2024-04-15 18:18:46.592435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.945 [2024-04-15 18:18:46.592464] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.945 qpair failed and we were unable to recover it. 
00:31:57.945 [2024-04-15 18:18:46.592657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.945 [2024-04-15 18:18:46.592790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.945 [2024-04-15 18:18:46.592818] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.945 qpair failed and we were unable to recover it. 00:31:57.945 [2024-04-15 18:18:46.592981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.945 [2024-04-15 18:18:46.593143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.945 [2024-04-15 18:18:46.593172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.945 qpair failed and we were unable to recover it. 00:31:57.945 [2024-04-15 18:18:46.593367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.945 [2024-04-15 18:18:46.593532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.945 [2024-04-15 18:18:46.593561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.945 qpair failed and we were unable to recover it. 00:31:57.945 [2024-04-15 18:18:46.593719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.945 [2024-04-15 18:18:46.593907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.945 [2024-04-15 18:18:46.593936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.945 qpair failed and we were unable to recover it. 00:31:57.945 [2024-04-15 18:18:46.594166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.945 [2024-04-15 18:18:46.594361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.945 [2024-04-15 18:18:46.594390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.945 qpair failed and we were unable to recover it. 00:31:57.945 [2024-04-15 18:18:46.594563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.945 [2024-04-15 18:18:46.594780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.945 [2024-04-15 18:18:46.594809] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.945 qpair failed and we were unable to recover it. 00:31:57.945 [2024-04-15 18:18:46.595031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.945 [2024-04-15 18:18:46.595231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.945 [2024-04-15 18:18:46.595261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.945 qpair failed and we were unable to recover it. 
00:31:57.945 [2024-04-15 18:18:46.595539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.945 [2024-04-15 18:18:46.595811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.945 [2024-04-15 18:18:46.595839] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.945 qpair failed and we were unable to recover it. 00:31:57.945 [2024-04-15 18:18:46.596053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.945 [2024-04-15 18:18:46.596204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.945 [2024-04-15 18:18:46.596233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.945 qpair failed and we were unable to recover it. 00:31:57.945 [2024-04-15 18:18:46.596423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.945 [2024-04-15 18:18:46.596610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.945 [2024-04-15 18:18:46.596639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.945 qpair failed and we were unable to recover it. 00:31:57.945 [2024-04-15 18:18:46.596841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.945 [2024-04-15 18:18:46.597024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.945 [2024-04-15 18:18:46.597053] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.945 qpair failed and we were unable to recover it. 00:31:57.945 [2024-04-15 18:18:46.597262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.945 [2024-04-15 18:18:46.597480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.946 [2024-04-15 18:18:46.597508] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.946 qpair failed and we were unable to recover it. 00:31:57.946 [2024-04-15 18:18:46.597840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.946 [2024-04-15 18:18:46.598168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.946 [2024-04-15 18:18:46.598197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.946 qpair failed and we were unable to recover it. 00:31:57.946 [2024-04-15 18:18:46.598470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.946 [2024-04-15 18:18:46.598735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.946 [2024-04-15 18:18:46.598763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.946 qpair failed and we were unable to recover it. 
00:31:57.946 [2024-04-15 18:18:46.598959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.946 [2024-04-15 18:18:46.599124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.946 [2024-04-15 18:18:46.599153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.946 qpair failed and we were unable to recover it. 00:31:57.946 [2024-04-15 18:18:46.599556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.946 [2024-04-15 18:18:46.599747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.946 [2024-04-15 18:18:46.599776] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.946 qpair failed and we were unable to recover it. 00:31:57.946 [2024-04-15 18:18:46.599943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.946 [2024-04-15 18:18:46.600135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.946 [2024-04-15 18:18:46.600164] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.946 qpair failed and we were unable to recover it. 00:31:57.946 [2024-04-15 18:18:46.600327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.946 [2024-04-15 18:18:46.600515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.946 [2024-04-15 18:18:46.600544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.946 qpair failed and we were unable to recover it. 00:31:57.946 [2024-04-15 18:18:46.600760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.946 [2024-04-15 18:18:46.600954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.946 [2024-04-15 18:18:46.600982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.946 qpair failed and we were unable to recover it. 00:31:57.946 [2024-04-15 18:18:46.601210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.946 [2024-04-15 18:18:46.601402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.946 [2024-04-15 18:18:46.601430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.946 qpair failed and we were unable to recover it. 00:31:57.946 [2024-04-15 18:18:46.601629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.946 [2024-04-15 18:18:46.601790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.946 [2024-04-15 18:18:46.601819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.946 qpair failed and we were unable to recover it. 
00:31:57.946 [2024-04-15 18:18:46.602012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.946 [2024-04-15 18:18:46.602214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.946 [2024-04-15 18:18:46.602244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.946 qpair failed and we were unable to recover it. 00:31:57.946 [2024-04-15 18:18:46.602458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.946 [2024-04-15 18:18:46.602639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.946 [2024-04-15 18:18:46.602667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.946 qpair failed and we were unable to recover it. 00:31:57.946 [2024-04-15 18:18:46.602995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.946 [2024-04-15 18:18:46.603162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.946 [2024-04-15 18:18:46.603191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.946 qpair failed and we were unable to recover it. 00:31:57.946 [2024-04-15 18:18:46.603381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.946 [2024-04-15 18:18:46.603595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.946 [2024-04-15 18:18:46.603624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.946 qpair failed and we were unable to recover it. 00:31:57.946 [2024-04-15 18:18:46.603851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.946 [2024-04-15 18:18:46.604014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.946 [2024-04-15 18:18:46.604042] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.946 qpair failed and we were unable to recover it. 00:31:57.946 [2024-04-15 18:18:46.604298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.946 [2024-04-15 18:18:46.604444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.946 [2024-04-15 18:18:46.604472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.946 qpair failed and we were unable to recover it. 00:31:57.946 [2024-04-15 18:18:46.604667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.946 [2024-04-15 18:18:46.604925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.946 [2024-04-15 18:18:46.604953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.946 qpair failed and we were unable to recover it. 
00:31:57.946 [2024-04-15 18:18:46.605281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.946 [2024-04-15 18:18:46.605601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.946 [2024-04-15 18:18:46.605630] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.946 qpair failed and we were unable to recover it.
00:31:57.946 [2024-04-15 18:18:46.605917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.946 [2024-04-15 18:18:46.606158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.946 [2024-04-15 18:18:46.606187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.946 qpair failed and we were unable to recover it.
00:31:57.946 [2024-04-15 18:18:46.606389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.946 [2024-04-15 18:18:46.606578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.946 [2024-04-15 18:18:46.606606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.946 qpair failed and we were unable to recover it.
00:31:57.946 [2024-04-15 18:18:46.606820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.946 EAL: No free 2048 kB hugepages reported on node 1
00:31:57.946 [2024-04-15 18:18:46.607013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.946 [2024-04-15 18:18:46.607041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.946 qpair failed and we were unable to recover it.
00:31:57.946 [2024-04-15 18:18:46.607221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.946 [2024-04-15 18:18:46.607381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.946 [2024-04-15 18:18:46.607409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.946 qpair failed and we were unable to recover it.
00:31:57.946 [2024-04-15 18:18:46.607581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.946 [2024-04-15 18:18:46.607746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.946 [2024-04-15 18:18:46.607774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.946 qpair failed and we were unable to recover it.
00:31:57.946 [2024-04-15 18:18:46.607931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.946 [2024-04-15 18:18:46.608104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.946 [2024-04-15 18:18:46.608133] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.946 qpair failed and we were unable to recover it.
00:31:57.946 [2024-04-15 18:18:46.608298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.946 [2024-04-15 18:18:46.608462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.946 [2024-04-15 18:18:46.608491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.946 qpair failed and we were unable to recover it. 00:31:57.946 [2024-04-15 18:18:46.608649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.946 [2024-04-15 18:18:46.608791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.946 [2024-04-15 18:18:46.608819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.946 qpair failed and we were unable to recover it. 00:31:57.946 [2024-04-15 18:18:46.609011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.946 [2024-04-15 18:18:46.609208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.946 [2024-04-15 18:18:46.609237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.946 qpair failed and we were unable to recover it. 00:31:57.946 [2024-04-15 18:18:46.609437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.946 [2024-04-15 18:18:46.609599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.947 [2024-04-15 18:18:46.609627] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.947 qpair failed and we were unable to recover it. 00:31:57.947 [2024-04-15 18:18:46.609815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.947 [2024-04-15 18:18:46.610002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.947 [2024-04-15 18:18:46.610030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.947 qpair failed and we were unable to recover it. 00:31:57.947 [2024-04-15 18:18:46.610256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.947 [2024-04-15 18:18:46.610454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.947 [2024-04-15 18:18:46.610485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.947 qpair failed and we were unable to recover it. 00:31:57.947 [2024-04-15 18:18:46.610685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.947 [2024-04-15 18:18:46.610879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.947 [2024-04-15 18:18:46.610907] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.947 qpair failed and we were unable to recover it. 
00:31:57.947 [2024-04-15 18:18:46.611172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.947 [2024-04-15 18:18:46.611386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.947 [2024-04-15 18:18:46.611415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.947 qpair failed and we were unable to recover it. 00:31:57.947 [2024-04-15 18:18:46.611644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.947 [2024-04-15 18:18:46.611784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.947 [2024-04-15 18:18:46.611812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.947 qpair failed and we were unable to recover it. 00:31:57.947 [2024-04-15 18:18:46.612021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.947 [2024-04-15 18:18:46.612223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.947 [2024-04-15 18:18:46.612252] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.947 qpair failed and we were unable to recover it. 00:31:57.947 [2024-04-15 18:18:46.612533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.947 [2024-04-15 18:18:46.612703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.947 [2024-04-15 18:18:46.612732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.947 qpair failed and we were unable to recover it. 00:31:57.947 [2024-04-15 18:18:46.613018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.947 [2024-04-15 18:18:46.613234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.947 [2024-04-15 18:18:46.613263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.947 qpair failed and we were unable to recover it. 00:31:57.947 [2024-04-15 18:18:46.613430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.947 [2024-04-15 18:18:46.613621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.947 [2024-04-15 18:18:46.613649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.947 qpair failed and we were unable to recover it. 00:31:57.947 [2024-04-15 18:18:46.613844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.947 [2024-04-15 18:18:46.614013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.947 [2024-04-15 18:18:46.614041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.947 qpair failed and we were unable to recover it. 
00:31:57.947 [2024-04-15 18:18:46.614215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.947 [2024-04-15 18:18:46.614409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.947 [2024-04-15 18:18:46.614437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.947 qpair failed and we were unable to recover it. 00:31:57.947 [2024-04-15 18:18:46.614638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.947 [2024-04-15 18:18:46.614802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.947 [2024-04-15 18:18:46.614830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.947 qpair failed and we were unable to recover it. 00:31:57.947 [2024-04-15 18:18:46.614998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.947 [2024-04-15 18:18:46.615165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.947 [2024-04-15 18:18:46.615194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.947 qpair failed and we were unable to recover it. 00:31:57.947 [2024-04-15 18:18:46.615354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.947 [2024-04-15 18:18:46.615523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.947 [2024-04-15 18:18:46.615551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.947 qpair failed and we were unable to recover it. 00:31:57.947 [2024-04-15 18:18:46.615747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.947 [2024-04-15 18:18:46.615906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.947 [2024-04-15 18:18:46.615934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.947 qpair failed and we were unable to recover it. 00:31:57.947 [2024-04-15 18:18:46.616127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.947 [2024-04-15 18:18:46.616286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.947 [2024-04-15 18:18:46.616315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.947 qpair failed and we were unable to recover it. 00:31:57.947 [2024-04-15 18:18:46.616481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.947 [2024-04-15 18:18:46.616684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.947 [2024-04-15 18:18:46.616713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.947 qpair failed and we were unable to recover it. 
00:31:57.947 [2024-04-15 18:18:46.617026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.947 [2024-04-15 18:18:46.617255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.947 [2024-04-15 18:18:46.617284] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.947 qpair failed and we were unable to recover it. 00:31:57.947 [2024-04-15 18:18:46.617563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.947 [2024-04-15 18:18:46.617779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.947 [2024-04-15 18:18:46.617807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.947 qpair failed and we were unable to recover it. 00:31:57.947 [2024-04-15 18:18:46.617986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.947 [2024-04-15 18:18:46.618202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.947 [2024-04-15 18:18:46.618231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.947 qpair failed and we were unable to recover it. 00:31:57.947 [2024-04-15 18:18:46.618415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.947 [2024-04-15 18:18:46.618594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.947 [2024-04-15 18:18:46.618622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.947 qpair failed and we were unable to recover it. 00:31:57.947 [2024-04-15 18:18:46.618831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.947 [2024-04-15 18:18:46.619023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.947 [2024-04-15 18:18:46.619051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.947 qpair failed and we were unable to recover it. 00:31:57.947 [2024-04-15 18:18:46.619284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.947 [2024-04-15 18:18:46.619555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.947 [2024-04-15 18:18:46.619583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.947 qpair failed and we were unable to recover it. 00:31:57.947 [2024-04-15 18:18:46.619784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.947 [2024-04-15 18:18:46.619948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.947 [2024-04-15 18:18:46.619976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.947 qpair failed and we were unable to recover it. 
00:31:57.947 [2024-04-15 18:18:46.620174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.947 [2024-04-15 18:18:46.620317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.947 [2024-04-15 18:18:46.620346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.947 qpair failed and we were unable to recover it. 00:31:57.947 [2024-04-15 18:18:46.620538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.947 [2024-04-15 18:18:46.620670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.947 [2024-04-15 18:18:46.620698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.947 qpair failed and we were unable to recover it. 00:31:57.947 [2024-04-15 18:18:46.620890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.947 [2024-04-15 18:18:46.621052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.947 [2024-04-15 18:18:46.621096] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.947 qpair failed and we were unable to recover it. 00:31:57.947 [2024-04-15 18:18:46.621236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.947 [2024-04-15 18:18:46.621414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.948 [2024-04-15 18:18:46.621442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.948 qpair failed and we were unable to recover it. 00:31:57.948 [2024-04-15 18:18:46.621609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.948 [2024-04-15 18:18:46.621807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.948 [2024-04-15 18:18:46.621836] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.948 qpair failed and we were unable to recover it. 00:31:57.948 [2024-04-15 18:18:46.622076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.948 [2024-04-15 18:18:46.622246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.948 [2024-04-15 18:18:46.622275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.948 qpair failed and we were unable to recover it. 00:31:57.948 [2024-04-15 18:18:46.622463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.948 [2024-04-15 18:18:46.622658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.948 [2024-04-15 18:18:46.622687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.948 qpair failed and we were unable to recover it. 
00:31:57.948 [2024-04-15 18:18:46.622881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.948 [2024-04-15 18:18:46.623095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.948 [2024-04-15 18:18:46.623125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.948 qpair failed and we were unable to recover it. 00:31:57.948 [2024-04-15 18:18:46.623268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.948 [2024-04-15 18:18:46.623445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.948 [2024-04-15 18:18:46.623473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.948 qpair failed and we were unable to recover it. 00:31:57.948 [2024-04-15 18:18:46.623641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.948 [2024-04-15 18:18:46.623842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.948 [2024-04-15 18:18:46.623871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.948 qpair failed and we were unable to recover it. 00:31:57.948 [2024-04-15 18:18:46.624053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.948 [2024-04-15 18:18:46.624252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.948 [2024-04-15 18:18:46.624281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.948 qpair failed and we were unable to recover it. 00:31:57.948 [2024-04-15 18:18:46.624453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.948 [2024-04-15 18:18:46.624636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.948 [2024-04-15 18:18:46.624664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.948 qpair failed and we were unable to recover it. 00:31:57.948 [2024-04-15 18:18:46.624969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.948 [2024-04-15 18:18:46.625203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.948 [2024-04-15 18:18:46.625232] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.948 qpair failed and we were unable to recover it. 00:31:57.948 [2024-04-15 18:18:46.625475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.948 [2024-04-15 18:18:46.625693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.948 [2024-04-15 18:18:46.625721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.948 qpair failed and we were unable to recover it. 
00:31:57.948 [2024-04-15 18:18:46.625919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.948 [2024-04-15 18:18:46.626089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.948 [2024-04-15 18:18:46.626118] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.948 qpair failed and we were unable to recover it. 00:31:57.948 [2024-04-15 18:18:46.626308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.948 [2024-04-15 18:18:46.626476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.948 [2024-04-15 18:18:46.626504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.948 qpair failed and we were unable to recover it. 00:31:57.948 [2024-04-15 18:18:46.626697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.948 [2024-04-15 18:18:46.626861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.948 [2024-04-15 18:18:46.626890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.948 qpair failed and we were unable to recover it. 00:31:57.948 [2024-04-15 18:18:46.627052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.948 [2024-04-15 18:18:46.627251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.948 [2024-04-15 18:18:46.627279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.948 qpair failed and we were unable to recover it. 00:31:57.948 [2024-04-15 18:18:46.627485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.948 [2024-04-15 18:18:46.627630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.948 [2024-04-15 18:18:46.627659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.948 qpair failed and we were unable to recover it. 00:31:57.948 [2024-04-15 18:18:46.627880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.948 [2024-04-15 18:18:46.628041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.948 [2024-04-15 18:18:46.628080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.948 qpair failed and we were unable to recover it. 00:31:57.948 [2024-04-15 18:18:46.628249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.948 [2024-04-15 18:18:46.628411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.948 [2024-04-15 18:18:46.628439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.948 qpair failed and we were unable to recover it. 
00:31:57.948 [2024-04-15 18:18:46.628632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.948 [2024-04-15 18:18:46.628824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.948 [2024-04-15 18:18:46.628853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.948 qpair failed and we were unable to recover it. 00:31:57.948 [2024-04-15 18:18:46.629046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.948 [2024-04-15 18:18:46.629232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.948 [2024-04-15 18:18:46.629262] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.948 qpair failed and we were unable to recover it. 00:31:57.948 [2024-04-15 18:18:46.629482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.948 [2024-04-15 18:18:46.629652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.948 [2024-04-15 18:18:46.629680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.948 qpair failed and we were unable to recover it. 00:31:57.948 [2024-04-15 18:18:46.629871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.948 [2024-04-15 18:18:46.630073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.948 [2024-04-15 18:18:46.630102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.948 qpair failed and we were unable to recover it. 00:31:57.948 [2024-04-15 18:18:46.630237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.948 [2024-04-15 18:18:46.630429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.948 [2024-04-15 18:18:46.630457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.948 qpair failed and we were unable to recover it. 00:31:57.948 [2024-04-15 18:18:46.630671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.948 [2024-04-15 18:18:46.630841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.948 [2024-04-15 18:18:46.630869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.948 qpair failed and we were unable to recover it. 00:31:57.948 [2024-04-15 18:18:46.631122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.948 [2024-04-15 18:18:46.631353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.948 [2024-04-15 18:18:46.631382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.948 qpair failed and we were unable to recover it. 
00:31:57.948 [2024-04-15 18:18:46.631519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.948 [2024-04-15 18:18:46.631683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.948 [2024-04-15 18:18:46.631711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.948 qpair failed and we were unable to recover it. 00:31:57.948 [2024-04-15 18:18:46.631996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.948 [2024-04-15 18:18:46.632351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.948 [2024-04-15 18:18:46.632404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.948 qpair failed and we were unable to recover it. 00:31:57.948 [2024-04-15 18:18:46.632654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.949 [2024-04-15 18:18:46.632867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.949 [2024-04-15 18:18:46.632896] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.949 qpair failed and we were unable to recover it. 00:31:57.949 [2024-04-15 18:18:46.633127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.949 [2024-04-15 18:18:46.633271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.949 [2024-04-15 18:18:46.633299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.949 qpair failed and we were unable to recover it. 00:31:57.949 [2024-04-15 18:18:46.633492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.949 [2024-04-15 18:18:46.633686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.949 [2024-04-15 18:18:46.633715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.949 qpair failed and we were unable to recover it. 00:31:57.949 [2024-04-15 18:18:46.633959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.949 [2024-04-15 18:18:46.634171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.949 [2024-04-15 18:18:46.634201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.949 qpair failed and we were unable to recover it. 00:31:57.949 [2024-04-15 18:18:46.634422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.949 [2024-04-15 18:18:46.634623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.949 [2024-04-15 18:18:46.634652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.949 qpair failed and we were unable to recover it. 
00:31:57.949 [2024-04-15 18:18:46.634864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.949 [2024-04-15 18:18:46.635051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.949 [2024-04-15 18:18:46.635089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.949 qpair failed and we were unable to recover it. 00:31:57.949 [2024-04-15 18:18:46.635339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.949 [2024-04-15 18:18:46.635508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.949 [2024-04-15 18:18:46.635537] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.949 qpair failed and we were unable to recover it. 00:31:57.949 [2024-04-15 18:18:46.635780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.949 [2024-04-15 18:18:46.635991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.949 [2024-04-15 18:18:46.636019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.949 qpair failed and we were unable to recover it. 00:31:57.949 [2024-04-15 18:18:46.636329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.949 [2024-04-15 18:18:46.636605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.949 [2024-04-15 18:18:46.636634] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.949 qpair failed and we were unable to recover it. 00:31:57.949 [2024-04-15 18:18:46.636883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.949 [2024-04-15 18:18:46.637093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.949 [2024-04-15 18:18:46.637127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.949 qpair failed and we were unable to recover it. 00:31:57.949 [2024-04-15 18:18:46.637373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.949 [2024-04-15 18:18:46.637585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.949 [2024-04-15 18:18:46.637613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.949 qpair failed and we were unable to recover it. 00:31:57.949 [2024-04-15 18:18:46.637799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.949 [2024-04-15 18:18:46.637974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.949 [2024-04-15 18:18:46.638002] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.949 qpair failed and we were unable to recover it. 
00:31:57.949 [2024-04-15 18:18:46.638163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.949 [2024-04-15 18:18:46.638363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.949 [2024-04-15 18:18:46.638392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.949 qpair failed and we were unable to recover it. 00:31:57.949 [2024-04-15 18:18:46.638577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.949 [2024-04-15 18:18:46.638769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.949 [2024-04-15 18:18:46.638798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.949 qpair failed and we were unable to recover it. 00:31:57.949 [2024-04-15 18:18:46.639046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.949 [2024-04-15 18:18:46.639252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.949 [2024-04-15 18:18:46.639281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.949 qpair failed and we were unable to recover it. 00:31:57.949 [2024-04-15 18:18:46.639452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.949 [2024-04-15 18:18:46.639634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.949 [2024-04-15 18:18:46.639662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.949 qpair failed and we were unable to recover it. 00:31:57.949 [2024-04-15 18:18:46.639931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.949 [2024-04-15 18:18:46.640202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.949 [2024-04-15 18:18:46.640232] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.949 qpair failed and we were unable to recover it. 00:31:57.949 [2024-04-15 18:18:46.640478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.949 [2024-04-15 18:18:46.640693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.949 [2024-04-15 18:18:46.640721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.949 qpair failed and we were unable to recover it. 00:31:57.949 [2024-04-15 18:18:46.640894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.949 [2024-04-15 18:18:46.641082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.949 [2024-04-15 18:18:46.641112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.949 qpair failed and we were unable to recover it. 
00:31:57.950 [2024-04-15 18:18:46.641422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.950 [2024-04-15 18:18:46.641721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.950 [2024-04-15 18:18:46.641749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.950 qpair failed and we were unable to recover it. 00:31:57.950 [2024-04-15 18:18:46.641920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.950 [2024-04-15 18:18:46.642103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.950 [2024-04-15 18:18:46.642132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.950 qpair failed and we were unable to recover it. 00:31:57.950 [2024-04-15 18:18:46.642439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.950 [2024-04-15 18:18:46.642654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.950 [2024-04-15 18:18:46.642682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.950 qpair failed and we were unable to recover it. 00:31:57.950 [2024-04-15 18:18:46.642933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.950 [2024-04-15 18:18:46.643155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.950 [2024-04-15 18:18:46.643184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.950 qpair failed and we were unable to recover it. 00:31:57.950 [2024-04-15 18:18:46.643405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.950 [2024-04-15 18:18:46.643632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.950 [2024-04-15 18:18:46.643660] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.950 qpair failed and we were unable to recover it. 00:31:57.950 [2024-04-15 18:18:46.643835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.950 [2024-04-15 18:18:46.644056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.950 [2024-04-15 18:18:46.644091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.950 qpair failed and we were unable to recover it. 00:31:57.950 [2024-04-15 18:18:46.644117] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:57.950 [2024-04-15 18:18:46.644324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.950 [2024-04-15 18:18:46.644590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.950 [2024-04-15 18:18:46.644619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.950 qpair failed and we were unable to recover it. 
00:31:57.950 [2024-04-15 18:18:46.644814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.950 [2024-04-15 18:18:46.644993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.950 [2024-04-15 18:18:46.645021] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.950 qpair failed and we were unable to recover it. 00:31:57.950 [2024-04-15 18:18:46.645235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.950 [2024-04-15 18:18:46.645494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.950 [2024-04-15 18:18:46.645523] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.950 qpair failed and we were unable to recover it. 00:31:57.950 [2024-04-15 18:18:46.645841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.950 [2024-04-15 18:18:46.646154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.950 [2024-04-15 18:18:46.646185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.950 qpair failed and we were unable to recover it. 00:31:57.950 [2024-04-15 18:18:46.646462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.950 [2024-04-15 18:18:46.646678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.950 [2024-04-15 18:18:46.646711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.950 qpair failed and we were unable to recover it. 00:31:57.950 [2024-04-15 18:18:46.646996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.950 [2024-04-15 18:18:46.647267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.950 [2024-04-15 18:18:46.647296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.950 qpair failed and we were unable to recover it. 00:31:57.950 [2024-04-15 18:18:46.647491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.950 [2024-04-15 18:18:46.647689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.950 [2024-04-15 18:18:46.647718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.950 qpair failed and we were unable to recover it. 00:31:57.950 [2024-04-15 18:18:46.647912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.950 [2024-04-15 18:18:46.648083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.950 [2024-04-15 18:18:46.648112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.950 qpair failed and we were unable to recover it. 
00:31:57.950 [2024-04-15 18:18:46.648339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.950 [2024-04-15 18:18:46.648549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.950 [2024-04-15 18:18:46.648578] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.950 qpair failed and we were unable to recover it. 00:31:57.950 [2024-04-15 18:18:46.648867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.950 [2024-04-15 18:18:46.649136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.950 [2024-04-15 18:18:46.649166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.950 qpair failed and we were unable to recover it. 00:31:57.950 [2024-04-15 18:18:46.649414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.950 [2024-04-15 18:18:46.649630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.950 [2024-04-15 18:18:46.649658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.950 qpair failed and we were unable to recover it. 00:31:57.950 [2024-04-15 18:18:46.649940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.950 [2024-04-15 18:18:46.650147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.950 [2024-04-15 18:18:46.650177] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.950 qpair failed and we were unable to recover it. 00:31:57.950 [2024-04-15 18:18:46.650437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.950 [2024-04-15 18:18:46.650653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.950 [2024-04-15 18:18:46.650681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.950 qpair failed and we were unable to recover it. 00:31:57.950 [2024-04-15 18:18:46.650876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.950 [2024-04-15 18:18:46.651074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.950 [2024-04-15 18:18:46.651104] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.950 qpair failed and we were unable to recover it. 00:31:57.950 [2024-04-15 18:18:46.651294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.950 [2024-04-15 18:18:46.651454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.950 [2024-04-15 18:18:46.651489] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.950 qpair failed and we were unable to recover it. 
00:31:57.950 [2024-04-15 18:18:46.651808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.950 [2024-04-15 18:18:46.652110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.950 [2024-04-15 18:18:46.652140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.950 qpair failed and we were unable to recover it. 00:31:57.950 [2024-04-15 18:18:46.652397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.950 [2024-04-15 18:18:46.652605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.950 [2024-04-15 18:18:46.652634] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.950 qpair failed and we were unable to recover it. 00:31:57.950 [2024-04-15 18:18:46.652831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.950 [2024-04-15 18:18:46.653021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.950 [2024-04-15 18:18:46.653049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.950 qpair failed and we were unable to recover it. 00:31:57.950 [2024-04-15 18:18:46.653315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.950 [2024-04-15 18:18:46.653500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.950 [2024-04-15 18:18:46.653529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.950 qpair failed and we were unable to recover it. 00:31:57.950 [2024-04-15 18:18:46.653745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.950 [2024-04-15 18:18:46.653923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.951 [2024-04-15 18:18:46.653952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.951 qpair failed and we were unable to recover it. 00:31:57.951 [2024-04-15 18:18:46.654211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.951 [2024-04-15 18:18:46.654412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.951 [2024-04-15 18:18:46.654441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.951 qpair failed and we were unable to recover it. 00:31:57.951 [2024-04-15 18:18:46.654662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.951 [2024-04-15 18:18:46.654849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.951 [2024-04-15 18:18:46.654878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.951 qpair failed and we were unable to recover it. 
00:31:57.951 [2024-04-15 18:18:46.655072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.951 [2024-04-15 18:18:46.655414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.951 [2024-04-15 18:18:46.655463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.951 qpair failed and we were unable to recover it. 00:31:57.951 [2024-04-15 18:18:46.655729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.951 [2024-04-15 18:18:46.655992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.951 [2024-04-15 18:18:46.656021] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.951 qpair failed and we were unable to recover it. 00:31:57.951 [2024-04-15 18:18:46.656331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.951 [2024-04-15 18:18:46.656545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.951 [2024-04-15 18:18:46.656579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.951 qpair failed and we were unable to recover it. 00:31:57.951 [2024-04-15 18:18:46.656809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.951 [2024-04-15 18:18:46.657125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.951 [2024-04-15 18:18:46.657155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.951 qpair failed and we were unable to recover it. 00:31:57.951 [2024-04-15 18:18:46.657358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.951 [2024-04-15 18:18:46.657569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.951 [2024-04-15 18:18:46.657597] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.951 qpair failed and we were unable to recover it. 00:31:57.951 [2024-04-15 18:18:46.657894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.951 [2024-04-15 18:18:46.658125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.951 [2024-04-15 18:18:46.658155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.951 qpair failed and we were unable to recover it. 00:31:57.951 [2024-04-15 18:18:46.658312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.951 [2024-04-15 18:18:46.658517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.951 [2024-04-15 18:18:46.658545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.951 qpair failed and we were unable to recover it. 
00:31:57.951 [2024-04-15 18:18:46.658765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.951 [2024-04-15 18:18:46.658930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.951 [2024-04-15 18:18:46.658958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.951 qpair failed and we were unable to recover it. 00:31:57.951 [2024-04-15 18:18:46.659160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.951 [2024-04-15 18:18:46.659392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.951 [2024-04-15 18:18:46.659421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.951 qpair failed and we were unable to recover it. 00:31:57.951 [2024-04-15 18:18:46.659642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.951 [2024-04-15 18:18:46.659827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.951 [2024-04-15 18:18:46.659856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.951 qpair failed and we were unable to recover it. 00:31:57.951 [2024-04-15 18:18:46.660101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.951 [2024-04-15 18:18:46.660379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.951 [2024-04-15 18:18:46.660408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.951 qpair failed and we were unable to recover it. 00:31:57.951 [2024-04-15 18:18:46.660570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.951 [2024-04-15 18:18:46.660752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.951 [2024-04-15 18:18:46.660780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.951 qpair failed and we were unable to recover it. 00:31:57.951 [2024-04-15 18:18:46.660998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.951 [2024-04-15 18:18:46.661325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.951 [2024-04-15 18:18:46.661369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.951 qpair failed and we were unable to recover it. 00:31:57.951 [2024-04-15 18:18:46.661673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.951 [2024-04-15 18:18:46.661882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.951 [2024-04-15 18:18:46.661914] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.951 qpair failed and we were unable to recover it. 
00:31:57.951 [2024-04-15 18:18:46.662088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.951 [2024-04-15 18:18:46.662281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.951 [2024-04-15 18:18:46.662310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.951 qpair failed and we were unable to recover it. 00:31:57.951 [2024-04-15 18:18:46.662552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.951 [2024-04-15 18:18:46.662799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.951 [2024-04-15 18:18:46.662828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.951 qpair failed and we were unable to recover it. 00:31:57.951 [2024-04-15 18:18:46.663056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.951 [2024-04-15 18:18:46.663241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.951 [2024-04-15 18:18:46.663270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.951 qpair failed and we were unable to recover it. 00:31:57.951 [2024-04-15 18:18:46.663465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.951 [2024-04-15 18:18:46.663729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.951 [2024-04-15 18:18:46.663757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.951 qpair failed and we were unable to recover it. 00:31:57.951 [2024-04-15 18:18:46.664030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.951 [2024-04-15 18:18:46.664268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.951 [2024-04-15 18:18:46.664297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.951 qpair failed and we were unable to recover it. 00:31:57.951 [2024-04-15 18:18:46.664511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.951 [2024-04-15 18:18:46.664683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.951 [2024-04-15 18:18:46.664712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.951 qpair failed and we were unable to recover it. 00:31:57.951 [2024-04-15 18:18:46.664907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.951 [2024-04-15 18:18:46.665127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.951 [2024-04-15 18:18:46.665156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.951 qpair failed and we were unable to recover it. 
00:31:57.951 [2024-04-15 18:18:46.665425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.951 [2024-04-15 18:18:46.665730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.951 [2024-04-15 18:18:46.665759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.951 qpair failed and we were unable to recover it. 00:31:57.951 [2024-04-15 18:18:46.666064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.951 [2024-04-15 18:18:46.666447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.951 [2024-04-15 18:18:46.666493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.951 qpair failed and we were unable to recover it. 00:31:57.951 [2024-04-15 18:18:46.666817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.951 [2024-04-15 18:18:46.667049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.951 [2024-04-15 18:18:46.667086] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.951 qpair failed and we were unable to recover it. 00:31:57.951 [2024-04-15 18:18:46.667300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.951 [2024-04-15 18:18:46.667460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.951 [2024-04-15 18:18:46.667489] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.951 qpair failed and we were unable to recover it. 00:31:57.952 [2024-04-15 18:18:46.667699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.952 [2024-04-15 18:18:46.667906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.952 [2024-04-15 18:18:46.667935] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.952 qpair failed and we were unable to recover it. 00:31:57.952 [2024-04-15 18:18:46.668171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.952 [2024-04-15 18:18:46.668335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.952 [2024-04-15 18:18:46.668364] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.952 qpair failed and we were unable to recover it. 00:31:57.952 [2024-04-15 18:18:46.668633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.952 [2024-04-15 18:18:46.668782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.952 [2024-04-15 18:18:46.668810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.952 qpair failed and we were unable to recover it. 
00:31:57.952 [2024-04-15 18:18:46.669003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.952 [2024-04-15 18:18:46.669184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.952 [2024-04-15 18:18:46.669214] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.952 qpair failed and we were unable to recover it. 00:31:57.952 [2024-04-15 18:18:46.669457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.952 [2024-04-15 18:18:46.669701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.952 [2024-04-15 18:18:46.669730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.952 qpair failed and we were unable to recover it. 00:31:57.952 [2024-04-15 18:18:46.669917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.952 [2024-04-15 18:18:46.670144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.952 [2024-04-15 18:18:46.670174] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.952 qpair failed and we were unable to recover it. 00:31:57.952 [2024-04-15 18:18:46.670384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.952 [2024-04-15 18:18:46.670534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.952 [2024-04-15 18:18:46.670562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.952 qpair failed and we were unable to recover it. 00:31:57.952 [2024-04-15 18:18:46.670843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.952 [2024-04-15 18:18:46.671038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.952 [2024-04-15 18:18:46.671073] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.952 qpair failed and we were unable to recover it. 00:31:57.952 [2024-04-15 18:18:46.671358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.952 [2024-04-15 18:18:46.671635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.952 [2024-04-15 18:18:46.671664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.952 qpair failed and we were unable to recover it. 00:31:57.952 [2024-04-15 18:18:46.671936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.952 [2024-04-15 18:18:46.672216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.952 [2024-04-15 18:18:46.672246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.952 qpair failed and we were unable to recover it. 
00:31:57.952 [2024-04-15 18:18:46.672484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.952 [2024-04-15 18:18:46.672651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.952 [2024-04-15 18:18:46.672679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.952 qpair failed and we were unable to recover it. 00:31:57.952 [2024-04-15 18:18:46.672944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.952 [2024-04-15 18:18:46.673223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.952 [2024-04-15 18:18:46.673252] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.952 qpair failed and we were unable to recover it. 00:31:57.952 [2024-04-15 18:18:46.673467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.952 [2024-04-15 18:18:46.673684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.952 [2024-04-15 18:18:46.673712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.952 qpair failed and we were unable to recover it. 00:31:57.952 [2024-04-15 18:18:46.673885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.952 [2024-04-15 18:18:46.674075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.952 [2024-04-15 18:18:46.674104] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.952 qpair failed and we were unable to recover it. 00:31:57.952 [2024-04-15 18:18:46.674295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.952 [2024-04-15 18:18:46.674550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.952 [2024-04-15 18:18:46.674578] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.952 qpair failed and we were unable to recover it. 00:31:57.952 [2024-04-15 18:18:46.674834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.952 [2024-04-15 18:18:46.675126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.952 [2024-04-15 18:18:46.675156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.952 qpair failed and we were unable to recover it. 00:31:57.952 [2024-04-15 18:18:46.675425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.952 [2024-04-15 18:18:46.675772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.952 [2024-04-15 18:18:46.675800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.952 qpair failed and we were unable to recover it. 
00:31:57.957 [2024-04-15 18:18:46.738215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.957 [2024-04-15 18:18:46.738436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.957 [2024-04-15 18:18:46.738465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.957 qpair failed and we were unable to recover it. 00:31:57.957 [2024-04-15 18:18:46.738650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.957 [2024-04-15 18:18:46.738819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.957 [2024-04-15 18:18:46.738848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.957 qpair failed and we were unable to recover it. 00:31:57.957 [2024-04-15 18:18:46.739018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.957 [2024-04-15 18:18:46.739238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.957 [2024-04-15 18:18:46.739267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.957 qpair failed and we were unable to recover it. 00:31:57.957 [2024-04-15 18:18:46.739490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.957 [2024-04-15 18:18:46.739662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.957 [2024-04-15 18:18:46.739690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.957 qpair failed and we were unable to recover it. 00:31:57.957 [2024-04-15 18:18:46.739862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.957 [2024-04-15 18:18:46.740062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.957 [2024-04-15 18:18:46.740103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.957 qpair failed and we were unable to recover it. 00:31:57.957 [2024-04-15 18:18:46.740376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.957 [2024-04-15 18:18:46.740693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.957 [2024-04-15 18:18:46.740722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.957 qpair failed and we were unable to recover it. 00:31:57.957 [2024-04-15 18:18:46.740967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.957 [2024-04-15 18:18:46.741175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.957 [2024-04-15 18:18:46.741205] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.957 qpair failed and we were unable to recover it. 
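# Context for the repeated records above: errno 111 on Linux is ECONNREFUSED,
# i.e. the TCP connect() to 10.0.0.2 port 4420 (the standard NVMe/TCP listener
# port) reached the host but nothing was accepting yet, so the initiator keeps
# retrying the qpair until it gives up. A plain TCP probe against the same
# address and port (a hypothetical command, not part of this job) fails the
# same way until a target is listening:
nc -zv 10.0.0.2 4420   # reports "Connection refused" while no nvmf target is up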
00:31:57.958 [2024-04-15 18:18:46.743259] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:31:57.958 [2024-04-15 18:18:46.743298] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:31:57.958 [2024-04-15 18:18:46.743315] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:31:57.958 [2024-04-15 18:18:46.743329] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running.
00:31:57.958 [2024-04-15 18:18:46.743342] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
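# Acting on the NOTICE lines above: the spdk_trace invocation is quoted from
# them verbatim; the copy destination is arbitrary and only illustrative.
spdk_trace -s nvmf -i 0          # snapshot tracepoint events at runtime
cp /dev/shm/nvmf_trace.0 /tmp/   # keep the shm trace file for offline analysis/debug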
00:31:57.958 [2024-04-15 18:18:46.743580] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5
00:31:57.958 [2024-04-15 18:18:46.743639] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6
00:31:57.958 [2024-04-15 18:18:46.743737] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7
00:31:57.958 [2024-04-15 18:18:46.743742] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
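# Reactors on cores 4-7 imply a four-core CPU mask for this SPDK app instance.
# With SPDK's standard -m core-mask option that would look like the line below
# (an assumption for illustration; the job's actual command line is not shown
# in this excerpt):
nvmf_tgt -m 0xF0   # 0xF0 = binary 11110000 = cores 4, 5, 6, 7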
[... the connect()/qpair-failure record continues repeating with advancing timestamps ...]
00:31:57.963 [2024-04-15 18:18:46.799742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.963 [2024-04-15 18:18:46.799982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.963 [2024-04-15 18:18:46.800011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420
00:31:57.963 qpair failed and we were unable to recover it.
00:31:57.963 [2024-04-15 18:18:46.800191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.963 [2024-04-15 18:18:46.800401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.963 [2024-04-15 18:18:46.800430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.963 qpair failed and we were unable to recover it. 00:31:57.963 [2024-04-15 18:18:46.800613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.963 [2024-04-15 18:18:46.800814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.963 [2024-04-15 18:18:46.800843] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.963 qpair failed and we were unable to recover it. 00:31:57.963 [2024-04-15 18:18:46.801014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.963 [2024-04-15 18:18:46.801206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.963 [2024-04-15 18:18:46.801235] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.963 qpair failed and we were unable to recover it. 00:31:57.963 [2024-04-15 18:18:46.801455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.963 [2024-04-15 18:18:46.801662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.963 [2024-04-15 18:18:46.801690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.963 qpair failed and we were unable to recover it. 00:31:57.963 [2024-04-15 18:18:46.801849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.963 [2024-04-15 18:18:46.802035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.963 [2024-04-15 18:18:46.802072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.963 qpair failed and we were unable to recover it. 00:31:57.963 [2024-04-15 18:18:46.802248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.963 [2024-04-15 18:18:46.802419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.963 [2024-04-15 18:18:46.802448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.963 qpair failed and we were unable to recover it. 00:31:57.963 [2024-04-15 18:18:46.802626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.963 [2024-04-15 18:18:46.802806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.963 [2024-04-15 18:18:46.802834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.963 qpair failed and we were unable to recover it. 
00:31:57.963 [2024-04-15 18:18:46.803040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.963 [2024-04-15 18:18:46.803213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.963 [2024-04-15 18:18:46.803242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.963 qpair failed and we were unable to recover it. 00:31:57.963 [2024-04-15 18:18:46.803456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.963 [2024-04-15 18:18:46.803647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.963 [2024-04-15 18:18:46.803675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.963 qpair failed and we were unable to recover it. 00:31:57.963 [2024-04-15 18:18:46.803900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.963 [2024-04-15 18:18:46.804146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.963 [2024-04-15 18:18:46.804176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.963 qpair failed and we were unable to recover it. 00:31:57.963 [2024-04-15 18:18:46.804363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.963 [2024-04-15 18:18:46.804552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.963 [2024-04-15 18:18:46.804580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.963 qpair failed and we were unable to recover it. 00:31:57.963 [2024-04-15 18:18:46.804769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.963 [2024-04-15 18:18:46.804976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.963 [2024-04-15 18:18:46.805005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.963 qpair failed and we were unable to recover it. 00:31:57.963 [2024-04-15 18:18:46.805178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.963 [2024-04-15 18:18:46.805344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.963 [2024-04-15 18:18:46.805372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.963 qpair failed and we were unable to recover it. 00:31:57.963 [2024-04-15 18:18:46.805572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.963 [2024-04-15 18:18:46.805791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.963 [2024-04-15 18:18:46.805820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.963 qpair failed and we were unable to recover it. 
00:31:57.963 [2024-04-15 18:18:46.806036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.963 [2024-04-15 18:18:46.806223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.963 [2024-04-15 18:18:46.806252] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.963 qpair failed and we were unable to recover it. 00:31:57.963 [2024-04-15 18:18:46.806468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.963 [2024-04-15 18:18:46.806678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.963 [2024-04-15 18:18:46.806706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.963 qpair failed and we were unable to recover it. 00:31:57.963 [2024-04-15 18:18:46.806901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.963 [2024-04-15 18:18:46.807052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.963 [2024-04-15 18:18:46.807113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.963 qpair failed and we were unable to recover it. 00:31:57.963 [2024-04-15 18:18:46.807293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.963 [2024-04-15 18:18:46.807525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.963 [2024-04-15 18:18:46.807554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.963 qpair failed and we were unable to recover it. 00:31:57.963 [2024-04-15 18:18:46.807803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.963 [2024-04-15 18:18:46.807957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.963 [2024-04-15 18:18:46.807985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.963 qpair failed and we were unable to recover it. 00:31:57.963 [2024-04-15 18:18:46.808126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.963 [2024-04-15 18:18:46.808361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.963 [2024-04-15 18:18:46.808390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.963 qpair failed and we were unable to recover it. 00:31:57.963 [2024-04-15 18:18:46.808604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.964 [2024-04-15 18:18:46.808804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.964 [2024-04-15 18:18:46.808833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.964 qpair failed and we were unable to recover it. 
00:31:57.964 [2024-04-15 18:18:46.808998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.964 [2024-04-15 18:18:46.809183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.964 [2024-04-15 18:18:46.809213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.964 qpair failed and we were unable to recover it. 00:31:57.964 [2024-04-15 18:18:46.809423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.964 [2024-04-15 18:18:46.809603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.964 [2024-04-15 18:18:46.809631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.964 qpair failed and we were unable to recover it. 00:31:57.964 [2024-04-15 18:18:46.809810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.964 [2024-04-15 18:18:46.809999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.964 [2024-04-15 18:18:46.810028] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.964 qpair failed and we were unable to recover it. 00:31:57.964 [2024-04-15 18:18:46.810209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.964 [2024-04-15 18:18:46.810345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.964 [2024-04-15 18:18:46.810374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.964 qpair failed and we were unable to recover it. 00:31:57.964 [2024-04-15 18:18:46.810564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.964 [2024-04-15 18:18:46.810764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.964 [2024-04-15 18:18:46.810792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.964 qpair failed and we were unable to recover it. 00:31:57.964 [2024-04-15 18:18:46.810985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.964 [2024-04-15 18:18:46.811195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.964 [2024-04-15 18:18:46.811225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.964 qpair failed and we were unable to recover it. 00:31:57.964 [2024-04-15 18:18:46.811474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.964 [2024-04-15 18:18:46.811759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.964 [2024-04-15 18:18:46.811788] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.964 qpair failed and we were unable to recover it. 
00:31:57.964 [2024-04-15 18:18:46.812054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.964 [2024-04-15 18:18:46.812240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.964 [2024-04-15 18:18:46.812269] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.964 qpair failed and we were unable to recover it. 00:31:57.964 [2024-04-15 18:18:46.812497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.964 [2024-04-15 18:18:46.812686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.964 [2024-04-15 18:18:46.812715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.964 qpair failed and we were unable to recover it. 00:31:57.964 [2024-04-15 18:18:46.812984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.964 [2024-04-15 18:18:46.813241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.964 [2024-04-15 18:18:46.813271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.964 qpair failed and we were unable to recover it. 00:31:57.964 [2024-04-15 18:18:46.813456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.964 [2024-04-15 18:18:46.813644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.964 [2024-04-15 18:18:46.813672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.964 qpair failed and we were unable to recover it. 00:31:57.964 [2024-04-15 18:18:46.813924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.964 [2024-04-15 18:18:46.814192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.964 [2024-04-15 18:18:46.814222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.964 qpair failed and we were unable to recover it. 00:31:57.964 [2024-04-15 18:18:46.814434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.964 [2024-04-15 18:18:46.814645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.964 [2024-04-15 18:18:46.814673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.964 qpair failed and we were unable to recover it. 00:31:57.964 [2024-04-15 18:18:46.814836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.964 [2024-04-15 18:18:46.815041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.964 [2024-04-15 18:18:46.815076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.964 qpair failed and we were unable to recover it. 
00:31:57.964 [2024-04-15 18:18:46.815239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.964 [2024-04-15 18:18:46.815413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.964 [2024-04-15 18:18:46.815441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.964 qpair failed and we were unable to recover it. 00:31:57.964 [2024-04-15 18:18:46.815616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.964 [2024-04-15 18:18:46.815759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.964 [2024-04-15 18:18:46.815788] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.964 qpair failed and we were unable to recover it. 00:31:57.964 [2024-04-15 18:18:46.815978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.964 [2024-04-15 18:18:46.816171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.964 [2024-04-15 18:18:46.816201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.964 qpair failed and we were unable to recover it. 00:31:57.964 [2024-04-15 18:18:46.816343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.964 [2024-04-15 18:18:46.816535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.964 [2024-04-15 18:18:46.816563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.964 qpair failed and we were unable to recover it. 00:31:57.964 [2024-04-15 18:18:46.816759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.964 [2024-04-15 18:18:46.816943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.964 [2024-04-15 18:18:46.816976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.964 qpair failed and we were unable to recover it. 00:31:57.964 [2024-04-15 18:18:46.817193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.964 [2024-04-15 18:18:46.817375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.964 [2024-04-15 18:18:46.817410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.964 qpair failed and we were unable to recover it. 00:31:57.964 [2024-04-15 18:18:46.817645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.964 [2024-04-15 18:18:46.817828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.964 [2024-04-15 18:18:46.817857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.964 qpair failed and we were unable to recover it. 
00:31:57.964 [2024-04-15 18:18:46.818009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.964 [2024-04-15 18:18:46.818198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.965 [2024-04-15 18:18:46.818227] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.965 qpair failed and we were unable to recover it. 00:31:57.965 [2024-04-15 18:18:46.818485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.965 [2024-04-15 18:18:46.818751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.965 [2024-04-15 18:18:46.818780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.965 qpair failed and we were unable to recover it. 00:31:57.965 [2024-04-15 18:18:46.819013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.965 [2024-04-15 18:18:46.819173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.965 [2024-04-15 18:18:46.819202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.965 qpair failed and we were unable to recover it. 00:31:57.965 [2024-04-15 18:18:46.819401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.965 [2024-04-15 18:18:46.819568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.965 [2024-04-15 18:18:46.819597] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.965 qpair failed and we were unable to recover it. 00:31:57.965 [2024-04-15 18:18:46.819792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.965 [2024-04-15 18:18:46.819928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.965 [2024-04-15 18:18:46.819956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.965 qpair failed and we were unable to recover it. 00:31:57.965 [2024-04-15 18:18:46.820180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.965 [2024-04-15 18:18:46.820347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.965 [2024-04-15 18:18:46.820376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.965 qpair failed and we were unable to recover it. 00:31:57.965 [2024-04-15 18:18:46.820577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.965 [2024-04-15 18:18:46.820743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.965 [2024-04-15 18:18:46.820772] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.965 qpair failed and we were unable to recover it. 
00:31:57.965 [2024-04-15 18:18:46.820960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.965 [2024-04-15 18:18:46.821142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.965 [2024-04-15 18:18:46.821177] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.965 qpair failed and we were unable to recover it. 00:31:57.965 [2024-04-15 18:18:46.821395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.965 [2024-04-15 18:18:46.821668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.965 [2024-04-15 18:18:46.821697] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.965 qpair failed and we were unable to recover it. 00:31:57.965 [2024-04-15 18:18:46.821946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.965 [2024-04-15 18:18:46.822122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.965 [2024-04-15 18:18:46.822152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.965 qpair failed and we were unable to recover it. 00:31:57.965 [2024-04-15 18:18:46.822362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.965 [2024-04-15 18:18:46.822637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.965 [2024-04-15 18:18:46.822666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.965 qpair failed and we were unable to recover it. 00:31:57.965 [2024-04-15 18:18:46.822915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.965 [2024-04-15 18:18:46.823142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.965 [2024-04-15 18:18:46.823172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.965 qpair failed and we were unable to recover it. 00:31:57.965 [2024-04-15 18:18:46.823389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.965 [2024-04-15 18:18:46.823601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.965 [2024-04-15 18:18:46.823630] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.965 qpair failed and we were unable to recover it. 00:31:57.965 [2024-04-15 18:18:46.823852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.965 [2024-04-15 18:18:46.824013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.965 [2024-04-15 18:18:46.824041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.965 qpair failed and we were unable to recover it. 
00:31:57.965 [2024-04-15 18:18:46.824225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.965 [2024-04-15 18:18:46.824462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.965 [2024-04-15 18:18:46.824491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.965 qpair failed and we were unable to recover it. 00:31:57.965 [2024-04-15 18:18:46.824681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.965 [2024-04-15 18:18:46.824880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.965 [2024-04-15 18:18:46.824917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.965 qpair failed and we were unable to recover it. 00:31:57.965 [2024-04-15 18:18:46.825108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.965 [2024-04-15 18:18:46.825269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.965 [2024-04-15 18:18:46.825297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.965 qpair failed and we were unable to recover it. 00:31:57.965 [2024-04-15 18:18:46.825503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.965 [2024-04-15 18:18:46.825681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.965 [2024-04-15 18:18:46.825715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.965 qpair failed and we were unable to recover it. 00:31:57.965 [2024-04-15 18:18:46.825896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.965 [2024-04-15 18:18:46.826103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.965 [2024-04-15 18:18:46.826133] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.965 qpair failed and we were unable to recover it. 00:31:57.965 [2024-04-15 18:18:46.826310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.965 [2024-04-15 18:18:46.826464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.965 [2024-04-15 18:18:46.826493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.965 qpair failed and we were unable to recover it. 00:31:57.965 [2024-04-15 18:18:46.826702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.965 [2024-04-15 18:18:46.826915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.965 [2024-04-15 18:18:46.826944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.965 qpair failed and we were unable to recover it. 
00:31:57.965 [2024-04-15 18:18:46.827169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.965 [2024-04-15 18:18:46.827354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.965 [2024-04-15 18:18:46.827383] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.965 qpair failed and we were unable to recover it. 00:31:57.965 [2024-04-15 18:18:46.827551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.965 [2024-04-15 18:18:46.827761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.965 [2024-04-15 18:18:46.827789] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.965 qpair failed and we were unable to recover it. 00:31:57.965 [2024-04-15 18:18:46.828009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.965 [2024-04-15 18:18:46.828178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.965 [2024-04-15 18:18:46.828208] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.965 qpair failed and we were unable to recover it. 00:31:57.965 [2024-04-15 18:18:46.828395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.965 [2024-04-15 18:18:46.828589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.965 [2024-04-15 18:18:46.828618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.965 qpair failed and we were unable to recover it. 00:31:57.965 [2024-04-15 18:18:46.828829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.965 [2024-04-15 18:18:46.829007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.965 [2024-04-15 18:18:46.829035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.965 qpair failed and we were unable to recover it. 00:31:57.965 [2024-04-15 18:18:46.829253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.965 [2024-04-15 18:18:46.829390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.965 [2024-04-15 18:18:46.829418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.965 qpair failed and we were unable to recover it. 00:31:57.965 [2024-04-15 18:18:46.829598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.965 [2024-04-15 18:18:46.829823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.965 [2024-04-15 18:18:46.829856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.965 qpair failed and we were unable to recover it. 
00:31:57.966 [2024-04-15 18:18:46.830074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.966 [2024-04-15 18:18:46.830283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.966 [2024-04-15 18:18:46.830313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.966 qpair failed and we were unable to recover it. 00:31:57.966 [2024-04-15 18:18:46.830495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.966 [2024-04-15 18:18:46.830668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.966 [2024-04-15 18:18:46.830697] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.966 qpair failed and we were unable to recover it. 00:31:57.966 [2024-04-15 18:18:46.830886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.966 [2024-04-15 18:18:46.831084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.966 [2024-04-15 18:18:46.831114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.966 qpair failed and we were unable to recover it. 00:31:57.966 [2024-04-15 18:18:46.831275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.966 [2024-04-15 18:18:46.831441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.966 [2024-04-15 18:18:46.831470] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.966 qpair failed and we were unable to recover it. 00:31:57.966 [2024-04-15 18:18:46.831696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.966 [2024-04-15 18:18:46.831859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.966 [2024-04-15 18:18:46.831888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.966 qpair failed and we were unable to recover it. 00:31:57.966 [2024-04-15 18:18:46.832054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.966 [2024-04-15 18:18:46.832230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.966 [2024-04-15 18:18:46.832258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.966 qpair failed and we were unable to recover it. 00:31:57.966 [2024-04-15 18:18:46.832443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.966 [2024-04-15 18:18:46.832616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.966 [2024-04-15 18:18:46.832644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.966 qpair failed and we were unable to recover it. 
00:31:57.966 [2024-04-15 18:18:46.832820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.966 [2024-04-15 18:18:46.833001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.966 [2024-04-15 18:18:46.833029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.966 qpair failed and we were unable to recover it. 00:31:57.966 [2024-04-15 18:18:46.833234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.966 [2024-04-15 18:18:46.833455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.966 [2024-04-15 18:18:46.833484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.966 qpair failed and we were unable to recover it. 00:31:57.966 [2024-04-15 18:18:46.833665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.966 [2024-04-15 18:18:46.833819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.966 [2024-04-15 18:18:46.833848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.966 qpair failed and we were unable to recover it. 00:31:57.966 [2024-04-15 18:18:46.834040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.966 [2024-04-15 18:18:46.834224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.966 [2024-04-15 18:18:46.834254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.966 qpair failed and we were unable to recover it. 00:31:57.966 [2024-04-15 18:18:46.834430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.966 [2024-04-15 18:18:46.834572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.966 [2024-04-15 18:18:46.834601] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.966 qpair failed and we were unable to recover it. 00:31:57.966 [2024-04-15 18:18:46.834747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.966 [2024-04-15 18:18:46.834905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.966 [2024-04-15 18:18:46.834934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.966 qpair failed and we were unable to recover it. 00:31:57.966 [2024-04-15 18:18:46.835155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.966 [2024-04-15 18:18:46.835321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.966 [2024-04-15 18:18:46.835360] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.966 qpair failed and we were unable to recover it. 
00:31:57.966 [2024-04-15 18:18:46.835540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.966 [2024-04-15 18:18:46.835752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.966 [2024-04-15 18:18:46.835780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.966 qpair failed and we were unable to recover it. 00:31:57.966 [2024-04-15 18:18:46.835918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.966 [2024-04-15 18:18:46.836073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.966 [2024-04-15 18:18:46.836102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.966 qpair failed and we were unable to recover it. 00:31:57.966 [2024-04-15 18:18:46.836309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.966 [2024-04-15 18:18:46.836500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.966 [2024-04-15 18:18:46.836529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.966 qpair failed and we were unable to recover it. 00:31:57.966 [2024-04-15 18:18:46.836715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.966 [2024-04-15 18:18:46.836964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.966 [2024-04-15 18:18:46.836992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.966 qpair failed and we were unable to recover it. 00:31:57.966 [2024-04-15 18:18:46.837232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.966 [2024-04-15 18:18:46.837456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.966 [2024-04-15 18:18:46.837484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.966 qpair failed and we were unable to recover it. 00:31:57.966 [2024-04-15 18:18:46.837674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.966 [2024-04-15 18:18:46.837833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.966 [2024-04-15 18:18:46.837861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.966 qpair failed and we were unable to recover it. 00:31:57.966 [2024-04-15 18:18:46.838075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.966 [2024-04-15 18:18:46.838256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.966 [2024-04-15 18:18:46.838284] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.966 qpair failed and we were unable to recover it. 
00:31:57.966 [2024-04-15 18:18:46.838436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.966 [2024-04-15 18:18:46.838576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.966 [2024-04-15 18:18:46.838605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.966 qpair failed and we were unable to recover it. 00:31:57.966 [2024-04-15 18:18:46.838760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.966 [2024-04-15 18:18:46.838970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.966 [2024-04-15 18:18:46.838998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.966 qpair failed and we were unable to recover it. 00:31:57.966 [2024-04-15 18:18:46.839190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.966 [2024-04-15 18:18:46.839410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.966 [2024-04-15 18:18:46.839439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.966 qpair failed and we were unable to recover it. 00:31:57.966 [2024-04-15 18:18:46.839610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.966 [2024-04-15 18:18:46.839802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.966 [2024-04-15 18:18:46.839831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.966 qpair failed and we were unable to recover it. 00:31:57.966 [2024-04-15 18:18:46.840108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.966 [2024-04-15 18:18:46.840299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.966 [2024-04-15 18:18:46.840328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.966 qpair failed and we were unable to recover it. 00:31:57.966 [2024-04-15 18:18:46.840495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.966 [2024-04-15 18:18:46.840664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.966 [2024-04-15 18:18:46.840710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.966 qpair failed and we were unable to recover it. 00:31:57.966 [2024-04-15 18:18:46.840980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.966 [2024-04-15 18:18:46.841205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.966 [2024-04-15 18:18:46.841235] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.966 qpair failed and we were unable to recover it. 
00:31:57.966 [2024-04-15 18:18:46.841449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.966 [2024-04-15 18:18:46.841628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.966 [2024-04-15 18:18:46.841657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.966 qpair failed and we were unable to recover it. 00:31:57.966 [2024-04-15 18:18:46.841864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.966 [2024-04-15 18:18:46.842083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.966 [2024-04-15 18:18:46.842113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.966 qpair failed and we were unable to recover it. 00:31:57.967 [2024-04-15 18:18:46.842341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.967 [2024-04-15 18:18:46.842513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.967 [2024-04-15 18:18:46.842559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.967 qpair failed and we were unable to recover it. 00:31:57.967 [2024-04-15 18:18:46.842735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.967 [2024-04-15 18:18:46.842914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.967 [2024-04-15 18:18:46.842943] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.967 qpair failed and we were unable to recover it. 00:31:57.967 [2024-04-15 18:18:46.843130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.967 [2024-04-15 18:18:46.843263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.967 [2024-04-15 18:18:46.843292] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.967 qpair failed and we were unable to recover it. 00:31:57.967 [2024-04-15 18:18:46.843461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.967 [2024-04-15 18:18:46.843706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.967 [2024-04-15 18:18:46.843743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.967 qpair failed and we were unable to recover it. 00:31:57.967 [2024-04-15 18:18:46.843986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.967 [2024-04-15 18:18:46.844209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.967 [2024-04-15 18:18:46.844239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72e4000b90 with addr=10.0.0.2, port=4420 00:31:57.967 qpair failed and we were unable to recover it. 
00:31:57.970 [2024-04-15 18:18:46.882418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.970 [2024-04-15 18:18:46.882621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.970 [2024-04-15 18:18:46.882663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72ec000b90 with addr=10.0.0.2, port=4420
00:31:57.970 qpair failed and we were unable to recover it.
00:31:57.970 [2024-04-15 18:18:46.882883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.970 [2024-04-15 18:18:46.883023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.970 [2024-04-15 18:18:46.883052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72ec000b90 with addr=10.0.0.2, port=4420
00:31:57.970 qpair failed and we were unable to recover it.
00:31:57.970 [2024-04-15 18:18:46.883255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.970 [2024-04-15 18:18:46.883432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.970 [2024-04-15 18:18:46.883469] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72ec000b90 with addr=10.0.0.2, port=4420
00:31:57.970 qpair failed and we were unable to recover it.
00:31:57.970 [2024-04-15 18:18:46.883643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.970 [2024-04-15 18:18:46.883844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.970 [2024-04-15 18:18:46.883872] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72ec000b90 with addr=10.0.0.2, port=4420
00:31:57.970 qpair failed and we were unable to recover it.
00:31:57.970 [2024-04-15 18:18:46.884044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.970 [2024-04-15 18:18:46.884267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.970 [2024-04-15 18:18:46.884301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72ec000b90 with addr=10.0.0.2, port=4420
00:31:57.970 qpair failed and we were unable to recover it.
00:31:57.970 [2024-04-15 18:18:46.884517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.970 [2024-04-15 18:18:46.884795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.970 [2024-04-15 18:18:46.884833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72ec000b90 with addr=10.0.0.2, port=4420
00:31:57.970 qpair failed and we were unable to recover it.
00:31:57.970 [2024-04-15 18:18:46.885075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.970 [2024-04-15 18:18:46.885244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.970 [2024-04-15 18:18:46.885272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72ec000b90 with addr=10.0.0.2, port=4420
00:31:57.970 qpair failed and we were unable to recover it.
00:31:58.232 [2024-04-15 18:18:46.885456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.232 [2024-04-15 18:18:46.885597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.232 [2024-04-15 18:18:46.885626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72ec000b90 with addr=10.0.0.2, port=4420
00:31:58.232 qpair failed and we were unable to recover it.
00:31:58.232 [2024-04-15 18:18:46.885764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.232 [2024-04-15 18:18:46.885936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.232 [2024-04-15 18:18:46.885964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72ec000b90 with addr=10.0.0.2, port=4420
00:31:58.232 qpair failed and we were unable to recover it.
00:31:58.232 [2024-04-15 18:18:46.886131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.232 18:18:46 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:31:58.232 [2024-04-15 18:18:46.886284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.232 [2024-04-15 18:18:46.886318] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72ec000b90 with addr=10.0.0.2, port=4420
00:31:58.232 qpair failed and we were unable to recover it.
00:31:58.232 18:18:46 -- common/autotest_common.sh@850 -- # return 0
00:31:58.232 [2024-04-15 18:18:46.886536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.232 18:18:46 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt
00:31:58.232 [2024-04-15 18:18:46.886726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.232 [2024-04-15 18:18:46.886755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72ec000b90 with addr=10.0.0.2, port=4420
00:31:58.232 qpair failed and we were unable to recover it.
00:31:58.232 18:18:46 -- common/autotest_common.sh@716 -- # xtrace_disable
00:31:58.232 [2024-04-15 18:18:46.886935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.232 18:18:46 -- common/autotest_common.sh@10 -- # set +x
00:31:58.232 [2024-04-15 18:18:46.887098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.232 [2024-04-15 18:18:46.887127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72ec000b90 with addr=10.0.0.2, port=4420
00:31:58.232 qpair failed and we were unable to recover it.
00:31:58.232 [2024-04-15 18:18:46.887310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.232 [2024-04-15 18:18:46.887486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.232 [2024-04-15 18:18:46.887514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72ec000b90 with addr=10.0.0.2, port=4420
00:31:58.232 qpair failed and we were unable to recover it.
00:31:58.234 [2024-04-15 18:18:46.903103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.234 [2024-04-15 18:18:46.903259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.234 [2024-04-15 18:18:46.903287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72ec000b90 with addr=10.0.0.2, port=4420 00:31:58.234 qpair failed and we were unable to recover it. 00:31:58.234 [2024-04-15 18:18:46.903476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.234 [2024-04-15 18:18:46.903641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.234 [2024-04-15 18:18:46.903670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72ec000b90 with addr=10.0.0.2, port=4420 00:31:58.234 qpair failed and we were unable to recover it. 00:31:58.234 [2024-04-15 18:18:46.903838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.234 [2024-04-15 18:18:46.904002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.234 [2024-04-15 18:18:46.904030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72ec000b90 with addr=10.0.0.2, port=4420 00:31:58.234 qpair failed and we were unable to recover it. 00:31:58.234 [2024-04-15 18:18:46.904206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.234 [2024-04-15 18:18:46.904351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.234 [2024-04-15 18:18:46.904380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72ec000b90 with addr=10.0.0.2, port=4420 00:31:58.234 qpair failed and we were unable to recover it. 00:31:58.234 [2024-04-15 18:18:46.904554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.234 [2024-04-15 18:18:46.904717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.234 [2024-04-15 18:18:46.904745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72ec000b90 with addr=10.0.0.2, port=4420 00:31:58.234 qpair failed and we were unable to recover it. 00:31:58.234 [2024-04-15 18:18:46.904885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.234 [2024-04-15 18:18:46.905069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.234 [2024-04-15 18:18:46.905097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72ec000b90 with addr=10.0.0.2, port=4420 00:31:58.234 qpair failed and we were unable to recover it. 00:31:58.234 [2024-04-15 18:18:46.905244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.234 [2024-04-15 18:18:46.905375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.234 [2024-04-15 18:18:46.905403] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72ec000b90 with addr=10.0.0.2, port=4420 00:31:58.234 qpair failed and we were unable to recover it. 
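errno = 111 is ECONNREFUSED: at this point nothing is accepting TCP connections at 10.0.0.2:4420, so every connect() the NVMe/TCP initiator issues is rejected by the peer and the queue pair can never be established, hence the retry triplets above. A minimal sketch of observing the same refusal by hand, assuming a shell on the initiator host while the target port is still closed (nc output format varies by flavor):

    # probe the address/port the initiator keeps dialing; with no listener
    # bound to 10.0.0.2:4420 the connect() is refused with errno 111
    nc -zv 10.0.0.2 4420
    # typically prints something like:
    #   nc: connect to 10.0.0.2 port 4420 (tcp) failed: Connection refused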
00:31:58.234 [2024-04-15 18:18:46.905596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.234 [2024-04-15 18:18:46.905764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.234 [2024-04-15 18:18:46.905792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72ec000b90 with addr=10.0.0.2, port=4420 00:31:58.234 qpair failed and we were unable to recover it. 00:31:58.234 [2024-04-15 18:18:46.905959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.234 [2024-04-15 18:18:46.906127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.234 [2024-04-15 18:18:46.906157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72ec000b90 with addr=10.0.0.2, port=4420 00:31:58.234 qpair failed and we were unable to recover it. 00:31:58.234 [2024-04-15 18:18:46.906306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.234 [2024-04-15 18:18:46.906496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.234 [2024-04-15 18:18:46.906525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72ec000b90 with addr=10.0.0.2, port=4420 00:31:58.234 qpair failed and we were unable to recover it. 00:31:58.234 [2024-04-15 18:18:46.906695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.234 18:18:46 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:58.234 [2024-04-15 18:18:46.906886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.234 [2024-04-15 18:18:46.906915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72ec000b90 with addr=10.0.0.2, port=4420 00:31:58.234 qpair failed and we were unable to recover it. 00:31:58.234 18:18:46 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:58.234 [2024-04-15 18:18:46.907051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.234 18:18:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:58.234 [2024-04-15 18:18:46.907197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.234 [2024-04-15 18:18:46.907226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72ec000b90 with addr=10.0.0.2, port=4420 00:31:58.234 qpair failed and we were unable to recover it. 00:31:58.234 18:18:46 -- common/autotest_common.sh@10 -- # set +x 00:31:58.234 [2024-04-15 18:18:46.907389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.234 [2024-04-15 18:18:46.907585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.234 [2024-04-15 18:18:46.907614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72ec000b90 with addr=10.0.0.2, port=4420 00:31:58.234 qpair failed and we were unable to recover it. 
[... connect() errno = 111 failures for tqpair=0x7f72ec000b90 continue, elided ...]
00:31:58.236 Malloc0
00:31:58.237 18:18:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:31:58.237 18:18:46 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:31:58.237 18:18:46 -- common/autotest_common.sh@549 -- # xtrace_disable
00:31:58.237 18:18:46 -- common/autotest_common.sh@10 -- # set +x
[... connect() errno = 111 failures continue, elided ...]
00:31:58.237 [2024-04-15 18:18:46.934422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.237 [2024-04-15 18:18:46.934560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.237 [2024-04-15 18:18:46.934588] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72ec000b90 with addr=10.0.0.2, port=4420 00:31:58.237 [2024-04-15 18:18:46.934578] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:58.237 qpair failed and we were unable to recover it. 00:31:58.237 [2024-04-15 18:18:46.934786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.237 [2024-04-15 18:18:46.934991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.237 [2024-04-15 18:18:46.935019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72ec000b90 with addr=10.0.0.2, port=4420 00:31:58.237 qpair failed and we were unable to recover it. 00:31:58.237 [2024-04-15 18:18:46.935215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.237 [2024-04-15 18:18:46.935433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.237 [2024-04-15 18:18:46.935461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72ec000b90 with addr=10.0.0.2, port=4420 00:31:58.237 qpair failed and we were unable to recover it. 00:31:58.237 [2024-04-15 18:18:46.935639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.237 [2024-04-15 18:18:46.935892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.237 [2024-04-15 18:18:46.935920] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72ec000b90 with addr=10.0.0.2, port=4420 00:31:58.237 qpair failed and we were unable to recover it. 00:31:58.237 [2024-04-15 18:18:46.936136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.237 [2024-04-15 18:18:46.936342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.237 [2024-04-15 18:18:46.936370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72ec000b90 with addr=10.0.0.2, port=4420 00:31:58.237 qpair failed and we were unable to recover it. 00:31:58.237 [2024-04-15 18:18:46.936554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.237 [2024-04-15 18:18:46.936771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.237 [2024-04-15 18:18:46.936800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72ec000b90 with addr=10.0.0.2, port=4420 00:31:58.237 qpair failed and we were unable to recover it. 00:31:58.237 [2024-04-15 18:18:46.936949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.237 [2024-04-15 18:18:46.937142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.237 [2024-04-15 18:18:46.937171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72ec000b90 with addr=10.0.0.2, port=4420 00:31:58.237 qpair failed and we were unable to recover it. 
[... connect() errno = 111 failures for tqpair=0x7f72ec000b90 continue, elided ...]
00:31:58.238 18:18:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:31:58.238 18:18:46 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:31:58.238 18:18:46 -- common/autotest_common.sh@549 -- # xtrace_disable
00:31:58.238 18:18:46 -- common/autotest_common.sh@10 -- # set +x
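As before, rpc_cmd maps onto scripts/rpc.py. A sketch of this subsystem-creation step, together with the follow-up calls a target-disconnect test of this shape would typically issue next (the namespace and listener lines are assumptions, not shown in this excerpt):

    # create subsystem cnode1, allow any host (-a), serial SPDK00000000000001 (-s)
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    # assumed follow-ups: attach the Malloc0 bdev as a namespace and open the
    # TCP listener on 10.0.0.2:4420 that the failing initiator above is dialing
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420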
00:31:58.238 [the connect() retry sequence repeats 14 more times between 18:18:46.944864 and 18:18:46.950203, each ending "qpair failed and we were unable to recover it."]
00:31:58.238 [2024-04-15 18:18:46.950345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.238 [2024-04-15 18:18:46.950536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.238 [2024-04-15 18:18:46.950564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72ec000b90 with addr=10.0.0.2, port=4420
00:31:58.238 qpair failed and we were unable to recover it.
00:31:58.239 18:18:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:31:58.239 18:18:46 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:31:58.239 18:18:46 -- common/autotest_common.sh@549 -- # xtrace_disable
00:31:58.239 18:18:46 -- common/autotest_common.sh@10 -- # set +x
00:31:58.239 [the connect() retry sequence continues around these commands, six more times between 18:18:46.950762 and 18:18:46.952850]
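This attaches the bdev named Malloc0 to cnode1 as a namespace. The bdev has to exist before it can be added; a minimal standalone sequence would be (block count and size below are illustrative assumptions, not taken from this log):

    # Back the subsystem with a RAM-disk bdev: 64 MiB total, 512-byte blocks.
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    # Expose it as a namespace of the subsystem:
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0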
00:31:58.239 [the connect() retry sequence repeats 14 more times between 18:18:46.953044 and 18:18:46.958110, each ending "qpair failed and we were unable to recover it."]
00:31:58.239 [2024-04-15 18:18:46.958278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.239 [2024-04-15 18:18:46.958489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.239 [2024-04-15 18:18:46.958517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72ec000b90 with addr=10.0.0.2, port=4420
00:31:58.239 qpair failed and we were unable to recover it.
00:31:58.239 18:18:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:31:58.239 18:18:46 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:31:58.239 18:18:46 -- common/autotest_common.sh@549 -- # xtrace_disable
00:31:58.239 18:18:46 -- common/autotest_common.sh@10 -- # set +x
00:31:58.239 [the connect() retry sequence continues around these commands, five more times between 18:18:46.958725 and 18:18:46.960396]
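Only after this listener is created can the host's TCP connects succeed, which is why the refused-connection spam stops a few milliseconds later. A standalone equivalent (a sketch; the nvmf_create_transport line is shown for completeness and assumes the TCP transport was not created yet):

    # The transport is created once, shortly after nvmf_tgt starts:
    scripts/rpc.py nvmf_create_transport -t TCP
    # Start accepting NVMe/TCP connections for cnode1 on 10.0.0.2:4420:
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420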
00:31:58.239 [the connect() retry sequence repeats six more times between 18:18:46.960526 and 18:18:46.962669 — the last refused connections before the listener comes up]
00:31:58.240 [2024-04-15 18:18:46.962814] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:31:58.240 [2024-04-15 18:18:46.965355] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:58.240 [2024-04-15 18:18:46.965592] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:58.240 [2024-04-15 18:18:46.965624] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:58.240 [2024-04-15 18:18:46.965641] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:58.240 [2024-04-15 18:18:46.965656] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:31:58.240 [2024-04-15 18:18:46.965705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:58.240 qpair failed and we were unable to recover it.
00:31:58.240 18:18:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:31:58.240 18:18:46 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:31:58.240 18:18:46 -- common/autotest_common.sh@549 -- # xtrace_disable
00:31:58.240 18:18:46 -- common/autotest_common.sh@10 -- # set +x
00:31:58.240 18:18:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:31:58.240 18:18:46 -- host/target_disconnect.sh@58 -- # wait 3462407
00:31:58.240 [2024-04-15 18:18:46.975325] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:58.240 [2024-04-15 18:18:46.975486] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:58.240 [2024-04-15 18:18:46.975517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:58.240 [2024-04-15 18:18:46.975534] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:58.240 [2024-04-15 18:18:46.975548] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:31:58.240 [2024-04-15 18:18:46.975582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:58.240 qpair failed and we were unable to recover it.
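The failure mode changes here: the TCP connect now succeeds (the listener is up), but the Fabrics CONNECT for the I/O queue pair is rejected. sct 1, sc 130 decodes as status code type 0x1 (command specific) with status 0x82, the NVMe-oF CONNECT "Invalid Parameters" code, which matches the target-side "Unknown controller ID 0x1": the I/O queue names a controller the target no longer knows, consistent with this disconnect test tearing controllers down mid-handshake. Outside the harness the host side would look roughly like this nvme-cli sketch (not part of the test):

    # Attempt a full fabrics login; with the controller torn down the I/O
    # queue CONNECT fails the same way the SPDK host reports above.
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1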
00:31:58.240 [2024-04-15 18:18:46.985284] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.240 [2024-04-15 18:18:46.985469] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.240 [2024-04-15 18:18:46.985498] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.240 [2024-04-15 18:18:46.985515] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.240 [2024-04-15 18:18:46.985529] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:58.240 [2024-04-15 18:18:46.985562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.240 qpair failed and we were unable to recover it. 00:31:58.240 [2024-04-15 18:18:46.995260] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.240 [2024-04-15 18:18:46.995406] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.240 [2024-04-15 18:18:46.995436] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.240 [2024-04-15 18:18:46.995459] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.240 [2024-04-15 18:18:46.995474] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:58.240 [2024-04-15 18:18:46.995507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.240 qpair failed and we were unable to recover it. 00:31:58.240 [2024-04-15 18:18:47.005267] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.240 [2024-04-15 18:18:47.005425] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.240 [2024-04-15 18:18:47.005454] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.240 [2024-04-15 18:18:47.005471] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.240 [2024-04-15 18:18:47.005484] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:58.240 [2024-04-15 18:18:47.005518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.240 qpair failed and we were unable to recover it. 
00:31:58.240 [2024-04-15 18:18:47.015264] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.240 [2024-04-15 18:18:47.015414] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.240 [2024-04-15 18:18:47.015443] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.240 [2024-04-15 18:18:47.015459] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.240 [2024-04-15 18:18:47.015474] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:58.240 [2024-04-15 18:18:47.015507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.240 qpair failed and we were unable to recover it. 00:31:58.240 [2024-04-15 18:18:47.025248] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.240 [2024-04-15 18:18:47.025406] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.240 [2024-04-15 18:18:47.025435] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.240 [2024-04-15 18:18:47.025452] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.240 [2024-04-15 18:18:47.025466] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:58.240 [2024-04-15 18:18:47.025499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.240 qpair failed and we were unable to recover it. 00:31:58.240 [2024-04-15 18:18:47.035255] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.240 [2024-04-15 18:18:47.035439] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.240 [2024-04-15 18:18:47.035469] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.240 [2024-04-15 18:18:47.035485] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.240 [2024-04-15 18:18:47.035499] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:58.240 [2024-04-15 18:18:47.035532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.240 qpair failed and we were unable to recover it. 
00:31:58.240 [2024-04-15 18:18:47.045291] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.240 [2024-04-15 18:18:47.045460] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.240 [2024-04-15 18:18:47.045489] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.240 [2024-04-15 18:18:47.045506] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.240 [2024-04-15 18:18:47.045519] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:58.240 [2024-04-15 18:18:47.045552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.240 qpair failed and we were unable to recover it. 00:31:58.240 [2024-04-15 18:18:47.055344] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.240 [2024-04-15 18:18:47.055480] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.240 [2024-04-15 18:18:47.055509] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.240 [2024-04-15 18:18:47.055525] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.240 [2024-04-15 18:18:47.055540] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:58.240 [2024-04-15 18:18:47.055573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.240 qpair failed and we were unable to recover it. 00:31:58.240 [2024-04-15 18:18:47.065348] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.240 [2024-04-15 18:18:47.065517] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.240 [2024-04-15 18:18:47.065545] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.240 [2024-04-15 18:18:47.065561] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.240 [2024-04-15 18:18:47.065576] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:58.241 [2024-04-15 18:18:47.065609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.241 qpair failed and we were unable to recover it. 
00:31:58.241 [2024-04-15 18:18:47.075391] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.241 [2024-04-15 18:18:47.075538] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.241 [2024-04-15 18:18:47.075566] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.241 [2024-04-15 18:18:47.075582] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.241 [2024-04-15 18:18:47.075595] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:58.241 [2024-04-15 18:18:47.075629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.241 qpair failed and we were unable to recover it. 00:31:58.241 [2024-04-15 18:18:47.085515] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.241 [2024-04-15 18:18:47.085661] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.241 [2024-04-15 18:18:47.085695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.241 [2024-04-15 18:18:47.085712] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.241 [2024-04-15 18:18:47.085726] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:58.241 [2024-04-15 18:18:47.085759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.241 qpair failed and we were unable to recover it. 00:31:58.241 [2024-04-15 18:18:47.095456] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.241 [2024-04-15 18:18:47.095590] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.241 [2024-04-15 18:18:47.095618] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.241 [2024-04-15 18:18:47.095635] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.241 [2024-04-15 18:18:47.095648] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:58.241 [2024-04-15 18:18:47.095681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.241 qpair failed and we were unable to recover it. 
00:31:58.241 [2024-04-15 18:18:47.105464] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.241 [2024-04-15 18:18:47.105603] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.241 [2024-04-15 18:18:47.105632] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.241 [2024-04-15 18:18:47.105648] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.241 [2024-04-15 18:18:47.105662] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:58.241 [2024-04-15 18:18:47.105694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.241 qpair failed and we were unable to recover it. 00:31:58.241 [2024-04-15 18:18:47.115466] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.241 [2024-04-15 18:18:47.115617] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.241 [2024-04-15 18:18:47.115646] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.241 [2024-04-15 18:18:47.115662] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.241 [2024-04-15 18:18:47.115676] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:58.241 [2024-04-15 18:18:47.115709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.241 qpair failed and we were unable to recover it. 00:31:58.241 [2024-04-15 18:18:47.125520] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.241 [2024-04-15 18:18:47.125673] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.241 [2024-04-15 18:18:47.125701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.241 [2024-04-15 18:18:47.125717] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.241 [2024-04-15 18:18:47.125731] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:58.241 [2024-04-15 18:18:47.125770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.241 qpair failed and we were unable to recover it. 
00:31:58.241 [2024-04-15 18:18:47.135553] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.241 [2024-04-15 18:18:47.135733] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.241 [2024-04-15 18:18:47.135760] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.241 [2024-04-15 18:18:47.135777] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.241 [2024-04-15 18:18:47.135791] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:58.241 [2024-04-15 18:18:47.135823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.241 qpair failed and we were unable to recover it. 00:31:58.241 [2024-04-15 18:18:47.145576] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.241 [2024-04-15 18:18:47.145716] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.241 [2024-04-15 18:18:47.145744] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.241 [2024-04-15 18:18:47.145760] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.241 [2024-04-15 18:18:47.145774] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:58.241 [2024-04-15 18:18:47.145807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.241 qpair failed and we were unable to recover it. 00:31:58.241 [2024-04-15 18:18:47.155592] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.241 [2024-04-15 18:18:47.155733] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.241 [2024-04-15 18:18:47.155762] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.241 [2024-04-15 18:18:47.155778] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.241 [2024-04-15 18:18:47.155792] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:58.241 [2024-04-15 18:18:47.155825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.241 qpair failed and we were unable to recover it. 
00:31:58.241 [2024-04-15 18:18:47.165621] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.241 [2024-04-15 18:18:47.165764] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.241 [2024-04-15 18:18:47.165792] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.241 [2024-04-15 18:18:47.165808] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.241 [2024-04-15 18:18:47.165823] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:58.241 [2024-04-15 18:18:47.165856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.241 qpair failed and we were unable to recover it. 00:31:58.241 [2024-04-15 18:18:47.175649] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.241 [2024-04-15 18:18:47.175797] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.241 [2024-04-15 18:18:47.175830] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.241 [2024-04-15 18:18:47.175847] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.241 [2024-04-15 18:18:47.175861] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:58.241 [2024-04-15 18:18:47.175894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.241 qpair failed and we were unable to recover it. 00:31:58.501 [2024-04-15 18:18:47.185749] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.501 [2024-04-15 18:18:47.185892] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.501 [2024-04-15 18:18:47.185922] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.501 [2024-04-15 18:18:47.185938] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.501 [2024-04-15 18:18:47.185952] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:58.501 [2024-04-15 18:18:47.185985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.501 qpair failed and we were unable to recover it. 
00:31:58.501 [2024-04-15 18:18:47.195721] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.501 [2024-04-15 18:18:47.195863] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.501 [2024-04-15 18:18:47.195891] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.501 [2024-04-15 18:18:47.195908] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.501 [2024-04-15 18:18:47.195922] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:58.501 [2024-04-15 18:18:47.195955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.501 qpair failed and we were unable to recover it. 00:31:58.501 [2024-04-15 18:18:47.205765] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.501 [2024-04-15 18:18:47.205940] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.501 [2024-04-15 18:18:47.205969] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.501 [2024-04-15 18:18:47.205987] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.501 [2024-04-15 18:18:47.206002] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:58.501 [2024-04-15 18:18:47.206035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.501 qpair failed and we were unable to recover it. 00:31:58.501 [2024-04-15 18:18:47.215786] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.501 [2024-04-15 18:18:47.215949] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.501 [2024-04-15 18:18:47.215977] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.501 [2024-04-15 18:18:47.215993] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.501 [2024-04-15 18:18:47.216007] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:58.501 [2024-04-15 18:18:47.216047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.501 qpair failed and we were unable to recover it. 
00:31:58.501 [2024-04-15 18:18:47.225797] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.501 [2024-04-15 18:18:47.225957] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.502 [2024-04-15 18:18:47.225985] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.502 [2024-04-15 18:18:47.226001] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.502 [2024-04-15 18:18:47.226015] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:58.502 [2024-04-15 18:18:47.226047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.502 qpair failed and we were unable to recover it. 00:31:58.502 [2024-04-15 18:18:47.235818] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.502 [2024-04-15 18:18:47.235959] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.502 [2024-04-15 18:18:47.235986] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.502 [2024-04-15 18:18:47.236002] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.502 [2024-04-15 18:18:47.236017] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:58.502 [2024-04-15 18:18:47.236049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.502 qpair failed and we were unable to recover it. 00:31:58.502 [2024-04-15 18:18:47.245879] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.502 [2024-04-15 18:18:47.246021] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.502 [2024-04-15 18:18:47.246048] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.502 [2024-04-15 18:18:47.246073] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.502 [2024-04-15 18:18:47.246088] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:58.502 [2024-04-15 18:18:47.246121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.502 qpair failed and we were unable to recover it. 
00:31:58.502 [2024-04-15 18:18:47.256013] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.502 [2024-04-15 18:18:47.256158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.502 [2024-04-15 18:18:47.256186] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.502 [2024-04-15 18:18:47.256202] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.502 [2024-04-15 18:18:47.256216] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:58.502 [2024-04-15 18:18:47.256249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.502 qpair failed and we were unable to recover it. 00:31:58.502 [2024-04-15 18:18:47.265917] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.502 [2024-04-15 18:18:47.266065] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.502 [2024-04-15 18:18:47.266094] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.502 [2024-04-15 18:18:47.266110] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.502 [2024-04-15 18:18:47.266124] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:58.502 [2024-04-15 18:18:47.266157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.502 qpair failed and we were unable to recover it. 00:31:58.502 [2024-04-15 18:18:47.275926] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.502 [2024-04-15 18:18:47.276076] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.502 [2024-04-15 18:18:47.276105] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.502 [2024-04-15 18:18:47.276121] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.502 [2024-04-15 18:18:47.276135] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:58.502 [2024-04-15 18:18:47.276168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.502 qpair failed and we were unable to recover it. 
00:31:58.502 [2024-04-15 18:18:47.285969] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.502 [2024-04-15 18:18:47.286135] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.502 [2024-04-15 18:18:47.286164] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.502 [2024-04-15 18:18:47.286180] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.502 [2024-04-15 18:18:47.286194] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:58.502 [2024-04-15 18:18:47.286227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.502 qpair failed and we were unable to recover it. 00:31:58.502 [2024-04-15 18:18:47.296044] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.502 [2024-04-15 18:18:47.296183] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.502 [2024-04-15 18:18:47.296211] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.502 [2024-04-15 18:18:47.296227] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.502 [2024-04-15 18:18:47.296241] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:58.502 [2024-04-15 18:18:47.296275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.502 qpair failed and we were unable to recover it. 00:31:58.502 [2024-04-15 18:18:47.306068] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.502 [2024-04-15 18:18:47.306241] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.502 [2024-04-15 18:18:47.306269] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.502 [2024-04-15 18:18:47.306285] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.502 [2024-04-15 18:18:47.306306] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:58.502 [2024-04-15 18:18:47.306340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.502 qpair failed and we were unable to recover it. 
00:31:58.502 [2024-04-15 18:18:47.316154] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:58.502 [2024-04-15 18:18:47.316295] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:58.502 [2024-04-15 18:18:47.316322] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:58.502 [2024-04-15 18:18:47.316339] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:58.502 [2024-04-15 18:18:47.316353] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:31:58.502 [2024-04-15 18:18:47.316386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:58.502 qpair failed and we were unable to recover it.
00:31:58.502 [2024-04-15 18:18:47.326087] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:58.502 [2024-04-15 18:18:47.326268] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:58.502 [2024-04-15 18:18:47.326295] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:58.502 [2024-04-15 18:18:47.326312] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:58.502 [2024-04-15 18:18:47.326326] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:31:58.502 [2024-04-15 18:18:47.326359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:58.502 qpair failed and we were unable to recover it.
00:31:58.502 [2024-04-15 18:18:47.336133] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:58.502 [2024-04-15 18:18:47.336267] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:58.502 [2024-04-15 18:18:47.336295] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:58.502 [2024-04-15 18:18:47.336311] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:58.502 [2024-04-15 18:18:47.336325] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:31:58.502 [2024-04-15 18:18:47.336358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:58.502 qpair failed and we were unable to recover it.
00:31:58.502 [2024-04-15 18:18:47.346151] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:58.503 [2024-04-15 18:18:47.346293] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:58.503 [2024-04-15 18:18:47.346322] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:58.503 [2024-04-15 18:18:47.346338] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:58.503 [2024-04-15 18:18:47.346352] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:31:58.503 [2024-04-15 18:18:47.346384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:58.503 qpair failed and we were unable to recover it.
00:31:58.503 [2024-04-15 18:18:47.356230] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:58.503 [2024-04-15 18:18:47.356371] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:58.503 [2024-04-15 18:18:47.356399] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:58.503 [2024-04-15 18:18:47.356416] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:58.503 [2024-04-15 18:18:47.356429] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:31:58.503 [2024-04-15 18:18:47.356462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:58.503 qpair failed and we were unable to recover it.
00:31:58.503 [2024-04-15 18:18:47.366283] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:58.503 [2024-04-15 18:18:47.366415] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:58.503 [2024-04-15 18:18:47.366444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:58.503 [2024-04-15 18:18:47.366460] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:58.503 [2024-04-15 18:18:47.366474] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:31:58.503 [2024-04-15 18:18:47.366507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:58.503 qpair failed and we were unable to recover it.
00:31:58.503 [2024-04-15 18:18:47.376373] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:58.503 [2024-04-15 18:18:47.376521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:58.503 [2024-04-15 18:18:47.376548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:58.503 [2024-04-15 18:18:47.376564] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:58.503 [2024-04-15 18:18:47.376579] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:31:58.503 [2024-04-15 18:18:47.376611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:58.503 qpair failed and we were unable to recover it.
00:31:58.503 [2024-04-15 18:18:47.386304] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:58.503 [2024-04-15 18:18:47.386458] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:58.503 [2024-04-15 18:18:47.386486] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:58.503 [2024-04-15 18:18:47.386502] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:58.503 [2024-04-15 18:18:47.386516] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:31:58.503 [2024-04-15 18:18:47.386549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:58.503 qpair failed and we were unable to recover it.
00:31:58.503 [2024-04-15 18:18:47.396360] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:58.503 [2024-04-15 18:18:47.396503] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:58.503 [2024-04-15 18:18:47.396530] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:58.503 [2024-04-15 18:18:47.396552] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:58.503 [2024-04-15 18:18:47.396567] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:31:58.503 [2024-04-15 18:18:47.396600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:58.503 qpair failed and we were unable to recover it.
00:31:58.503 [2024-04-15 18:18:47.406358] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:58.503 [2024-04-15 18:18:47.406499] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:58.503 [2024-04-15 18:18:47.406527] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:58.503 [2024-04-15 18:18:47.406543] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:58.503 [2024-04-15 18:18:47.406557] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:31:58.503 [2024-04-15 18:18:47.406590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:58.503 qpair failed and we were unable to recover it.
00:31:58.503 [2024-04-15 18:18:47.416435] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:58.503 [2024-04-15 18:18:47.416565] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:58.503 [2024-04-15 18:18:47.416593] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:58.503 [2024-04-15 18:18:47.416610] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:58.503 [2024-04-15 18:18:47.416623] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:31:58.503 [2024-04-15 18:18:47.416656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:58.503 qpair failed and we were unable to recover it.
00:31:58.503 [2024-04-15 18:18:47.426498] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:58.503 [2024-04-15 18:18:47.426677] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:58.503 [2024-04-15 18:18:47.426705] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:58.503 [2024-04-15 18:18:47.426720] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:58.503 [2024-04-15 18:18:47.426734] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:31:58.503 [2024-04-15 18:18:47.426767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:58.503 qpair failed and we were unable to recover it.
00:31:58.503 [2024-04-15 18:18:47.436461] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:58.503 [2024-04-15 18:18:47.436610] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:58.503 [2024-04-15 18:18:47.436637] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:58.503 [2024-04-15 18:18:47.436653] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:58.503 [2024-04-15 18:18:47.436667] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:31:58.503 [2024-04-15 18:18:47.436699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:58.503 qpair failed and we were unable to recover it.
00:31:58.503 [2024-04-15 18:18:47.446469] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:58.504 [2024-04-15 18:18:47.446607] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:58.504 [2024-04-15 18:18:47.446635] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:58.504 [2024-04-15 18:18:47.446651] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:58.504 [2024-04-15 18:18:47.446665] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:31:58.504 [2024-04-15 18:18:47.446697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:58.504 qpair failed and we were unable to recover it.
00:31:58.763 [2024-04-15 18:18:47.456489] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:58.763 [2024-04-15 18:18:47.456641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:58.763 [2024-04-15 18:18:47.456670] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:58.763 [2024-04-15 18:18:47.456687] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:58.763 [2024-04-15 18:18:47.456702] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:31:58.763 [2024-04-15 18:18:47.456736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:58.763 qpair failed and we were unable to recover it.
00:31:58.763 [2024-04-15 18:18:47.466526] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:58.763 [2024-04-15 18:18:47.466667] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:58.763 [2024-04-15 18:18:47.466698] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:58.763 [2024-04-15 18:18:47.466714] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:58.763 [2024-04-15 18:18:47.466728] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:31:58.763 [2024-04-15 18:18:47.466761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:58.763 qpair failed and we were unable to recover it.
00:31:58.763 [2024-04-15 18:18:47.476584] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:58.763 [2024-04-15 18:18:47.476727] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:58.763 [2024-04-15 18:18:47.476755] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:58.763 [2024-04-15 18:18:47.476771] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:58.763 [2024-04-15 18:18:47.476786] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:31:58.763 [2024-04-15 18:18:47.476819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:58.763 qpair failed and we were unable to recover it.
00:31:58.763 [2024-04-15 18:18:47.486602] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:58.763 [2024-04-15 18:18:47.486743] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:58.763 [2024-04-15 18:18:47.486777] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:58.763 [2024-04-15 18:18:47.486794] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:58.763 [2024-04-15 18:18:47.486809] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:31:58.763 [2024-04-15 18:18:47.486842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:58.763 qpair failed and we were unable to recover it.
00:31:58.763 [2024-04-15 18:18:47.496619] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:58.763 [2024-04-15 18:18:47.496790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:58.763 [2024-04-15 18:18:47.496818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:58.763 [2024-04-15 18:18:47.496834] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:58.763 [2024-04-15 18:18:47.496848] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:31:58.763 [2024-04-15 18:18:47.496881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:58.763 qpair failed and we were unable to recover it.
00:31:58.763 [2024-04-15 18:18:47.506640] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:58.763 [2024-04-15 18:18:47.506784] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:58.763 [2024-04-15 18:18:47.506812] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:58.763 [2024-04-15 18:18:47.506828] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:58.763 [2024-04-15 18:18:47.506843] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:31:58.763 [2024-04-15 18:18:47.506875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:58.763 qpair failed and we were unable to recover it.
00:31:58.763 [2024-04-15 18:18:47.516680] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:58.763 [2024-04-15 18:18:47.516867] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:58.763 [2024-04-15 18:18:47.516894] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:58.763 [2024-04-15 18:18:47.516911] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:58.763 [2024-04-15 18:18:47.516925] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:31:58.763 [2024-04-15 18:18:47.516958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:58.763 qpair failed and we were unable to recover it.
00:31:58.763 [2024-04-15 18:18:47.526754] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:58.763 [2024-04-15 18:18:47.526885] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:58.763 [2024-04-15 18:18:47.526913] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:58.764 [2024-04-15 18:18:47.526931] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:58.764 [2024-04-15 18:18:47.526945] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:31:58.764 [2024-04-15 18:18:47.526985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:58.764 qpair failed and we were unable to recover it.
00:31:58.764 [2024-04-15 18:18:47.536731] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:58.764 [2024-04-15 18:18:47.536864] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:58.764 [2024-04-15 18:18:47.536892] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:58.764 [2024-04-15 18:18:47.536909] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:58.764 [2024-04-15 18:18:47.536923] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:31:58.764 [2024-04-15 18:18:47.536957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:58.764 qpair failed and we were unable to recover it.
00:31:58.764 [2024-04-15 18:18:47.546740] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:58.764 [2024-04-15 18:18:47.546890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:58.764 [2024-04-15 18:18:47.546919] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:58.764 [2024-04-15 18:18:47.546935] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:58.764 [2024-04-15 18:18:47.546949] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:31:58.764 [2024-04-15 18:18:47.546982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:58.764 qpair failed and we were unable to recover it.
00:31:58.764 [2024-04-15 18:18:47.556825] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:58.764 [2024-04-15 18:18:47.557005] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:58.764 [2024-04-15 18:18:47.557033] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:58.764 [2024-04-15 18:18:47.557049] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:58.764 [2024-04-15 18:18:47.557076] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:31:58.764 [2024-04-15 18:18:47.557111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:58.764 qpair failed and we were unable to recover it.
00:31:58.764 [2024-04-15 18:18:47.566861] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:58.764 [2024-04-15 18:18:47.567001] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:58.764 [2024-04-15 18:18:47.567028] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:58.764 [2024-04-15 18:18:47.567044] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:58.764 [2024-04-15 18:18:47.567067] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:31:58.764 [2024-04-15 18:18:47.567103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:58.764 qpair failed and we were unable to recover it.
00:31:58.764 [2024-04-15 18:18:47.576938] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:58.764 [2024-04-15 18:18:47.577089] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:58.764 [2024-04-15 18:18:47.577123] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:58.764 [2024-04-15 18:18:47.577141] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:58.764 [2024-04-15 18:18:47.577155] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:31:58.764 [2024-04-15 18:18:47.577188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:58.764 qpair failed and we were unable to recover it.
00:31:58.764 [2024-04-15 18:18:47.586875] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:58.764 [2024-04-15 18:18:47.587026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:58.764 [2024-04-15 18:18:47.587055] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:58.764 [2024-04-15 18:18:47.587081] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:58.764 [2024-04-15 18:18:47.587096] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:31:58.764 [2024-04-15 18:18:47.587128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:58.764 qpair failed and we were unable to recover it.
00:31:58.764 [2024-04-15 18:18:47.596940] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:58.764 [2024-04-15 18:18:47.597108] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:58.764 [2024-04-15 18:18:47.597136] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:58.764 [2024-04-15 18:18:47.597152] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:58.764 [2024-04-15 18:18:47.597167] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:31:58.764 [2024-04-15 18:18:47.597200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:58.764 qpair failed and we were unable to recover it.
00:31:58.764 [2024-04-15 18:18:47.606949] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:58.764 [2024-04-15 18:18:47.607091] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:58.764 [2024-04-15 18:18:47.607119] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:58.764 [2024-04-15 18:18:47.607135] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:58.764 [2024-04-15 18:18:47.607149] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:31:58.764 [2024-04-15 18:18:47.607183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:58.764 qpair failed and we were unable to recover it.
00:31:58.764 [2024-04-15 18:18:47.616957] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:58.764 [2024-04-15 18:18:47.617106] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:58.764 [2024-04-15 18:18:47.617136] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:58.764 [2024-04-15 18:18:47.617152] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:58.764 [2024-04-15 18:18:47.617166] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:31:58.764 [2024-04-15 18:18:47.617206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:58.764 qpair failed and we were unable to recover it.
00:31:58.764 [2024-04-15 18:18:47.626984] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:58.764 [2024-04-15 18:18:47.627138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:58.764 [2024-04-15 18:18:47.627168] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:58.764 [2024-04-15 18:18:47.627184] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:58.764 [2024-04-15 18:18:47.627198] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:31:58.764 [2024-04-15 18:18:47.627231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:58.764 qpair failed and we were unable to recover it.
00:31:58.764 [2024-04-15 18:18:47.637050] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:58.764 [2024-04-15 18:18:47.637215] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:58.764 [2024-04-15 18:18:47.637243] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:58.764 [2024-04-15 18:18:47.637260] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:58.764 [2024-04-15 18:18:47.637274] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:31:58.764 [2024-04-15 18:18:47.637308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:58.764 qpair failed and we were unable to recover it.
00:31:58.764 [2024-04-15 18:18:47.647039] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:58.764 [2024-04-15 18:18:47.647179] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:58.764 [2024-04-15 18:18:47.647208] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:58.764 [2024-04-15 18:18:47.647225] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:58.764 [2024-04-15 18:18:47.647239] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:31:58.764 [2024-04-15 18:18:47.647273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:58.764 qpair failed and we were unable to recover it.
00:31:58.764 [2024-04-15 18:18:47.657082] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:58.764 [2024-04-15 18:18:47.657216] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:58.764 [2024-04-15 18:18:47.657244] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:58.764 [2024-04-15 18:18:47.657260] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:58.764 [2024-04-15 18:18:47.657274] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:31:58.765 [2024-04-15 18:18:47.657307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:58.765 qpair failed and we were unable to recover it.
00:31:58.765 [2024-04-15 18:18:47.667151] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:58.765 [2024-04-15 18:18:47.667306] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:58.765 [2024-04-15 18:18:47.667343] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:58.765 [2024-04-15 18:18:47.667361] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:58.765 [2024-04-15 18:18:47.667375] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:31:58.765 [2024-04-15 18:18:47.667408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:58.765 qpair failed and we were unable to recover it.
00:31:58.765 [2024-04-15 18:18:47.677172] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:58.765 [2024-04-15 18:18:47.677373] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:58.765 [2024-04-15 18:18:47.677402] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:58.765 [2024-04-15 18:18:47.677418] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:58.765 [2024-04-15 18:18:47.677432] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:31:58.765 [2024-04-15 18:18:47.677464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:58.765 qpair failed and we were unable to recover it.
00:31:58.765 [2024-04-15 18:18:47.687214] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:58.765 [2024-04-15 18:18:47.687375] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:58.765 [2024-04-15 18:18:47.687404] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:58.765 [2024-04-15 18:18:47.687420] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:58.765 [2024-04-15 18:18:47.687434] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:31:58.765 [2024-04-15 18:18:47.687467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:58.765 qpair failed and we were unable to recover it.
00:31:58.765 [2024-04-15 18:18:47.697194] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:58.765 [2024-04-15 18:18:47.697329] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:58.765 [2024-04-15 18:18:47.697357] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:58.765 [2024-04-15 18:18:47.697373] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:58.765 [2024-04-15 18:18:47.697388] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:31:58.765 [2024-04-15 18:18:47.697423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:58.765 qpair failed and we were unable to recover it.
00:31:58.765 [2024-04-15 18:18:47.707232] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:58.765 [2024-04-15 18:18:47.707377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:58.765 [2024-04-15 18:18:47.707405] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:58.765 [2024-04-15 18:18:47.707422] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:58.765 [2024-04-15 18:18:47.707442] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:31:58.765 [2024-04-15 18:18:47.707475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:58.765 qpair failed and we were unable to recover it.
00:31:59.024 [2024-04-15 18:18:47.717401] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:59.024 [2024-04-15 18:18:47.717574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:59.024 [2024-04-15 18:18:47.717603] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:59.024 [2024-04-15 18:18:47.717619] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:59.024 [2024-04-15 18:18:47.717633] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:31:59.024 [2024-04-15 18:18:47.717666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:59.024 qpair failed and we were unable to recover it.
00:31:59.024 [2024-04-15 18:18:47.727273] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:59.024 [2024-04-15 18:18:47.727414] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:59.024 [2024-04-15 18:18:47.727443] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:59.024 [2024-04-15 18:18:47.727459] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:59.024 [2024-04-15 18:18:47.727473] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:31:59.024 [2024-04-15 18:18:47.727507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:59.024 qpair failed and we were unable to recover it.
00:31:59.024 [2024-04-15 18:18:47.737422] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:59.024 [2024-04-15 18:18:47.737610] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:59.024 [2024-04-15 18:18:47.737639] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:59.024 [2024-04-15 18:18:47.737656] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:59.024 [2024-04-15 18:18:47.737670] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:31:59.024 [2024-04-15 18:18:47.737703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:59.024 qpair failed and we were unable to recover it.
00:31:59.024 [2024-04-15 18:18:47.747347] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:59.024 [2024-04-15 18:18:47.747478] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:59.024 [2024-04-15 18:18:47.747506] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:59.024 [2024-04-15 18:18:47.747522] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:59.024 [2024-04-15 18:18:47.747536] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:31:59.024 [2024-04-15 18:18:47.747569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:59.024 qpair failed and we were unable to recover it.
00:31:59.024 [2024-04-15 18:18:47.757516] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:59.024 [2024-04-15 18:18:47.757741] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:59.024 [2024-04-15 18:18:47.757769] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:59.024 [2024-04-15 18:18:47.757785] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:59.024 [2024-04-15 18:18:47.757799] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:31:59.024 [2024-04-15 18:18:47.757831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:59.024 qpair failed and we were unable to recover it.
00:31:59.024 [2024-04-15 18:18:47.767453] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:59.024 [2024-04-15 18:18:47.767593] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:59.024 [2024-04-15 18:18:47.767622] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:59.024 [2024-04-15 18:18:47.767638] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:59.024 [2024-04-15 18:18:47.767652] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:31:59.025 [2024-04-15 18:18:47.767685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:59.025 qpair failed and we were unable to recover it.
00:31:59.025 [2024-04-15 18:18:47.777533] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:59.025 [2024-04-15 18:18:47.777715] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:59.025 [2024-04-15 18:18:47.777749] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:59.025 [2024-04-15 18:18:47.777765] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:59.025 [2024-04-15 18:18:47.777779] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:31:59.025 [2024-04-15 18:18:47.777811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:59.025 qpair failed and we were unable to recover it.
00:31:59.025 [2024-04-15 18:18:47.787533] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:59.025 [2024-04-15 18:18:47.787666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:59.025 [2024-04-15 18:18:47.787694] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:59.025 [2024-04-15 18:18:47.787711] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:59.025 [2024-04-15 18:18:47.787724] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:31:59.025 [2024-04-15 18:18:47.787757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:59.025 qpair failed and we were unable to recover it.
00:31:59.025 [2024-04-15 18:18:47.797591] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:59.025 [2024-04-15 18:18:47.797735] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:59.025 [2024-04-15 18:18:47.797763] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:59.025 [2024-04-15 18:18:47.797786] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:59.025 [2024-04-15 18:18:47.797801] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:31:59.025 [2024-04-15 18:18:47.797833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:59.025 qpair failed and we were unable to recover it.
00:31:59.025 [2024-04-15 18:18:47.807556] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:59.025 [2024-04-15 18:18:47.807689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:59.025 [2024-04-15 18:18:47.807717] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:59.025 [2024-04-15 18:18:47.807733] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:59.025 [2024-04-15 18:18:47.807747] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:31:59.025 [2024-04-15 18:18:47.807780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:59.025 qpair failed and we were unable to recover it.
00:31:59.025 [2024-04-15 18:18:47.817544] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:59.025 [2024-04-15 18:18:47.817674] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:59.025 [2024-04-15 18:18:47.817702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:59.025 [2024-04-15 18:18:47.817718] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:59.025 [2024-04-15 18:18:47.817732] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:31:59.025 [2024-04-15 18:18:47.817765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:59.025 qpair failed and we were unable to recover it.
00:31:59.025 [2024-04-15 18:18:47.827671] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:59.025 [2024-04-15 18:18:47.827842] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:59.025 [2024-04-15 18:18:47.827870] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:59.025 [2024-04-15 18:18:47.827886] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:59.025 [2024-04-15 18:18:47.827901] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:31:59.025 [2024-04-15 18:18:47.827933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:59.025 qpair failed and we were unable to recover it.
00:31:59.025 [2024-04-15 18:18:47.837737] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:59.025 [2024-04-15 18:18:47.837894] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:59.025 [2024-04-15 18:18:47.837921] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:59.025 [2024-04-15 18:18:47.837938] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:59.025 [2024-04-15 18:18:47.837951] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:31:59.025 [2024-04-15 18:18:47.837985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:59.025 qpair failed and we were unable to recover it.
00:31:59.025 [2024-04-15 18:18:47.847687] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:59.025 [2024-04-15 18:18:47.847882] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:59.025 [2024-04-15 18:18:47.847910] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:59.025 [2024-04-15 18:18:47.847926] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:59.025 [2024-04-15 18:18:47.847940] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:31:59.025 [2024-04-15 18:18:47.847972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:59.025 qpair failed and we were unable to recover it.
00:31:59.025 [2024-04-15 18:18:47.857652] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:59.025 [2024-04-15 18:18:47.857787] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:59.025 [2024-04-15 18:18:47.857814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:59.025 [2024-04-15 18:18:47.857831] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:59.025 [2024-04-15 18:18:47.857845] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:31:59.025 [2024-04-15 18:18:47.857878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:59.025 qpair failed and we were unable to recover it.
00:31:59.025 [2024-04-15 18:18:47.867824] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:59.025 [2024-04-15 18:18:47.867989] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:59.025 [2024-04-15 18:18:47.868017] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:59.025 [2024-04-15 18:18:47.868033] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:59.025 [2024-04-15 18:18:47.868046] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:31:59.025 [2024-04-15 18:18:47.868087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:59.025 qpair failed and we were unable to recover it.
00:31:59.025 [2024-04-15 18:18:47.877845] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:59.025 [2024-04-15 18:18:47.877989] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:59.025 [2024-04-15 18:18:47.878017] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:59.025 [2024-04-15 18:18:47.878033] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:59.025 [2024-04-15 18:18:47.878046] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:31:59.025 [2024-04-15 18:18:47.878087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:59.025 qpair failed and we were unable to recover it.
00:31:59.025 [2024-04-15 18:18:47.887754] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.025 [2024-04-15 18:18:47.887908] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.025 [2024-04-15 18:18:47.887936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.025 [2024-04-15 18:18:47.887959] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.025 [2024-04-15 18:18:47.887974] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:59.025 [2024-04-15 18:18:47.888006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.025 qpair failed and we were unable to recover it. 00:31:59.025 [2024-04-15 18:18:47.897764] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.026 [2024-04-15 18:18:47.897905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.026 [2024-04-15 18:18:47.897933] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.026 [2024-04-15 18:18:47.897949] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.026 [2024-04-15 18:18:47.897963] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:59.026 [2024-04-15 18:18:47.897996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.026 qpair failed and we were unable to recover it. 00:31:59.026 [2024-04-15 18:18:47.907827] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.026 [2024-04-15 18:18:47.908003] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.026 [2024-04-15 18:18:47.908031] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.026 [2024-04-15 18:18:47.908047] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.026 [2024-04-15 18:18:47.908069] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:59.026 [2024-04-15 18:18:47.908104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.026 qpair failed and we were unable to recover it. 
00:31:59.026 [2024-04-15 18:18:47.917887] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.026 [2024-04-15 18:18:47.918040] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.026 [2024-04-15 18:18:47.918077] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.026 [2024-04-15 18:18:47.918094] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.026 [2024-04-15 18:18:47.918108] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:59.026 [2024-04-15 18:18:47.918141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.026 qpair failed and we were unable to recover it. 00:31:59.026 [2024-04-15 18:18:47.927870] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.026 [2024-04-15 18:18:47.928086] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.026 [2024-04-15 18:18:47.928115] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.026 [2024-04-15 18:18:47.928131] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.026 [2024-04-15 18:18:47.928145] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:59.026 [2024-04-15 18:18:47.928179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.026 qpair failed and we were unable to recover it. 00:31:59.026 [2024-04-15 18:18:47.937929] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.026 [2024-04-15 18:18:47.938072] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.026 [2024-04-15 18:18:47.938102] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.026 [2024-04-15 18:18:47.938118] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.026 [2024-04-15 18:18:47.938131] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:59.026 [2024-04-15 18:18:47.938164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.026 qpair failed and we were unable to recover it. 
00:31:59.026 [2024-04-15 18:18:47.947953] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.026 [2024-04-15 18:18:47.948128] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.026 [2024-04-15 18:18:47.948156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.026 [2024-04-15 18:18:47.948172] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.026 [2024-04-15 18:18:47.948186] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:59.026 [2024-04-15 18:18:47.948219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.026 qpair failed and we were unable to recover it. 00:31:59.026 [2024-04-15 18:18:47.958002] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.026 [2024-04-15 18:18:47.958154] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.026 [2024-04-15 18:18:47.958182] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.026 [2024-04-15 18:18:47.958199] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.026 [2024-04-15 18:18:47.958214] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:59.026 [2024-04-15 18:18:47.958246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.026 qpair failed and we were unable to recover it. 00:31:59.026 [2024-04-15 18:18:47.967981] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.026 [2024-04-15 18:18:47.968115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.026 [2024-04-15 18:18:47.968143] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.026 [2024-04-15 18:18:47.968159] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.026 [2024-04-15 18:18:47.968173] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:59.026 [2024-04-15 18:18:47.968207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.026 qpair failed and we were unable to recover it. 
00:31:59.286 [2024-04-15 18:18:47.978025] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.286 [2024-04-15 18:18:47.978184] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.286 [2024-04-15 18:18:47.978220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.286 [2024-04-15 18:18:47.978238] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.286 [2024-04-15 18:18:47.978252] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:59.286 [2024-04-15 18:18:47.978286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.286 qpair failed and we were unable to recover it. 00:31:59.286 [2024-04-15 18:18:47.988104] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.286 [2024-04-15 18:18:47.988272] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.286 [2024-04-15 18:18:47.988301] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.286 [2024-04-15 18:18:47.988317] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.286 [2024-04-15 18:18:47.988331] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:59.286 [2024-04-15 18:18:47.988365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.286 qpair failed and we were unable to recover it. 00:31:59.286 [2024-04-15 18:18:47.998079] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.286 [2024-04-15 18:18:47.998224] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.286 [2024-04-15 18:18:47.998253] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.286 [2024-04-15 18:18:47.998269] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.286 [2024-04-15 18:18:47.998283] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:59.286 [2024-04-15 18:18:47.998316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.286 qpair failed and we were unable to recover it. 
00:31:59.286 [2024-04-15 18:18:48.008077] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.286 [2024-04-15 18:18:48.008216] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.286 [2024-04-15 18:18:48.008244] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.287 [2024-04-15 18:18:48.008261] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.287 [2024-04-15 18:18:48.008274] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:59.287 [2024-04-15 18:18:48.008307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.287 qpair failed and we were unable to recover it. 00:31:59.287 [2024-04-15 18:18:48.018178] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.287 [2024-04-15 18:18:48.018310] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.287 [2024-04-15 18:18:48.018339] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.287 [2024-04-15 18:18:48.018354] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.287 [2024-04-15 18:18:48.018368] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:59.287 [2024-04-15 18:18:48.018407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.287 qpair failed and we were unable to recover it. 00:31:59.287 [2024-04-15 18:18:48.028163] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.287 [2024-04-15 18:18:48.028325] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.287 [2024-04-15 18:18:48.028353] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.287 [2024-04-15 18:18:48.028369] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.287 [2024-04-15 18:18:48.028383] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:59.287 [2024-04-15 18:18:48.028415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.287 qpair failed and we were unable to recover it. 
00:31:59.287 [2024-04-15 18:18:48.038208] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.287 [2024-04-15 18:18:48.038365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.287 [2024-04-15 18:18:48.038393] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.287 [2024-04-15 18:18:48.038410] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.287 [2024-04-15 18:18:48.038423] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:59.287 [2024-04-15 18:18:48.038456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.287 qpair failed and we were unable to recover it. 00:31:59.287 [2024-04-15 18:18:48.048177] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.287 [2024-04-15 18:18:48.048316] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.287 [2024-04-15 18:18:48.048344] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.287 [2024-04-15 18:18:48.048360] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.287 [2024-04-15 18:18:48.048374] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:59.287 [2024-04-15 18:18:48.048406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.287 qpair failed and we were unable to recover it. 00:31:59.287 [2024-04-15 18:18:48.058226] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.287 [2024-04-15 18:18:48.058369] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.287 [2024-04-15 18:18:48.058396] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.287 [2024-04-15 18:18:48.058412] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.287 [2024-04-15 18:18:48.058427] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:59.287 [2024-04-15 18:18:48.058459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.287 qpair failed and we were unable to recover it. 
00:31:59.287 [2024-04-15 18:18:48.068256] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.287 [2024-04-15 18:18:48.068392] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.287 [2024-04-15 18:18:48.068425] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.287 [2024-04-15 18:18:48.068442] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.287 [2024-04-15 18:18:48.068456] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:59.287 [2024-04-15 18:18:48.068489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.287 qpair failed and we were unable to recover it. 00:31:59.287 [2024-04-15 18:18:48.078336] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.287 [2024-04-15 18:18:48.078487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.287 [2024-04-15 18:18:48.078515] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.287 [2024-04-15 18:18:48.078531] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.287 [2024-04-15 18:18:48.078545] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:59.287 [2024-04-15 18:18:48.078577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.287 qpair failed and we were unable to recover it. 00:31:59.287 [2024-04-15 18:18:48.088320] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.287 [2024-04-15 18:18:48.088454] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.287 [2024-04-15 18:18:48.088482] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.287 [2024-04-15 18:18:48.088499] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.287 [2024-04-15 18:18:48.088513] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:59.287 [2024-04-15 18:18:48.088546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.287 qpair failed and we were unable to recover it. 
00:31:59.287 [2024-04-15 18:18:48.098340] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.287 [2024-04-15 18:18:48.098469] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.287 [2024-04-15 18:18:48.098497] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.287 [2024-04-15 18:18:48.098514] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.287 [2024-04-15 18:18:48.098528] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:59.287 [2024-04-15 18:18:48.098561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.287 qpair failed and we were unable to recover it. 00:31:59.287 [2024-04-15 18:18:48.108376] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.287 [2024-04-15 18:18:48.108514] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.287 [2024-04-15 18:18:48.108542] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.287 [2024-04-15 18:18:48.108559] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.287 [2024-04-15 18:18:48.108578] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:59.287 [2024-04-15 18:18:48.108612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.287 qpair failed and we were unable to recover it. 00:31:59.287 [2024-04-15 18:18:48.118427] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.287 [2024-04-15 18:18:48.118583] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.287 [2024-04-15 18:18:48.118610] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.287 [2024-04-15 18:18:48.118627] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.287 [2024-04-15 18:18:48.118640] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:59.287 [2024-04-15 18:18:48.118673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.287 qpair failed and we were unable to recover it. 
00:31:59.287 [2024-04-15 18:18:48.128417] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.287 [2024-04-15 18:18:48.128563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.287 [2024-04-15 18:18:48.128590] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.287 [2024-04-15 18:18:48.128606] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.287 [2024-04-15 18:18:48.128620] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:59.287 [2024-04-15 18:18:48.128653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.287 qpair failed and we were unable to recover it. 00:31:59.287 [2024-04-15 18:18:48.138469] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.287 [2024-04-15 18:18:48.138607] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.287 [2024-04-15 18:18:48.138634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.287 [2024-04-15 18:18:48.138651] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.287 [2024-04-15 18:18:48.138665] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:59.287 [2024-04-15 18:18:48.138697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.287 qpair failed and we were unable to recover it. 00:31:59.287 [2024-04-15 18:18:48.148501] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.288 [2024-04-15 18:18:48.148680] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.288 [2024-04-15 18:18:48.148707] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.288 [2024-04-15 18:18:48.148723] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.288 [2024-04-15 18:18:48.148737] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:59.288 [2024-04-15 18:18:48.148769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.288 qpair failed and we were unable to recover it. 
00:31:59.288 [2024-04-15 18:18:48.158552] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.288 [2024-04-15 18:18:48.158697] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.288 [2024-04-15 18:18:48.158725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.288 [2024-04-15 18:18:48.158741] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.288 [2024-04-15 18:18:48.158755] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:59.288 [2024-04-15 18:18:48.158787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.288 qpair failed and we were unable to recover it. 00:31:59.288 [2024-04-15 18:18:48.168542] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.288 [2024-04-15 18:18:48.168677] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.288 [2024-04-15 18:18:48.168704] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.288 [2024-04-15 18:18:48.168719] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.288 [2024-04-15 18:18:48.168733] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:59.288 [2024-04-15 18:18:48.168766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.288 qpair failed and we were unable to recover it. 00:31:59.288 [2024-04-15 18:18:48.178640] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.288 [2024-04-15 18:18:48.178777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.288 [2024-04-15 18:18:48.178805] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.288 [2024-04-15 18:18:48.178832] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.288 [2024-04-15 18:18:48.178846] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:59.288 [2024-04-15 18:18:48.178879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.288 qpair failed and we were unable to recover it. 
00:31:59.288 [2024-04-15 18:18:48.188618] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.288 [2024-04-15 18:18:48.188756] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.288 [2024-04-15 18:18:48.188784] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.288 [2024-04-15 18:18:48.188800] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.288 [2024-04-15 18:18:48.188814] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:59.288 [2024-04-15 18:18:48.188847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.288 qpair failed and we were unable to recover it. 00:31:59.288 [2024-04-15 18:18:48.198649] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.288 [2024-04-15 18:18:48.198785] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.288 [2024-04-15 18:18:48.198813] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.288 [2024-04-15 18:18:48.198834] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.288 [2024-04-15 18:18:48.198849] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:59.288 [2024-04-15 18:18:48.198881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.288 qpair failed and we were unable to recover it. 00:31:59.288 [2024-04-15 18:18:48.208658] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.288 [2024-04-15 18:18:48.208795] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.288 [2024-04-15 18:18:48.208823] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.288 [2024-04-15 18:18:48.208839] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.288 [2024-04-15 18:18:48.208853] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:59.288 [2024-04-15 18:18:48.208886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.288 qpair failed and we were unable to recover it. 
00:31:59.288 [2024-04-15 18:18:48.218689] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.288 [2024-04-15 18:18:48.218824] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.288 [2024-04-15 18:18:48.218852] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.288 [2024-04-15 18:18:48.218869] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.288 [2024-04-15 18:18:48.218883] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:59.288 [2024-04-15 18:18:48.218916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.288 qpair failed and we were unable to recover it. 00:31:59.288 [2024-04-15 18:18:48.228780] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.288 [2024-04-15 18:18:48.228930] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.288 [2024-04-15 18:18:48.228958] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.288 [2024-04-15 18:18:48.228973] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.288 [2024-04-15 18:18:48.228988] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:59.288 [2024-04-15 18:18:48.229021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.288 qpair failed and we were unable to recover it. 00:31:59.548 [2024-04-15 18:18:48.238822] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.548 [2024-04-15 18:18:48.238978] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.548 [2024-04-15 18:18:48.239007] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.548 [2024-04-15 18:18:48.239024] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.548 [2024-04-15 18:18:48.239038] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:59.548 [2024-04-15 18:18:48.239080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.548 qpair failed and we were unable to recover it. 
00:31:59.548 [2024-04-15 18:18:48.248776] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.548 [2024-04-15 18:18:48.248921] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.548 [2024-04-15 18:18:48.248951] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.548 [2024-04-15 18:18:48.248967] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.548 [2024-04-15 18:18:48.248981] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:59.548 [2024-04-15 18:18:48.249014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.548 qpair failed and we were unable to recover it. 00:31:59.548 [2024-04-15 18:18:48.258825] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.548 [2024-04-15 18:18:48.258964] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.548 [2024-04-15 18:18:48.258993] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.548 [2024-04-15 18:18:48.259009] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.548 [2024-04-15 18:18:48.259023] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:59.548 [2024-04-15 18:18:48.259055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.548 qpair failed and we were unable to recover it. 00:31:59.548 [2024-04-15 18:18:48.268831] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.548 [2024-04-15 18:18:48.268964] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.548 [2024-04-15 18:18:48.268992] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.548 [2024-04-15 18:18:48.269008] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.548 [2024-04-15 18:18:48.269022] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:59.548 [2024-04-15 18:18:48.269055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.548 qpair failed and we were unable to recover it. 
00:31:59.548 [2024-04-15 18:18:48.278942] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.548 [2024-04-15 18:18:48.279087] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.548 [2024-04-15 18:18:48.279116] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.548 [2024-04-15 18:18:48.279133] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.548 [2024-04-15 18:18:48.279146] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:59.548 [2024-04-15 18:18:48.279179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.548 qpair failed and we were unable to recover it. 00:31:59.548 [2024-04-15 18:18:48.288861] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.548 [2024-04-15 18:18:48.288996] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.548 [2024-04-15 18:18:48.289024] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.548 [2024-04-15 18:18:48.289046] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.548 [2024-04-15 18:18:48.289070] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:59.548 [2024-04-15 18:18:48.289105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.548 qpair failed and we were unable to recover it. 00:31:59.548 [2024-04-15 18:18:48.298927] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.548 [2024-04-15 18:18:48.299067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.548 [2024-04-15 18:18:48.299095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.548 [2024-04-15 18:18:48.299113] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.548 [2024-04-15 18:18:48.299127] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:59.548 [2024-04-15 18:18:48.299161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.548 qpair failed and we were unable to recover it. 
00:31:59.548 [2024-04-15 18:18:48.308949] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.548 [2024-04-15 18:18:48.309121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.548 [2024-04-15 18:18:48.309150] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.548 [2024-04-15 18:18:48.309166] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.548 [2024-04-15 18:18:48.309180] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:59.548 [2024-04-15 18:18:48.309213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.548 qpair failed and we were unable to recover it. 00:31:59.548 [2024-04-15 18:18:48.318968] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.548 [2024-04-15 18:18:48.319115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.548 [2024-04-15 18:18:48.319142] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.548 [2024-04-15 18:18:48.319159] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.548 [2024-04-15 18:18:48.319172] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:59.548 [2024-04-15 18:18:48.319205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.548 qpair failed and we were unable to recover it. 00:31:59.548 [2024-04-15 18:18:48.328980] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.548 [2024-04-15 18:18:48.329153] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.548 [2024-04-15 18:18:48.329181] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.548 [2024-04-15 18:18:48.329197] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.548 [2024-04-15 18:18:48.329211] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:59.548 [2024-04-15 18:18:48.329244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.548 qpair failed and we were unable to recover it. 
00:31:59.548 [2024-04-15 18:18:48.338995] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.548 [2024-04-15 18:18:48.339135] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.548 [2024-04-15 18:18:48.339162] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.548 [2024-04-15 18:18:48.339178] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.548 [2024-04-15 18:18:48.339193] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:59.548 [2024-04-15 18:18:48.339226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.548 qpair failed and we were unable to recover it. 00:31:59.549 [2024-04-15 18:18:48.349066] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.549 [2024-04-15 18:18:48.349230] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.549 [2024-04-15 18:18:48.349258] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.549 [2024-04-15 18:18:48.349274] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.549 [2024-04-15 18:18:48.349288] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:59.549 [2024-04-15 18:18:48.349320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.549 qpair failed and we were unable to recover it. 00:31:59.549 [2024-04-15 18:18:48.359109] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.549 [2024-04-15 18:18:48.359302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.549 [2024-04-15 18:18:48.359330] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.549 [2024-04-15 18:18:48.359346] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.549 [2024-04-15 18:18:48.359361] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:59.549 [2024-04-15 18:18:48.359394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.549 qpair failed and we were unable to recover it. 
00:31:59.549 [2024-04-15 18:18:48.369182] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.549 [2024-04-15 18:18:48.369353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.549 [2024-04-15 18:18:48.369381] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.549 [2024-04-15 18:18:48.369397] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.549 [2024-04-15 18:18:48.369411] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:59.549 [2024-04-15 18:18:48.369443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.549 qpair failed and we were unable to recover it. 00:31:59.549 [2024-04-15 18:18:48.379151] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.549 [2024-04-15 18:18:48.379299] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.549 [2024-04-15 18:18:48.379333] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.549 [2024-04-15 18:18:48.379350] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.549 [2024-04-15 18:18:48.379364] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:59.549 [2024-04-15 18:18:48.379397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.549 qpair failed and we were unable to recover it. 00:31:59.549 [2024-04-15 18:18:48.389232] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.549 [2024-04-15 18:18:48.389386] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.549 [2024-04-15 18:18:48.389414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.549 [2024-04-15 18:18:48.389430] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.549 [2024-04-15 18:18:48.389444] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:59.549 [2024-04-15 18:18:48.389476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.549 qpair failed and we were unable to recover it. 
00:31:59.549 [2024-04-15 18:18:48.399220] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.549 [2024-04-15 18:18:48.399373] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.549 [2024-04-15 18:18:48.399401] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.549 [2024-04-15 18:18:48.399417] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.549 [2024-04-15 18:18:48.399431] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:59.549 [2024-04-15 18:18:48.399464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.549 qpair failed and we were unable to recover it. 00:31:59.549 [2024-04-15 18:18:48.409203] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.549 [2024-04-15 18:18:48.409340] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.549 [2024-04-15 18:18:48.409369] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.549 [2024-04-15 18:18:48.409385] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.549 [2024-04-15 18:18:48.409399] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:59.549 [2024-04-15 18:18:48.409431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.549 qpair failed and we were unable to recover it. 00:31:59.549 [2024-04-15 18:18:48.419333] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.549 [2024-04-15 18:18:48.419471] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.549 [2024-04-15 18:18:48.419499] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.549 [2024-04-15 18:18:48.419522] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.549 [2024-04-15 18:18:48.419536] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:59.549 [2024-04-15 18:18:48.419578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.549 qpair failed and we were unable to recover it. 
00:31:59.549 [2024-04-15 18:18:48.429340] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.549 [2024-04-15 18:18:48.429489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.549 [2024-04-15 18:18:48.429517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.549 [2024-04-15 18:18:48.429533] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.549 [2024-04-15 18:18:48.429546] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:59.549 [2024-04-15 18:18:48.429579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.549 qpair failed and we were unable to recover it. 00:31:59.549 [2024-04-15 18:18:48.439318] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.549 [2024-04-15 18:18:48.439459] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.549 [2024-04-15 18:18:48.439487] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.549 [2024-04-15 18:18:48.439503] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.549 [2024-04-15 18:18:48.439517] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:59.549 [2024-04-15 18:18:48.439550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.549 qpair failed and we were unable to recover it. 00:31:59.549 [2024-04-15 18:18:48.449309] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.549 [2024-04-15 18:18:48.449457] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.549 [2024-04-15 18:18:48.449484] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.549 [2024-04-15 18:18:48.449500] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.549 [2024-04-15 18:18:48.449514] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:59.549 [2024-04-15 18:18:48.449547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.549 qpair failed and we were unable to recover it. 
00:31:59.549 [2024-04-15 18:18:48.459342] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.549 [2024-04-15 18:18:48.459476] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.549 [2024-04-15 18:18:48.459505] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.549 [2024-04-15 18:18:48.459521] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.549 [2024-04-15 18:18:48.459535] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:59.549 [2024-04-15 18:18:48.459567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.549 qpair failed and we were unable to recover it. 00:31:59.549 [2024-04-15 18:18:48.469374] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.549 [2024-04-15 18:18:48.469508] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.549 [2024-04-15 18:18:48.469542] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.549 [2024-04-15 18:18:48.469560] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.549 [2024-04-15 18:18:48.469574] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:59.549 [2024-04-15 18:18:48.469606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.549 qpair failed and we were unable to recover it. 00:31:59.549 [2024-04-15 18:18:48.479414] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.549 [2024-04-15 18:18:48.479569] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.549 [2024-04-15 18:18:48.479597] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.550 [2024-04-15 18:18:48.479613] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.550 [2024-04-15 18:18:48.479627] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:59.550 [2024-04-15 18:18:48.479659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.550 qpair failed and we were unable to recover it. 
00:31:59.550 [2024-04-15 18:18:48.489438] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.550 [2024-04-15 18:18:48.489577] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.550 [2024-04-15 18:18:48.489605] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.550 [2024-04-15 18:18:48.489621] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.550 [2024-04-15 18:18:48.489636] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:59.550 [2024-04-15 18:18:48.489668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.550 qpair failed and we were unable to recover it. 00:31:59.550 [2024-04-15 18:18:48.499493] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.550 [2024-04-15 18:18:48.499641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.550 [2024-04-15 18:18:48.499672] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.550 [2024-04-15 18:18:48.499689] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.550 [2024-04-15 18:18:48.499704] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:59.550 [2024-04-15 18:18:48.499738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.550 qpair failed and we were unable to recover it. 00:31:59.809 [2024-04-15 18:18:48.509487] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.809 [2024-04-15 18:18:48.509621] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.809 [2024-04-15 18:18:48.509651] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.809 [2024-04-15 18:18:48.509667] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.809 [2024-04-15 18:18:48.509688] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:59.809 [2024-04-15 18:18:48.509721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.809 qpair failed and we were unable to recover it. 
00:31:59.809 [2024-04-15 18:18:48.519520] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.809 [2024-04-15 18:18:48.519660] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.809 [2024-04-15 18:18:48.519688] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.809 [2024-04-15 18:18:48.519704] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.809 [2024-04-15 18:18:48.519719] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:59.809 [2024-04-15 18:18:48.519751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.809 qpair failed and we were unable to recover it. 00:31:59.809 [2024-04-15 18:18:48.529547] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.809 [2024-04-15 18:18:48.529683] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.809 [2024-04-15 18:18:48.529712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.809 [2024-04-15 18:18:48.529727] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.809 [2024-04-15 18:18:48.529741] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:59.809 [2024-04-15 18:18:48.529779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.809 qpair failed and we were unable to recover it. 00:31:59.809 [2024-04-15 18:18:48.539593] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.809 [2024-04-15 18:18:48.539723] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.809 [2024-04-15 18:18:48.539751] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.809 [2024-04-15 18:18:48.539767] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.809 [2024-04-15 18:18:48.539781] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:59.809 [2024-04-15 18:18:48.539813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.809 qpair failed and we were unable to recover it. 
00:31:59.809 [2024-04-15 18:18:48.549592] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.810 [2024-04-15 18:18:48.549724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.810 [2024-04-15 18:18:48.549753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.810 [2024-04-15 18:18:48.549769] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.810 [2024-04-15 18:18:48.549783] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:59.810 [2024-04-15 18:18:48.549815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.810 qpair failed and we were unable to recover it. 00:31:59.810 [2024-04-15 18:18:48.559696] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.810 [2024-04-15 18:18:48.559893] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.810 [2024-04-15 18:18:48.559920] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.810 [2024-04-15 18:18:48.559936] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.810 [2024-04-15 18:18:48.559951] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:59.810 [2024-04-15 18:18:48.559983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.810 qpair failed and we were unable to recover it. 00:31:59.810 [2024-04-15 18:18:48.569640] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.810 [2024-04-15 18:18:48.569774] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.810 [2024-04-15 18:18:48.569801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.810 [2024-04-15 18:18:48.569817] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.810 [2024-04-15 18:18:48.569831] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:59.810 [2024-04-15 18:18:48.569863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.810 qpair failed and we were unable to recover it. 
00:31:59.810 [2024-04-15 18:18:48.579759] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.810 [2024-04-15 18:18:48.579896] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.810 [2024-04-15 18:18:48.579925] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.810 [2024-04-15 18:18:48.579941] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.810 [2024-04-15 18:18:48.579954] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:59.810 [2024-04-15 18:18:48.579987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.810 qpair failed and we were unable to recover it. 00:31:59.810 [2024-04-15 18:18:48.589744] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.810 [2024-04-15 18:18:48.589897] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.810 [2024-04-15 18:18:48.589925] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.810 [2024-04-15 18:18:48.589944] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.810 [2024-04-15 18:18:48.589959] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:59.810 [2024-04-15 18:18:48.589992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.810 qpair failed and we were unable to recover it. 00:31:59.810 [2024-04-15 18:18:48.599745] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.810 [2024-04-15 18:18:48.599906] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.810 [2024-04-15 18:18:48.599934] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.810 [2024-04-15 18:18:48.599951] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.810 [2024-04-15 18:18:48.599971] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:59.810 [2024-04-15 18:18:48.600005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.810 qpair failed and we were unable to recover it. 
00:31:59.810 [2024-04-15 18:18:48.609767] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.810 [2024-04-15 18:18:48.609909] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.810 [2024-04-15 18:18:48.609936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.810 [2024-04-15 18:18:48.609953] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.810 [2024-04-15 18:18:48.609967] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:59.810 [2024-04-15 18:18:48.610000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.810 qpair failed and we were unable to recover it. 00:31:59.810 [2024-04-15 18:18:48.619865] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.810 [2024-04-15 18:18:48.620041] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.810 [2024-04-15 18:18:48.620081] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.810 [2024-04-15 18:18:48.620099] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.810 [2024-04-15 18:18:48.620113] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:59.810 [2024-04-15 18:18:48.620146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.810 qpair failed and we were unable to recover it. 00:31:59.810 [2024-04-15 18:18:48.629835] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.810 [2024-04-15 18:18:48.629967] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.810 [2024-04-15 18:18:48.629996] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.810 [2024-04-15 18:18:48.630013] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.810 [2024-04-15 18:18:48.630026] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:59.810 [2024-04-15 18:18:48.630071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.810 qpair failed and we were unable to recover it. 
00:31:59.810 [2024-04-15 18:18:48.639870] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.810 [2024-04-15 18:18:48.640018] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.810 [2024-04-15 18:18:48.640046] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.810 [2024-04-15 18:18:48.640071] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.810 [2024-04-15 18:18:48.640086] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:59.810 [2024-04-15 18:18:48.640120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.810 qpair failed and we were unable to recover it. 00:31:59.810 [2024-04-15 18:18:48.649890] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.810 [2024-04-15 18:18:48.650071] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.810 [2024-04-15 18:18:48.650099] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.810 [2024-04-15 18:18:48.650115] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.810 [2024-04-15 18:18:48.650129] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:59.810 [2024-04-15 18:18:48.650162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.810 qpair failed and we were unable to recover it. 00:31:59.810 [2024-04-15 18:18:48.660018] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.810 [2024-04-15 18:18:48.660199] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.810 [2024-04-15 18:18:48.660228] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.810 [2024-04-15 18:18:48.660244] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.810 [2024-04-15 18:18:48.660258] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:59.810 [2024-04-15 18:18:48.660291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.810 qpair failed and we were unable to recover it. 
00:31:59.810 [2024-04-15 18:18:48.670066] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.810 [2024-04-15 18:18:48.670207] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.810 [2024-04-15 18:18:48.670235] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.810 [2024-04-15 18:18:48.670253] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.810 [2024-04-15 18:18:48.670267] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:59.810 [2024-04-15 18:18:48.670299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.810 qpair failed and we were unable to recover it. 00:31:59.810 [2024-04-15 18:18:48.679982] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.811 [2024-04-15 18:18:48.680130] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.811 [2024-04-15 18:18:48.680159] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.811 [2024-04-15 18:18:48.680175] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.811 [2024-04-15 18:18:48.680189] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:59.811 [2024-04-15 18:18:48.680222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.811 qpair failed and we were unable to recover it. 00:31:59.811 [2024-04-15 18:18:48.689974] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.811 [2024-04-15 18:18:48.690118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.811 [2024-04-15 18:18:48.690146] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.811 [2024-04-15 18:18:48.690169] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.811 [2024-04-15 18:18:48.690184] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:59.811 [2024-04-15 18:18:48.690218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.811 qpair failed and we were unable to recover it. 
00:31:59.811 [2024-04-15 18:18:48.700020] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.811 [2024-04-15 18:18:48.700166] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.811 [2024-04-15 18:18:48.700194] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.811 [2024-04-15 18:18:48.700211] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.811 [2024-04-15 18:18:48.700225] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:59.811 [2024-04-15 18:18:48.700258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.811 qpair failed and we were unable to recover it. 00:31:59.811 [2024-04-15 18:18:48.710029] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.811 [2024-04-15 18:18:48.710171] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.811 [2024-04-15 18:18:48.710199] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.811 [2024-04-15 18:18:48.710216] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.811 [2024-04-15 18:18:48.710231] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:59.811 [2024-04-15 18:18:48.710264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.811 qpair failed and we were unable to recover it. 00:31:59.811 [2024-04-15 18:18:48.720080] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.811 [2024-04-15 18:18:48.720224] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.811 [2024-04-15 18:18:48.720252] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.811 [2024-04-15 18:18:48.720269] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.811 [2024-04-15 18:18:48.720283] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:59.811 [2024-04-15 18:18:48.720316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.811 qpair failed and we were unable to recover it. 
00:31:59.811 [2024-04-15 18:18:48.730195] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.811 [2024-04-15 18:18:48.730388] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.811 [2024-04-15 18:18:48.730415] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.811 [2024-04-15 18:18:48.730431] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.811 [2024-04-15 18:18:48.730445] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:59.811 [2024-04-15 18:18:48.730479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.811 qpair failed and we were unable to recover it. 00:31:59.811 [2024-04-15 18:18:48.740126] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.811 [2024-04-15 18:18:48.740273] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.811 [2024-04-15 18:18:48.740301] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.811 [2024-04-15 18:18:48.740317] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.811 [2024-04-15 18:18:48.740331] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:59.811 [2024-04-15 18:18:48.740364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.811 qpair failed and we were unable to recover it. 00:31:59.811 [2024-04-15 18:18:48.750153] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.811 [2024-04-15 18:18:48.750296] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.811 [2024-04-15 18:18:48.750324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.811 [2024-04-15 18:18:48.750341] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.811 [2024-04-15 18:18:48.750355] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:59.811 [2024-04-15 18:18:48.750387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.811 qpair failed and we were unable to recover it. 
00:31:59.811 [2024-04-15 18:18:48.760274] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.811 [2024-04-15 18:18:48.760409] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.811 [2024-04-15 18:18:48.760441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.811 [2024-04-15 18:18:48.760472] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.811 [2024-04-15 18:18:48.760489] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:31:59.811 [2024-04-15 18:18:48.760523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.811 qpair failed and we were unable to recover it. 00:32:00.071 [2024-04-15 18:18:48.770297] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:00.071 [2024-04-15 18:18:48.770505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:00.071 [2024-04-15 18:18:48.770535] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:00.071 [2024-04-15 18:18:48.770551] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:00.071 [2024-04-15 18:18:48.770566] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:00.071 [2024-04-15 18:18:48.770599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:00.071 qpair failed and we were unable to recover it. 00:32:00.071 [2024-04-15 18:18:48.780234] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:00.071 [2024-04-15 18:18:48.780375] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:00.071 [2024-04-15 18:18:48.780410] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:00.071 [2024-04-15 18:18:48.780428] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:00.071 [2024-04-15 18:18:48.780442] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:00.071 [2024-04-15 18:18:48.780475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:00.071 qpair failed and we were unable to recover it. 
00:32:00.071 [2024-04-15 18:18:48.790279] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:00.071 [2024-04-15 18:18:48.790422] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:00.071 [2024-04-15 18:18:48.790450] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:00.071 [2024-04-15 18:18:48.790467] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:00.071 [2024-04-15 18:18:48.790481] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:00.071 [2024-04-15 18:18:48.790515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:00.071 qpair failed and we were unable to recover it. 00:32:00.071 [2024-04-15 18:18:48.800318] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:00.071 [2024-04-15 18:18:48.800463] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:00.071 [2024-04-15 18:18:48.800492] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:00.071 [2024-04-15 18:18:48.800508] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:00.071 [2024-04-15 18:18:48.800522] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:00.071 [2024-04-15 18:18:48.800554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:00.071 qpair failed and we were unable to recover it. 00:32:00.071 [2024-04-15 18:18:48.810340] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:00.071 [2024-04-15 18:18:48.810480] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:00.071 [2024-04-15 18:18:48.810509] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:00.071 [2024-04-15 18:18:48.810525] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:00.071 [2024-04-15 18:18:48.810539] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:00.071 [2024-04-15 18:18:48.810572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:00.071 qpair failed and we were unable to recover it. 
00:32:00.071 [2024-04-15 18:18:48.820333] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:00.071 [2024-04-15 18:18:48.820509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:00.071 [2024-04-15 18:18:48.820537] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:00.071 [2024-04-15 18:18:48.820553] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:00.071 [2024-04-15 18:18:48.820567] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:00.071 [2024-04-15 18:18:48.820606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:00.071 qpair failed and we were unable to recover it. 00:32:00.071 [2024-04-15 18:18:48.830424] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:00.071 [2024-04-15 18:18:48.830594] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:00.071 [2024-04-15 18:18:48.830621] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:00.071 [2024-04-15 18:18:48.830638] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:00.071 [2024-04-15 18:18:48.830652] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:00.071 [2024-04-15 18:18:48.830685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:00.071 qpair failed and we were unable to recover it. 00:32:00.071 [2024-04-15 18:18:48.840433] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:00.071 [2024-04-15 18:18:48.840582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:00.071 [2024-04-15 18:18:48.840609] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:00.071 [2024-04-15 18:18:48.840625] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:00.071 [2024-04-15 18:18:48.840639] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:00.072 [2024-04-15 18:18:48.840672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:00.072 qpair failed and we were unable to recover it. 
00:32:00.072 [2024-04-15 18:18:48.850502] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:00.072 [2024-04-15 18:18:48.850643] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:00.072 [2024-04-15 18:18:48.850671] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:00.072 [2024-04-15 18:18:48.850687] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:00.072 [2024-04-15 18:18:48.850701] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:00.072 [2024-04-15 18:18:48.850734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:00.072 qpair failed and we were unable to recover it. 00:32:00.072 [2024-04-15 18:18:48.860457] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:00.072 [2024-04-15 18:18:48.860598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:00.072 [2024-04-15 18:18:48.860626] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:00.072 [2024-04-15 18:18:48.860642] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:00.072 [2024-04-15 18:18:48.860656] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:00.072 [2024-04-15 18:18:48.860688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:00.072 qpair failed and we were unable to recover it. 00:32:00.072 [2024-04-15 18:18:48.870500] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:00.072 [2024-04-15 18:18:48.870680] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:00.072 [2024-04-15 18:18:48.870714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:00.072 [2024-04-15 18:18:48.870731] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:00.072 [2024-04-15 18:18:48.870745] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:00.072 [2024-04-15 18:18:48.870777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:00.072 qpair failed and we were unable to recover it. 
00:32:00.072 [2024-04-15 18:18:48.880547] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:00.072 [2024-04-15 18:18:48.880699] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:00.072 [2024-04-15 18:18:48.880727] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:00.072 [2024-04-15 18:18:48.880743] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:00.072 [2024-04-15 18:18:48.880757] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:00.072 [2024-04-15 18:18:48.880790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:00.072 qpair failed and we were unable to recover it. 00:32:00.072 [2024-04-15 18:18:48.890625] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:00.072 [2024-04-15 18:18:48.890772] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:00.072 [2024-04-15 18:18:48.890801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:00.072 [2024-04-15 18:18:48.890817] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:00.072 [2024-04-15 18:18:48.890831] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:00.072 [2024-04-15 18:18:48.890864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:00.072 qpair failed and we were unable to recover it. 00:32:00.072 [2024-04-15 18:18:48.900613] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:00.072 [2024-04-15 18:18:48.900749] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:00.072 [2024-04-15 18:18:48.900788] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:00.072 [2024-04-15 18:18:48.900804] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:00.072 [2024-04-15 18:18:48.900818] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:00.072 [2024-04-15 18:18:48.900851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:00.072 qpair failed and we were unable to recover it. 
00:32:00.072 [2024-04-15 18:18:48.910612] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:00.072 [2024-04-15 18:18:48.910740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:00.072 [2024-04-15 18:18:48.910768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:00.072 [2024-04-15 18:18:48.910785] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:00.072 [2024-04-15 18:18:48.910799] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:00.072 [2024-04-15 18:18:48.910838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:00.072 qpair failed and we were unable to recover it. 00:32:00.072 [2024-04-15 18:18:48.920656] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:00.072 [2024-04-15 18:18:48.920837] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:00.072 [2024-04-15 18:18:48.920865] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:00.072 [2024-04-15 18:18:48.920881] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:00.072 [2024-04-15 18:18:48.920895] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:00.072 [2024-04-15 18:18:48.920928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:00.072 qpair failed and we were unable to recover it. 00:32:00.072 [2024-04-15 18:18:48.930677] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:00.072 [2024-04-15 18:18:48.930814] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:00.072 [2024-04-15 18:18:48.930842] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:00.072 [2024-04-15 18:18:48.930859] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:00.072 [2024-04-15 18:18:48.930873] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:00.072 [2024-04-15 18:18:48.930905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:00.072 qpair failed and we were unable to recover it. 
00:32:00.072 [2024-04-15 18:18:48.940715] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:00.072 [2024-04-15 18:18:48.940850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:00.072 [2024-04-15 18:18:48.940879] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:00.072 [2024-04-15 18:18:48.940895] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:00.072 [2024-04-15 18:18:48.940909] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:00.072 [2024-04-15 18:18:48.940941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:00.072 qpair failed and we were unable to recover it. 00:32:00.072 [2024-04-15 18:18:48.950738] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:00.072 [2024-04-15 18:18:48.950890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:00.072 [2024-04-15 18:18:48.950918] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:00.072 [2024-04-15 18:18:48.950934] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:00.072 [2024-04-15 18:18:48.950948] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:00.072 [2024-04-15 18:18:48.950982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:00.072 qpair failed and we were unable to recover it. 00:32:00.072 [2024-04-15 18:18:48.960775] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:00.072 [2024-04-15 18:18:48.960931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:00.072 [2024-04-15 18:18:48.960959] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:00.072 [2024-04-15 18:18:48.960976] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:00.072 [2024-04-15 18:18:48.960990] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:00.072 [2024-04-15 18:18:48.961022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:00.072 qpair failed and we were unable to recover it. 
00:32:00.072 [2024-04-15 18:18:48.970768] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:00.072 [2024-04-15 18:18:48.970914] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:00.072 [2024-04-15 18:18:48.970941] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:00.072 [2024-04-15 18:18:48.970958] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:00.072 [2024-04-15 18:18:48.970972] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:00.072 [2024-04-15 18:18:48.971005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:00.072 qpair failed and we were unable to recover it. 00:32:00.072 [2024-04-15 18:18:48.980837] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:00.072 [2024-04-15 18:18:48.981009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:00.073 [2024-04-15 18:18:48.981037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:00.073 [2024-04-15 18:18:48.981053] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:00.073 [2024-04-15 18:18:48.981075] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:00.073 [2024-04-15 18:18:48.981110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:00.073 qpair failed and we were unable to recover it. 00:32:00.073 [2024-04-15 18:18:48.990870] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:00.073 [2024-04-15 18:18:48.991003] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:00.073 [2024-04-15 18:18:48.991031] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:00.073 [2024-04-15 18:18:48.991047] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:00.073 [2024-04-15 18:18:48.991071] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:00.073 [2024-04-15 18:18:48.991106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:00.073 qpair failed and we were unable to recover it. 
00:32:00.855 [2024-04-15 18:18:49.662877] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:00.855 [2024-04-15 18:18:49.663009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:00.855 [2024-04-15 18:18:49.663037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:00.855 [2024-04-15 18:18:49.663053] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:00.855 [2024-04-15 18:18:49.663075] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:00.855 [2024-04-15 18:18:49.663110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:00.855 qpair failed and we were unable to recover it. 00:32:00.855 [2024-04-15 18:18:49.672801] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:00.855 [2024-04-15 18:18:49.672936] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:00.855 [2024-04-15 18:18:49.672969] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:00.855 [2024-04-15 18:18:49.672986] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:00.855 [2024-04-15 18:18:49.673000] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:00.855 [2024-04-15 18:18:49.673032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:00.855 qpair failed and we were unable to recover it. 00:32:00.855 [2024-04-15 18:18:49.682848] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:00.855 [2024-04-15 18:18:49.682991] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:00.855 [2024-04-15 18:18:49.683019] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:00.855 [2024-04-15 18:18:49.683035] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:00.855 [2024-04-15 18:18:49.683050] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:00.855 [2024-04-15 18:18:49.683091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:00.855 qpair failed and we were unable to recover it. 
00:32:00.855 [2024-04-15 18:18:49.692877] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:00.855 [2024-04-15 18:18:49.693016] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:00.855 [2024-04-15 18:18:49.693044] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:00.855 [2024-04-15 18:18:49.693066] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:00.855 [2024-04-15 18:18:49.693082] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:00.855 [2024-04-15 18:18:49.693118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:00.855 qpair failed and we were unable to recover it. 00:32:00.856 [2024-04-15 18:18:49.702917] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:00.856 [2024-04-15 18:18:49.703053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:00.856 [2024-04-15 18:18:49.703088] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:00.856 [2024-04-15 18:18:49.703105] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:00.856 [2024-04-15 18:18:49.703119] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:00.856 [2024-04-15 18:18:49.703153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:00.856 qpair failed and we were unable to recover it. 00:32:00.856 [2024-04-15 18:18:49.712944] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:00.856 [2024-04-15 18:18:49.713084] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:00.856 [2024-04-15 18:18:49.713112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:00.856 [2024-04-15 18:18:49.713129] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:00.856 [2024-04-15 18:18:49.713143] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:00.856 [2024-04-15 18:18:49.713182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:00.856 qpair failed and we were unable to recover it. 
00:32:00.856 [2024-04-15 18:18:49.722971] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:00.856 [2024-04-15 18:18:49.723114] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:00.856 [2024-04-15 18:18:49.723143] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:00.856 [2024-04-15 18:18:49.723158] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:00.856 [2024-04-15 18:18:49.723173] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:00.856 [2024-04-15 18:18:49.723205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:00.856 qpair failed and we were unable to recover it. 00:32:00.856 [2024-04-15 18:18:49.732979] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:00.856 [2024-04-15 18:18:49.733117] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:00.856 [2024-04-15 18:18:49.733145] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:00.856 [2024-04-15 18:18:49.733161] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:00.856 [2024-04-15 18:18:49.733175] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:00.856 [2024-04-15 18:18:49.733208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:00.856 qpair failed and we were unable to recover it. 00:32:00.856 [2024-04-15 18:18:49.743034] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:00.856 [2024-04-15 18:18:49.743182] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:00.856 [2024-04-15 18:18:49.743210] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:00.856 [2024-04-15 18:18:49.743226] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:00.856 [2024-04-15 18:18:49.743240] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:00.856 [2024-04-15 18:18:49.743273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:00.856 qpair failed and we were unable to recover it. 
00:32:00.856 [2024-04-15 18:18:49.753083] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:00.856 [2024-04-15 18:18:49.753216] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:00.856 [2024-04-15 18:18:49.753244] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:00.856 [2024-04-15 18:18:49.753260] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:00.856 [2024-04-15 18:18:49.753275] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:00.856 [2024-04-15 18:18:49.753308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:00.856 qpair failed and we were unable to recover it. 00:32:00.856 [2024-04-15 18:18:49.763215] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:00.856 [2024-04-15 18:18:49.763378] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:00.856 [2024-04-15 18:18:49.763412] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:00.856 [2024-04-15 18:18:49.763429] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:00.856 [2024-04-15 18:18:49.763443] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:00.856 [2024-04-15 18:18:49.763476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:00.856 qpair failed and we were unable to recover it. 00:32:00.856 [2024-04-15 18:18:49.773181] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:00.856 [2024-04-15 18:18:49.773326] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:00.856 [2024-04-15 18:18:49.773355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:00.856 [2024-04-15 18:18:49.773371] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:00.856 [2024-04-15 18:18:49.773385] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:00.856 [2024-04-15 18:18:49.773418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:00.856 qpair failed and we were unable to recover it. 
00:32:00.856 [2024-04-15 18:18:49.783220] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:00.856 [2024-04-15 18:18:49.783377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:00.856 [2024-04-15 18:18:49.783405] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:00.856 [2024-04-15 18:18:49.783421] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:00.856 [2024-04-15 18:18:49.783435] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:00.856 [2024-04-15 18:18:49.783468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:00.856 qpair failed and we were unable to recover it. 00:32:00.856 [2024-04-15 18:18:49.793229] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:00.856 [2024-04-15 18:18:49.793393] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:00.856 [2024-04-15 18:18:49.793421] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:00.856 [2024-04-15 18:18:49.793437] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:00.856 [2024-04-15 18:18:49.793451] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:00.856 [2024-04-15 18:18:49.793484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:00.856 qpair failed and we were unable to recover it. 00:32:00.856 [2024-04-15 18:18:49.803324] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:00.856 [2024-04-15 18:18:49.803470] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:00.856 [2024-04-15 18:18:49.803499] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:00.856 [2024-04-15 18:18:49.803515] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:00.856 [2024-04-15 18:18:49.803535] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:00.856 [2024-04-15 18:18:49.803570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:00.856 qpair failed and we were unable to recover it. 
00:32:01.116 [2024-04-15 18:18:49.813215] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:01.116 [2024-04-15 18:18:49.813388] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:01.116 [2024-04-15 18:18:49.813417] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:01.116 [2024-04-15 18:18:49.813433] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:01.116 [2024-04-15 18:18:49.813447] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:01.116 [2024-04-15 18:18:49.813481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:01.116 qpair failed and we were unable to recover it. 00:32:01.116 [2024-04-15 18:18:49.823244] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:01.116 [2024-04-15 18:18:49.823392] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:01.116 [2024-04-15 18:18:49.823421] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:01.116 [2024-04-15 18:18:49.823437] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:01.116 [2024-04-15 18:18:49.823451] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:01.116 [2024-04-15 18:18:49.823483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:01.116 qpair failed and we were unable to recover it. 00:32:01.116 [2024-04-15 18:18:49.833262] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:01.116 [2024-04-15 18:18:49.833420] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:01.116 [2024-04-15 18:18:49.833448] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:01.116 [2024-04-15 18:18:49.833464] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:01.116 [2024-04-15 18:18:49.833478] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:01.116 [2024-04-15 18:18:49.833511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:01.116 qpair failed and we were unable to recover it. 
00:32:01.116 [2024-04-15 18:18:49.843340] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:01.116 [2024-04-15 18:18:49.843484] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:01.116 [2024-04-15 18:18:49.843512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:01.116 [2024-04-15 18:18:49.843528] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:01.116 [2024-04-15 18:18:49.843542] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:01.116 [2024-04-15 18:18:49.843575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:01.116 qpair failed and we were unable to recover it. 00:32:01.116 [2024-04-15 18:18:49.853322] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:01.116 [2024-04-15 18:18:49.853464] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:01.116 [2024-04-15 18:18:49.853493] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:01.116 [2024-04-15 18:18:49.853509] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:01.116 [2024-04-15 18:18:49.853524] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:01.116 [2024-04-15 18:18:49.853556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:01.116 qpair failed and we were unable to recover it. 00:32:01.116 [2024-04-15 18:18:49.863369] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:01.116 [2024-04-15 18:18:49.863506] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:01.116 [2024-04-15 18:18:49.863535] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:01.116 [2024-04-15 18:18:49.863551] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:01.116 [2024-04-15 18:18:49.863565] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:01.116 [2024-04-15 18:18:49.863598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:01.116 qpair failed and we were unable to recover it. 
00:32:01.116 [2024-04-15 18:18:49.873390] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:01.116 [2024-04-15 18:18:49.873539] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:01.116 [2024-04-15 18:18:49.873567] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:01.116 [2024-04-15 18:18:49.873583] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:01.116 [2024-04-15 18:18:49.873597] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:01.116 [2024-04-15 18:18:49.873630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:01.116 qpair failed and we were unable to recover it. 00:32:01.116 [2024-04-15 18:18:49.883461] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:01.116 [2024-04-15 18:18:49.883615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:01.116 [2024-04-15 18:18:49.883642] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:01.116 [2024-04-15 18:18:49.883659] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:01.116 [2024-04-15 18:18:49.883673] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:01.116 [2024-04-15 18:18:49.883705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:01.116 qpair failed and we were unable to recover it. 00:32:01.116 [2024-04-15 18:18:49.893467] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:01.116 [2024-04-15 18:18:49.893606] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:01.116 [2024-04-15 18:18:49.893634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:01.116 [2024-04-15 18:18:49.893659] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:01.116 [2024-04-15 18:18:49.893674] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:01.116 [2024-04-15 18:18:49.893708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:01.116 qpair failed and we were unable to recover it. 
00:32:01.116 [2024-04-15 18:18:49.903503] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:01.116 [2024-04-15 18:18:49.903648] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:01.116 [2024-04-15 18:18:49.903676] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:01.116 [2024-04-15 18:18:49.903692] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:01.116 [2024-04-15 18:18:49.903707] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:01.116 [2024-04-15 18:18:49.903740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:01.116 qpair failed and we were unable to recover it. 00:32:01.116 [2024-04-15 18:18:49.913511] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:01.116 [2024-04-15 18:18:49.913646] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:01.116 [2024-04-15 18:18:49.913673] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:01.116 [2024-04-15 18:18:49.913689] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:01.116 [2024-04-15 18:18:49.913704] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:01.116 [2024-04-15 18:18:49.913736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:01.116 qpair failed and we were unable to recover it. 00:32:01.116 [2024-04-15 18:18:49.923560] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:01.116 [2024-04-15 18:18:49.923702] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:01.116 [2024-04-15 18:18:49.923730] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:01.117 [2024-04-15 18:18:49.923746] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:01.117 [2024-04-15 18:18:49.923760] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:01.117 [2024-04-15 18:18:49.923793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:01.117 qpair failed and we were unable to recover it. 
00:32:01.117 [2024-04-15 18:18:49.933566] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:01.117 [2024-04-15 18:18:49.933741] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:01.117 [2024-04-15 18:18:49.933769] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:01.117 [2024-04-15 18:18:49.933786] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:01.117 [2024-04-15 18:18:49.933800] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:01.117 [2024-04-15 18:18:49.933833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:01.117 qpair failed and we were unable to recover it. 00:32:01.117 [2024-04-15 18:18:49.943584] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:01.117 [2024-04-15 18:18:49.943728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:01.117 [2024-04-15 18:18:49.943756] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:01.117 [2024-04-15 18:18:49.943773] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:01.117 [2024-04-15 18:18:49.943788] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:01.117 [2024-04-15 18:18:49.943820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:01.117 qpair failed and we were unable to recover it. 00:32:01.117 [2024-04-15 18:18:49.953743] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:01.117 [2024-04-15 18:18:49.953923] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:01.117 [2024-04-15 18:18:49.953956] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:01.117 [2024-04-15 18:18:49.953972] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:01.117 [2024-04-15 18:18:49.953987] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:01.117 [2024-04-15 18:18:49.954019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:01.117 qpair failed and we were unable to recover it. 
00:32:01.117 [2024-04-15 18:18:49.963667] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:01.117 [2024-04-15 18:18:49.963814] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:01.117 [2024-04-15 18:18:49.963841] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:01.117 [2024-04-15 18:18:49.963857] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:01.117 [2024-04-15 18:18:49.963871] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:01.117 [2024-04-15 18:18:49.963904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:01.117 qpair failed and we were unable to recover it. 00:32:01.117 [2024-04-15 18:18:49.973721] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:01.117 [2024-04-15 18:18:49.973862] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:01.117 [2024-04-15 18:18:49.973890] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:01.117 [2024-04-15 18:18:49.973906] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:01.117 [2024-04-15 18:18:49.973920] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:01.117 [2024-04-15 18:18:49.973952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:01.117 qpair failed and we were unable to recover it. 00:32:01.117 [2024-04-15 18:18:49.983756] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:01.117 [2024-04-15 18:18:49.983891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:01.117 [2024-04-15 18:18:49.983918] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:01.117 [2024-04-15 18:18:49.983941] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:01.117 [2024-04-15 18:18:49.983956] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:01.117 [2024-04-15 18:18:49.983989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:01.117 qpair failed and we were unable to recover it. 
00:32:01.117 [2024-04-15 18:18:49.993743] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:01.117 [2024-04-15 18:18:49.993883] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:01.117 [2024-04-15 18:18:49.993911] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:01.117 [2024-04-15 18:18:49.993927] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:01.117 [2024-04-15 18:18:49.993941] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:01.117 [2024-04-15 18:18:49.993975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:01.117 qpair failed and we were unable to recover it. 00:32:01.117 [2024-04-15 18:18:50.003808] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:01.117 [2024-04-15 18:18:50.003992] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:01.117 [2024-04-15 18:18:50.004020] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:01.117 [2024-04-15 18:18:50.004036] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:01.117 [2024-04-15 18:18:50.004050] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:01.117 [2024-04-15 18:18:50.004091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:01.117 qpair failed and we were unable to recover it. 00:32:01.117 [2024-04-15 18:18:50.013900] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:01.117 [2024-04-15 18:18:50.014098] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:01.117 [2024-04-15 18:18:50.014130] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:01.117 [2024-04-15 18:18:50.014147] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:01.117 [2024-04-15 18:18:50.014162] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:01.117 [2024-04-15 18:18:50.014197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:01.117 qpair failed and we were unable to recover it. 
00:32:01.117 [2024-04-15 18:18:50.023850] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:01.117 [2024-04-15 18:18:50.023989] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:01.117 [2024-04-15 18:18:50.024019] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:01.117 [2024-04-15 18:18:50.024035] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:01.117 [2024-04-15 18:18:50.024049] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:01.117 [2024-04-15 18:18:50.024092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:01.117 qpair failed and we were unable to recover it. 00:32:01.117 [2024-04-15 18:18:50.033894] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:01.117 [2024-04-15 18:18:50.034030] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:01.117 [2024-04-15 18:18:50.034066] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:01.117 [2024-04-15 18:18:50.034085] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:01.117 [2024-04-15 18:18:50.034099] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:01.117 [2024-04-15 18:18:50.034133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:01.117 qpair failed and we were unable to recover it. 00:32:01.117 [2024-04-15 18:18:50.043948] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:01.117 [2024-04-15 18:18:50.044103] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:01.117 [2024-04-15 18:18:50.044136] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:01.117 [2024-04-15 18:18:50.044153] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:01.117 [2024-04-15 18:18:50.044168] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:01.117 [2024-04-15 18:18:50.044203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:01.117 qpair failed and we were unable to recover it. 
00:32:01.117 [2024-04-15 18:18:50.053960] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:01.117 [2024-04-15 18:18:50.054120] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:01.117 [2024-04-15 18:18:50.054149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:01.117 [2024-04-15 18:18:50.054166] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:01.117 [2024-04-15 18:18:50.054181] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:01.118 [2024-04-15 18:18:50.054214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:01.118 qpair failed and we were unable to recover it. 00:32:01.118 [2024-04-15 18:18:50.064028] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:01.118 [2024-04-15 18:18:50.064237] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:01.118 [2024-04-15 18:18:50.064267] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:01.118 [2024-04-15 18:18:50.064285] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:01.118 [2024-04-15 18:18:50.064308] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:01.118 [2024-04-15 18:18:50.064342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:01.118 qpair failed and we were unable to recover it. 00:32:01.377 [2024-04-15 18:18:50.074003] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:01.377 [2024-04-15 18:18:50.074141] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:01.377 [2024-04-15 18:18:50.074180] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:01.377 [2024-04-15 18:18:50.074198] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:01.377 [2024-04-15 18:18:50.074212] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:01.377 [2024-04-15 18:18:50.074247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:01.377 qpair failed and we were unable to recover it. 
00:32:01.377 [2024-04-15 18:18:50.084045] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:01.377 [2024-04-15 18:18:50.084206] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:01.377 [2024-04-15 18:18:50.084235] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:01.377 [2024-04-15 18:18:50.084252] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:01.377 [2024-04-15 18:18:50.084267] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:01.377 [2024-04-15 18:18:50.084300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:01.377 qpair failed and we were unable to recover it. 00:32:01.377 [2024-04-15 18:18:50.094053] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:01.377 [2024-04-15 18:18:50.094199] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:01.377 [2024-04-15 18:18:50.094228] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:01.377 [2024-04-15 18:18:50.094244] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:01.377 [2024-04-15 18:18:50.094258] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:01.377 [2024-04-15 18:18:50.094292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:01.377 qpair failed and we were unable to recover it. 00:32:01.377 [2024-04-15 18:18:50.104085] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:01.377 [2024-04-15 18:18:50.104220] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:01.377 [2024-04-15 18:18:50.104250] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:01.377 [2024-04-15 18:18:50.104266] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:01.377 [2024-04-15 18:18:50.104280] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:01.377 [2024-04-15 18:18:50.104314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:01.377 qpair failed and we were unable to recover it. 
00:32:01.377 [2024-04-15 18:18:50.114115] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:01.377 [2024-04-15 18:18:50.114250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:01.377 [2024-04-15 18:18:50.114279] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:01.377 [2024-04-15 18:18:50.114296] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:01.377 [2024-04-15 18:18:50.114310] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:01.377 [2024-04-15 18:18:50.114349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:01.377 qpair failed and we were unable to recover it. 00:32:01.378 [2024-04-15 18:18:50.124167] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:01.378 [2024-04-15 18:18:50.124329] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:01.378 [2024-04-15 18:18:50.124357] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:01.378 [2024-04-15 18:18:50.124374] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:01.378 [2024-04-15 18:18:50.124387] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:01.378 [2024-04-15 18:18:50.124422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:01.378 qpair failed and we were unable to recover it. 00:32:01.378 [2024-04-15 18:18:50.134166] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:01.378 [2024-04-15 18:18:50.134301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:01.378 [2024-04-15 18:18:50.134330] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:01.378 [2024-04-15 18:18:50.134346] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:01.378 [2024-04-15 18:18:50.134360] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:01.378 [2024-04-15 18:18:50.134393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:01.378 qpair failed and we were unable to recover it. 
[the same seven-line CONNECT failure block repeats 66 more times, one connect attempt roughly every 10 ms from 2024-04-15 18:18:50.144 through 18:18:50.796, always against tqpair=0x7f72ec000b90 with status sct 1, sc 130, and each attempt ends: qpair failed and we were unable to recover it.]
00:32:01.902 [2024-04-15 18:18:50.806208] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:01.902 [2024-04-15 18:18:50.806349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:01.902 [2024-04-15 18:18:50.806377] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:01.902 [2024-04-15 18:18:50.806393] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:01.902 [2024-04-15 18:18:50.806407] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:01.902 [2024-04-15 18:18:50.806440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:01.902 qpair failed and we were unable to recover it. 00:32:01.902 [2024-04-15 18:18:50.816158] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:01.902 [2024-04-15 18:18:50.816307] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:01.902 [2024-04-15 18:18:50.816337] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:01.902 [2024-04-15 18:18:50.816353] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:01.902 [2024-04-15 18:18:50.816368] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:01.902 [2024-04-15 18:18:50.816401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:01.902 qpair failed and we were unable to recover it. 00:32:01.902 [2024-04-15 18:18:50.826192] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:01.902 [2024-04-15 18:18:50.826336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:01.902 [2024-04-15 18:18:50.826364] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:01.902 [2024-04-15 18:18:50.826381] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:01.902 [2024-04-15 18:18:50.826395] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:01.902 [2024-04-15 18:18:50.826428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:01.902 qpair failed and we were unable to recover it. 
00:32:01.902 [2024-04-15 18:18:50.836190] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:01.902 [2024-04-15 18:18:50.836321] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:01.902 [2024-04-15 18:18:50.836350] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:01.902 [2024-04-15 18:18:50.836367] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:01.902 [2024-04-15 18:18:50.836381] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:01.902 [2024-04-15 18:18:50.836414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:01.902 qpair failed and we were unable to recover it. 00:32:01.902 [2024-04-15 18:18:50.846251] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:01.902 [2024-04-15 18:18:50.846389] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:01.902 [2024-04-15 18:18:50.846417] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:01.902 [2024-04-15 18:18:50.846434] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:01.902 [2024-04-15 18:18:50.846448] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:01.902 [2024-04-15 18:18:50.846482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:01.902 qpair failed and we were unable to recover it. 00:32:02.162 [2024-04-15 18:18:50.856255] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:02.162 [2024-04-15 18:18:50.856393] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:02.162 [2024-04-15 18:18:50.856424] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:02.162 [2024-04-15 18:18:50.856441] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:02.162 [2024-04-15 18:18:50.856455] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:02.162 [2024-04-15 18:18:50.856489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:02.162 qpair failed and we were unable to recover it. 
00:32:02.162 [2024-04-15 18:18:50.866266] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:02.162 [2024-04-15 18:18:50.866430] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:02.162 [2024-04-15 18:18:50.866459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:02.162 [2024-04-15 18:18:50.866476] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:02.162 [2024-04-15 18:18:50.866490] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:02.162 [2024-04-15 18:18:50.866524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:02.162 qpair failed and we were unable to recover it. 00:32:02.163 [2024-04-15 18:18:50.876387] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:02.163 [2024-04-15 18:18:50.876538] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:02.163 [2024-04-15 18:18:50.876572] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:02.163 [2024-04-15 18:18:50.876590] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:02.163 [2024-04-15 18:18:50.876605] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:02.163 [2024-04-15 18:18:50.876638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:02.163 qpair failed and we were unable to recover it. 00:32:02.163 [2024-04-15 18:18:50.886333] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:02.163 [2024-04-15 18:18:50.886476] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:02.163 [2024-04-15 18:18:50.886505] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:02.163 [2024-04-15 18:18:50.886522] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:02.163 [2024-04-15 18:18:50.886536] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:02.163 [2024-04-15 18:18:50.886569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:02.163 qpair failed and we were unable to recover it. 
00:32:02.163 [2024-04-15 18:18:50.896354] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:02.163 [2024-04-15 18:18:50.896487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:02.163 [2024-04-15 18:18:50.896516] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:02.163 [2024-04-15 18:18:50.896533] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:02.163 [2024-04-15 18:18:50.896546] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:02.163 [2024-04-15 18:18:50.896579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:02.163 qpair failed and we were unable to recover it. 00:32:02.163 [2024-04-15 18:18:50.906373] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:02.163 [2024-04-15 18:18:50.906514] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:02.163 [2024-04-15 18:18:50.906543] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:02.163 [2024-04-15 18:18:50.906560] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:02.163 [2024-04-15 18:18:50.906574] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:02.163 [2024-04-15 18:18:50.906608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:02.163 qpair failed and we were unable to recover it. 00:32:02.163 [2024-04-15 18:18:50.916394] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:02.163 [2024-04-15 18:18:50.916525] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:02.163 [2024-04-15 18:18:50.916554] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:02.163 [2024-04-15 18:18:50.916570] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:02.163 [2024-04-15 18:18:50.916585] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:02.163 [2024-04-15 18:18:50.916624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:02.163 qpair failed and we were unable to recover it. 
00:32:02.163 [2024-04-15 18:18:50.926462] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:02.163 [2024-04-15 18:18:50.926601] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:02.163 [2024-04-15 18:18:50.926629] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:02.163 [2024-04-15 18:18:50.926646] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:02.163 [2024-04-15 18:18:50.926659] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:02.163 [2024-04-15 18:18:50.926692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:02.163 qpair failed and we were unable to recover it. 00:32:02.163 [2024-04-15 18:18:50.936459] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:02.163 [2024-04-15 18:18:50.936661] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:02.163 [2024-04-15 18:18:50.936689] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:02.163 [2024-04-15 18:18:50.936705] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:02.163 [2024-04-15 18:18:50.936719] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:02.163 [2024-04-15 18:18:50.936752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:02.163 qpair failed and we were unable to recover it. 00:32:02.163 [2024-04-15 18:18:50.946512] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:02.163 [2024-04-15 18:18:50.946650] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:02.163 [2024-04-15 18:18:50.946679] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:02.163 [2024-04-15 18:18:50.946695] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:02.163 [2024-04-15 18:18:50.946709] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:02.163 [2024-04-15 18:18:50.946742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:02.163 qpair failed and we were unable to recover it. 
00:32:02.163 [2024-04-15 18:18:50.956514] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:02.163 [2024-04-15 18:18:50.956648] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:02.163 [2024-04-15 18:18:50.956677] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:02.163 [2024-04-15 18:18:50.956693] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:02.163 [2024-04-15 18:18:50.956707] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:02.163 [2024-04-15 18:18:50.956740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:02.163 qpair failed and we were unable to recover it. 00:32:02.163 [2024-04-15 18:18:50.966599] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:02.163 [2024-04-15 18:18:50.966745] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:02.163 [2024-04-15 18:18:50.966779] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:02.163 [2024-04-15 18:18:50.966797] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:02.163 [2024-04-15 18:18:50.966811] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:02.163 [2024-04-15 18:18:50.966844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:02.163 qpair failed and we were unable to recover it. 00:32:02.163 [2024-04-15 18:18:50.976583] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:02.163 [2024-04-15 18:18:50.976717] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:02.163 [2024-04-15 18:18:50.976746] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:02.163 [2024-04-15 18:18:50.976763] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:02.163 [2024-04-15 18:18:50.976777] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:02.163 [2024-04-15 18:18:50.976810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:02.163 qpair failed and we were unable to recover it. 
00:32:02.163 [2024-04-15 18:18:50.986580] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:02.163 [2024-04-15 18:18:50.986721] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:02.163 [2024-04-15 18:18:50.986750] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:02.163 [2024-04-15 18:18:50.986767] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:02.163 [2024-04-15 18:18:50.986781] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:02.163 [2024-04-15 18:18:50.986814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:02.163 qpair failed and we were unable to recover it. 00:32:02.163 [2024-04-15 18:18:50.996625] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:02.163 [2024-04-15 18:18:50.996809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:02.163 [2024-04-15 18:18:50.996838] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:02.163 [2024-04-15 18:18:50.996855] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:02.163 [2024-04-15 18:18:50.996869] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:02.163 [2024-04-15 18:18:50.996902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:02.163 qpair failed and we were unable to recover it. 00:32:02.163 [2024-04-15 18:18:51.006665] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:02.163 [2024-04-15 18:18:51.006806] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:02.163 [2024-04-15 18:18:51.006835] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:02.164 [2024-04-15 18:18:51.006852] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:02.164 [2024-04-15 18:18:51.006867] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:02.164 [2024-04-15 18:18:51.006906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:02.164 qpair failed and we were unable to recover it. 
00:32:02.164 [2024-04-15 18:18:51.016679] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:02.164 [2024-04-15 18:18:51.016859] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:02.164 [2024-04-15 18:18:51.016887] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:02.164 [2024-04-15 18:18:51.016904] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:02.164 [2024-04-15 18:18:51.016918] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:02.164 [2024-04-15 18:18:51.016951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:02.164 qpair failed and we were unable to recover it. 00:32:02.164 [2024-04-15 18:18:51.026717] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:02.164 [2024-04-15 18:18:51.026899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:02.164 [2024-04-15 18:18:51.026927] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:02.164 [2024-04-15 18:18:51.026944] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:02.164 [2024-04-15 18:18:51.026958] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:02.164 [2024-04-15 18:18:51.026991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:02.164 qpair failed and we were unable to recover it. 00:32:02.164 [2024-04-15 18:18:51.036732] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:02.164 [2024-04-15 18:18:51.036886] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:02.164 [2024-04-15 18:18:51.036916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:02.164 [2024-04-15 18:18:51.036933] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:02.164 [2024-04-15 18:18:51.036947] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:02.164 [2024-04-15 18:18:51.036980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:02.164 qpair failed and we were unable to recover it. 
00:32:02.164 [2024-04-15 18:18:51.046790] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:02.164 [2024-04-15 18:18:51.046930] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:02.164 [2024-04-15 18:18:51.046959] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:02.164 [2024-04-15 18:18:51.046975] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:02.164 [2024-04-15 18:18:51.046990] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:02.164 [2024-04-15 18:18:51.047023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:02.164 qpair failed and we were unable to recover it. 00:32:02.164 [2024-04-15 18:18:51.056795] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:02.164 [2024-04-15 18:18:51.056933] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:02.164 [2024-04-15 18:18:51.056966] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:02.164 [2024-04-15 18:18:51.056984] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:02.164 [2024-04-15 18:18:51.056998] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:02.164 [2024-04-15 18:18:51.057032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:02.164 qpair failed and we were unable to recover it. 00:32:02.164 [2024-04-15 18:18:51.066829] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:02.164 [2024-04-15 18:18:51.066971] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:02.164 [2024-04-15 18:18:51.067000] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:02.164 [2024-04-15 18:18:51.067016] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:02.164 [2024-04-15 18:18:51.067030] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:02.164 [2024-04-15 18:18:51.067070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:02.164 qpair failed and we were unable to recover it. 
00:32:02.164 [2024-04-15 18:18:51.076851] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:02.164 [2024-04-15 18:18:51.076984] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:02.164 [2024-04-15 18:18:51.077013] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:02.164 [2024-04-15 18:18:51.077029] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:02.164 [2024-04-15 18:18:51.077044] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:02.164 [2024-04-15 18:18:51.077090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:02.164 qpair failed and we were unable to recover it. 00:32:02.164 [2024-04-15 18:18:51.086929] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:02.164 [2024-04-15 18:18:51.087076] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:02.164 [2024-04-15 18:18:51.087106] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:02.164 [2024-04-15 18:18:51.087122] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:02.164 [2024-04-15 18:18:51.087136] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:02.164 [2024-04-15 18:18:51.087171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:02.164 qpair failed and we were unable to recover it. 00:32:02.164 [2024-04-15 18:18:51.096896] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:02.164 [2024-04-15 18:18:51.097091] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:02.164 [2024-04-15 18:18:51.097120] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:02.164 [2024-04-15 18:18:51.097137] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:02.164 [2024-04-15 18:18:51.097157] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:02.164 [2024-04-15 18:18:51.097193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:02.164 qpair failed and we were unable to recover it. 
00:32:02.164 [2024-04-15 18:18:51.106932] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:02.164 [2024-04-15 18:18:51.107081] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:02.164 [2024-04-15 18:18:51.107110] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:02.164 [2024-04-15 18:18:51.107127] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:02.164 [2024-04-15 18:18:51.107141] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:02.164 [2024-04-15 18:18:51.107175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:02.164 qpair failed and we were unable to recover it. 00:32:02.425 [2024-04-15 18:18:51.116959] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:02.425 [2024-04-15 18:18:51.117106] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:02.425 [2024-04-15 18:18:51.117137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:02.425 [2024-04-15 18:18:51.117154] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:02.425 [2024-04-15 18:18:51.117169] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:02.425 [2024-04-15 18:18:51.117204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:02.425 qpair failed and we were unable to recover it. 00:32:02.425 [2024-04-15 18:18:51.126998] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:02.425 [2024-04-15 18:18:51.127149] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:02.425 [2024-04-15 18:18:51.127179] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:02.425 [2024-04-15 18:18:51.127195] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:02.425 [2024-04-15 18:18:51.127209] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:02.425 [2024-04-15 18:18:51.127243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:02.425 qpair failed and we were unable to recover it. 
00:32:02.425 [2024-04-15 18:18:51.137013] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:02.425 [2024-04-15 18:18:51.137156] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:02.425 [2024-04-15 18:18:51.137186] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:02.425 [2024-04-15 18:18:51.137203] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:02.425 [2024-04-15 18:18:51.137217] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:02.425 [2024-04-15 18:18:51.137251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:02.425 qpair failed and we were unable to recover it. 00:32:02.425 [2024-04-15 18:18:51.147111] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:02.425 [2024-04-15 18:18:51.147270] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:02.425 [2024-04-15 18:18:51.147298] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:02.425 [2024-04-15 18:18:51.147314] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:02.425 [2024-04-15 18:18:51.147329] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:02.425 [2024-04-15 18:18:51.147362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:02.425 qpair failed and we were unable to recover it. 00:32:02.425 [2024-04-15 18:18:51.157100] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:02.425 [2024-04-15 18:18:51.157231] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:02.425 [2024-04-15 18:18:51.157260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:02.425 [2024-04-15 18:18:51.157276] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:02.425 [2024-04-15 18:18:51.157290] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:02.425 [2024-04-15 18:18:51.157324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:02.425 qpair failed and we were unable to recover it. 
00:32:02.425 [2024-04-15 18:18:51.167126] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:02.425 [2024-04-15 18:18:51.167275] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:02.425 [2024-04-15 18:18:51.167304] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:02.425 [2024-04-15 18:18:51.167320] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:02.425 [2024-04-15 18:18:51.167334] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:02.425 [2024-04-15 18:18:51.167368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:02.425 qpair failed and we were unable to recover it. 00:32:02.425 [2024-04-15 18:18:51.177137] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:02.425 [2024-04-15 18:18:51.177279] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:02.425 [2024-04-15 18:18:51.177309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:02.425 [2024-04-15 18:18:51.177326] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:02.425 [2024-04-15 18:18:51.177340] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:02.425 [2024-04-15 18:18:51.177374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:02.425 qpair failed and we were unable to recover it. 00:32:02.425 [2024-04-15 18:18:51.187218] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:02.425 [2024-04-15 18:18:51.187376] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:02.425 [2024-04-15 18:18:51.187405] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:02.425 [2024-04-15 18:18:51.187428] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:02.425 [2024-04-15 18:18:51.187443] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:02.425 [2024-04-15 18:18:51.187477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:02.425 qpair failed and we were unable to recover it. 
00:32:02.425 [2024-04-15 18:18:51.197177] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:02.425 [2024-04-15 18:18:51.197359] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:02.425 [2024-04-15 18:18:51.197389] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:02.425 [2024-04-15 18:18:51.197405] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:02.425 [2024-04-15 18:18:51.197419] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:02.425 [2024-04-15 18:18:51.197452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:02.425 qpair failed and we were unable to recover it. 00:32:02.425 [2024-04-15 18:18:51.207244] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:02.425 [2024-04-15 18:18:51.207403] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:02.425 [2024-04-15 18:18:51.207432] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:02.425 [2024-04-15 18:18:51.207448] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:02.425 [2024-04-15 18:18:51.207462] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:02.425 [2024-04-15 18:18:51.207495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:02.425 qpair failed and we were unable to recover it. 00:32:02.425 [2024-04-15 18:18:51.217374] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:02.425 [2024-04-15 18:18:51.217512] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:02.425 [2024-04-15 18:18:51.217540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:02.425 [2024-04-15 18:18:51.217556] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:02.425 [2024-04-15 18:18:51.217570] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:02.425 [2024-04-15 18:18:51.217603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:02.425 qpair failed and we were unable to recover it. 
00:32:02.425 [2024-04-15 18:18:51.227326] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:02.426 [2024-04-15 18:18:51.227458] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:02.426 [2024-04-15 18:18:51.227487] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:02.426 [2024-04-15 18:18:51.227503] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:02.426 [2024-04-15 18:18:51.227517] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:02.426 [2024-04-15 18:18:51.227550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:02.426 qpair failed and we were unable to recover it. 00:32:02.426 [2024-04-15 18:18:51.237294] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:02.426 [2024-04-15 18:18:51.237472] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:02.426 [2024-04-15 18:18:51.237501] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:02.426 [2024-04-15 18:18:51.237517] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:02.426 [2024-04-15 18:18:51.237531] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:02.426 [2024-04-15 18:18:51.237564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:02.426 qpair failed and we were unable to recover it. 00:32:02.426 [2024-04-15 18:18:51.247360] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:02.426 [2024-04-15 18:18:51.247505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:02.426 [2024-04-15 18:18:51.247533] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:02.426 [2024-04-15 18:18:51.247549] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:02.426 [2024-04-15 18:18:51.247563] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:02.426 [2024-04-15 18:18:51.247596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:02.426 qpair failed and we were unable to recover it. 
00:32:02.426 [2024-04-15 18:18:51.257339] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:02.426 [2024-04-15 18:18:51.257479] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:02.426 [2024-04-15 18:18:51.257509] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:02.426 [2024-04-15 18:18:51.257525] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:02.426 [2024-04-15 18:18:51.257539] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:02.426 [2024-04-15 18:18:51.257572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:02.426 qpair failed and we were unable to recover it. 00:32:02.426 [2024-04-15 18:18:51.267381] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:02.426 [2024-04-15 18:18:51.267511] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:02.426 [2024-04-15 18:18:51.267540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:02.426 [2024-04-15 18:18:51.267556] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:02.426 [2024-04-15 18:18:51.267569] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:02.426 [2024-04-15 18:18:51.267603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:02.426 qpair failed and we were unable to recover it. 00:32:02.426 [2024-04-15 18:18:51.277421] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:02.426 [2024-04-15 18:18:51.277555] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:02.426 [2024-04-15 18:18:51.277583] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:02.426 [2024-04-15 18:18:51.277605] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:02.426 [2024-04-15 18:18:51.277620] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:02.426 [2024-04-15 18:18:51.277653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:02.426 qpair failed and we were unable to recover it. 
00:32:02.426 [2024-04-15 18:18:51.287448] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:02.426 [2024-04-15 18:18:51.287590] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:02.426 [2024-04-15 18:18:51.287619] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:02.426 [2024-04-15 18:18:51.287635] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:02.426 [2024-04-15 18:18:51.287648] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:02.426 [2024-04-15 18:18:51.287681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:02.426 qpair failed and we were unable to recover it. 00:32:02.426 [2024-04-15 18:18:51.297453] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:02.426 [2024-04-15 18:18:51.297598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:02.426 [2024-04-15 18:18:51.297627] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:02.426 [2024-04-15 18:18:51.297644] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:02.426 [2024-04-15 18:18:51.297658] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:02.426 [2024-04-15 18:18:51.297691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:02.426 qpair failed and we were unable to recover it. 00:32:02.426 [2024-04-15 18:18:51.307495] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:02.426 [2024-04-15 18:18:51.307662] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:02.426 [2024-04-15 18:18:51.307691] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:02.426 [2024-04-15 18:18:51.307708] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:02.426 [2024-04-15 18:18:51.307722] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:02.426 [2024-04-15 18:18:51.307755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:02.426 qpair failed and we were unable to recover it. 
00:32:02.426 [2024-04-15 18:18:51.317539] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:02.426 [2024-04-15 18:18:51.317676] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:02.426 [2024-04-15 18:18:51.317705] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:02.426 [2024-04-15 18:18:51.317721] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:02.426 [2024-04-15 18:18:51.317735] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:32:02.426 [2024-04-15 18:18:51.317768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:32:02.426 qpair failed and we were unable to recover it.
00:32:02.426 [2024-04-15 18:18:51.327566] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:02.426 [2024-04-15 18:18:51.327704] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:02.426 [2024-04-15 18:18:51.327733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:02.426 [2024-04-15 18:18:51.327750] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:02.426 [2024-04-15 18:18:51.327764] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:32:02.426 [2024-04-15 18:18:51.327797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:32:02.426 qpair failed and we were unable to recover it.
00:32:02.426 [2024-04-15 18:18:51.337579] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:02.426 [2024-04-15 18:18:51.337720] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:02.426 [2024-04-15 18:18:51.337749] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:02.426 [2024-04-15 18:18:51.337766] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:02.426 [2024-04-15 18:18:51.337780] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:32:02.426 [2024-04-15 18:18:51.337813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:32:02.426 qpair failed and we were unable to recover it.
00:32:02.426 [2024-04-15 18:18:51.347679] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:02.426 [2024-04-15 18:18:51.347860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:02.426 [2024-04-15 18:18:51.347888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:02.426 [2024-04-15 18:18:51.347905] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:02.426 [2024-04-15 18:18:51.347919] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:32:02.426 [2024-04-15 18:18:51.347953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:32:02.426 qpair failed and we were unable to recover it.
00:32:02.426 [2024-04-15 18:18:51.357652] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:02.426 [2024-04-15 18:18:51.357786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:02.426 [2024-04-15 18:18:51.357814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:02.426 [2024-04-15 18:18:51.357830] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:02.426 [2024-04-15 18:18:51.357844] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:32:02.426 [2024-04-15 18:18:51.357876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:32:02.426 qpair failed and we were unable to recover it.
00:32:02.426 [2024-04-15 18:18:51.367685] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:02.426 [2024-04-15 18:18:51.367834] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:02.426 [2024-04-15 18:18:51.367872] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:02.426 [2024-04-15 18:18:51.367889] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:02.426 [2024-04-15 18:18:51.367903] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:32:02.426 [2024-04-15 18:18:51.367937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:32:02.426 qpair failed and we were unable to recover it.
00:32:02.686 [2024-04-15 18:18:51.377703] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:02.686 [2024-04-15 18:18:51.377847] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:02.686 [2024-04-15 18:18:51.377877] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:02.686 [2024-04-15 18:18:51.377893] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:02.686 [2024-04-15 18:18:51.377908] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:32:02.686 [2024-04-15 18:18:51.377941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:32:02.686 qpair failed and we were unable to recover it.
00:32:02.686 [2024-04-15 18:18:51.387763] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:02.686 [2024-04-15 18:18:51.387988] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:02.686 [2024-04-15 18:18:51.388017] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:02.686 [2024-04-15 18:18:51.388033] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:02.686 [2024-04-15 18:18:51.388047] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:32:02.686 [2024-04-15 18:18:51.388095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:32:02.686 qpair failed and we were unable to recover it.
00:32:02.686 [2024-04-15 18:18:51.397757] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:02.686 [2024-04-15 18:18:51.397886] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:02.686 [2024-04-15 18:18:51.397916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:02.686 [2024-04-15 18:18:51.397932] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:02.686 [2024-04-15 18:18:51.397946] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:32:02.686 [2024-04-15 18:18:51.397980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:32:02.686 qpair failed and we were unable to recover it.
00:32:02.686 [2024-04-15 18:18:51.407819] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:02.686 [2024-04-15 18:18:51.407980] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:02.686 [2024-04-15 18:18:51.408009] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:02.686 [2024-04-15 18:18:51.408026] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:02.686 [2024-04-15 18:18:51.408040] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:32:02.686 [2024-04-15 18:18:51.408093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:32:02.686 qpair failed and we were unable to recover it.
00:32:02.686 [2024-04-15 18:18:51.417828] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:02.686 [2024-04-15 18:18:51.418012] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:02.686 [2024-04-15 18:18:51.418040] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:02.686 [2024-04-15 18:18:51.418056] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:02.686 [2024-04-15 18:18:51.418079] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:32:02.686 [2024-04-15 18:18:51.418114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:32:02.686 qpair failed and we were unable to recover it.
00:32:02.686 [2024-04-15 18:18:51.427881] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:02.686 [2024-04-15 18:18:51.428019] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:02.686 [2024-04-15 18:18:51.428048] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:02.686 [2024-04-15 18:18:51.428083] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:02.686 [2024-04-15 18:18:51.428105] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:32:02.686 [2024-04-15 18:18:51.428141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:32:02.686 qpair failed and we were unable to recover it.
00:32:02.686 [2024-04-15 18:18:51.437849] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:02.686 [2024-04-15 18:18:51.437988] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:02.686 [2024-04-15 18:18:51.438017] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:02.686 [2024-04-15 18:18:51.438034] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:02.686 [2024-04-15 18:18:51.438048] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:32:02.686 [2024-04-15 18:18:51.438089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:32:02.686 qpair failed and we were unable to recover it.
00:32:02.686 [2024-04-15 18:18:51.447932] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:02.686 [2024-04-15 18:18:51.448109] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:02.687 [2024-04-15 18:18:51.448138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:02.687 [2024-04-15 18:18:51.448154] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:02.687 [2024-04-15 18:18:51.448169] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:32:02.687 [2024-04-15 18:18:51.448203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:32:02.687 qpair failed and we were unable to recover it.
00:32:02.687 [2024-04-15 18:18:51.457917] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:02.687 [2024-04-15 18:18:51.458053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:02.687 [2024-04-15 18:18:51.458100] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:02.687 [2024-04-15 18:18:51.458120] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:02.687 [2024-04-15 18:18:51.458134] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:32:02.687 [2024-04-15 18:18:51.458168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:32:02.687 qpair failed and we were unable to recover it.
00:32:02.687 [2024-04-15 18:18:51.467964] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:02.687 [2024-04-15 18:18:51.468117] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:02.687 [2024-04-15 18:18:51.468148] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:02.687 [2024-04-15 18:18:51.468165] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:02.687 [2024-04-15 18:18:51.468179] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:32:02.687 [2024-04-15 18:18:51.468213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:32:02.687 qpair failed and we were unable to recover it.
00:32:02.687 [2024-04-15 18:18:51.477992] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:02.687 [2024-04-15 18:18:51.478136] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:02.687 [2024-04-15 18:18:51.478166] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:02.687 [2024-04-15 18:18:51.478182] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:02.687 [2024-04-15 18:18:51.478196] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:32:02.687 [2024-04-15 18:18:51.478230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:32:02.687 qpair failed and we were unable to recover it.
00:32:02.687 [2024-04-15 18:18:51.488042] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:02.687 [2024-04-15 18:18:51.488191] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:02.687 [2024-04-15 18:18:51.488220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:02.687 [2024-04-15 18:18:51.488236] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:02.687 [2024-04-15 18:18:51.488250] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:32:02.687 [2024-04-15 18:18:51.488284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:32:02.687 qpair failed and we were unable to recover it.
00:32:02.687 [2024-04-15 18:18:51.498044] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:02.687 [2024-04-15 18:18:51.498194] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:02.687 [2024-04-15 18:18:51.498223] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:02.687 [2024-04-15 18:18:51.498240] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:02.687 [2024-04-15 18:18:51.498259] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:32:02.687 [2024-04-15 18:18:51.498293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:32:02.687 qpair failed and we were unable to recover it.
00:32:02.687 [2024-04-15 18:18:51.508104] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:02.687 [2024-04-15 18:18:51.508249] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:02.687 [2024-04-15 18:18:51.508278] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:02.687 [2024-04-15 18:18:51.508294] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:02.687 [2024-04-15 18:18:51.508308] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:32:02.687 [2024-04-15 18:18:51.508342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:32:02.687 qpair failed and we were unable to recover it.
00:32:02.687 [2024-04-15 18:18:51.518138] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:02.687 [2024-04-15 18:18:51.518322] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:02.687 [2024-04-15 18:18:51.518351] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:02.687 [2024-04-15 18:18:51.518368] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:02.687 [2024-04-15 18:18:51.518382] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:32:02.687 [2024-04-15 18:18:51.518416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:32:02.687 qpair failed and we were unable to recover it.
00:32:02.687 [2024-04-15 18:18:51.528168] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:02.687 [2024-04-15 18:18:51.528322] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:02.687 [2024-04-15 18:18:51.528351] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:02.687 [2024-04-15 18:18:51.528367] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:02.687 [2024-04-15 18:18:51.528381] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:32:02.687 [2024-04-15 18:18:51.528414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:32:02.687 qpair failed and we were unable to recover it.
00:32:02.687 [2024-04-15 18:18:51.538232] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:02.687 [2024-04-15 18:18:51.538375] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:02.687 [2024-04-15 18:18:51.538404] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:02.687 [2024-04-15 18:18:51.538421] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:02.687 [2024-04-15 18:18:51.538435] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:32:02.687 [2024-04-15 18:18:51.538468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:32:02.687 qpair failed and we were unable to recover it.
00:32:02.687 [2024-04-15 18:18:51.548223] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:02.687 [2024-04-15 18:18:51.548367] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:02.687 [2024-04-15 18:18:51.548397] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:02.687 [2024-04-15 18:18:51.548413] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:02.687 [2024-04-15 18:18:51.548427] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:32:02.687 [2024-04-15 18:18:51.548460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:32:02.687 qpair failed and we were unable to recover it.
00:32:02.687 [2024-04-15 18:18:51.558226] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:02.687 [2024-04-15 18:18:51.558363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:02.687 [2024-04-15 18:18:51.558393] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:02.687 [2024-04-15 18:18:51.558409] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:02.687 [2024-04-15 18:18:51.558423] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:32:02.687 [2024-04-15 18:18:51.558456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:32:02.687 qpair failed and we were unable to recover it.
00:32:02.687 [2024-04-15 18:18:51.568369] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:02.687 [2024-04-15 18:18:51.568566] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:02.687 [2024-04-15 18:18:51.568594] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:02.687 [2024-04-15 18:18:51.568610] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:02.687 [2024-04-15 18:18:51.568624] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:32:02.687 [2024-04-15 18:18:51.568656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:32:02.687 qpair failed and we were unable to recover it.
00:32:02.687 [2024-04-15 18:18:51.578290] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:02.687 [2024-04-15 18:18:51.578435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:02.687 [2024-04-15 18:18:51.578462] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:02.687 [2024-04-15 18:18:51.578478] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:02.687 [2024-04-15 18:18:51.578492] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:32:02.687 [2024-04-15 18:18:51.578524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:32:02.687 qpair failed and we were unable to recover it.
00:32:02.688 [2024-04-15 18:18:51.588322] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:02.688 [2024-04-15 18:18:51.588465] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:02.688 [2024-04-15 18:18:51.588494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:02.688 [2024-04-15 18:18:51.588516] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:02.688 [2024-04-15 18:18:51.588530] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:32:02.688 [2024-04-15 18:18:51.588563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:32:02.688 qpair failed and we were unable to recover it.
00:32:02.688 [2024-04-15 18:18:51.598358] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:02.688 [2024-04-15 18:18:51.598500] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:02.688 [2024-04-15 18:18:51.598529] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:02.688 [2024-04-15 18:18:51.598545] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:02.688 [2024-04-15 18:18:51.598559] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:32:02.688 [2024-04-15 18:18:51.598591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:32:02.688 qpair failed and we were unable to recover it.
00:32:02.688 [2024-04-15 18:18:51.608400] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:02.688 [2024-04-15 18:18:51.608544] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:02.688 [2024-04-15 18:18:51.608572] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:02.688 [2024-04-15 18:18:51.608588] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:02.688 [2024-04-15 18:18:51.608602] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:32:02.688 [2024-04-15 18:18:51.608635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:32:02.688 qpair failed and we were unable to recover it.
00:32:02.688 [2024-04-15 18:18:51.618412] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:02.688 [2024-04-15 18:18:51.618545] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:02.688 [2024-04-15 18:18:51.618574] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:02.688 [2024-04-15 18:18:51.618590] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:02.688 [2024-04-15 18:18:51.618604] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:32:02.688 [2024-04-15 18:18:51.618637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:32:02.688 qpair failed and we were unable to recover it.
00:32:02.688 [2024-04-15 18:18:51.628430] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:02.688 [2024-04-15 18:18:51.628570] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:02.688 [2024-04-15 18:18:51.628598] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:02.688 [2024-04-15 18:18:51.628614] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:02.688 [2024-04-15 18:18:51.628628] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:32:02.688 [2024-04-15 18:18:51.628661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:32:02.688 qpair failed and we were unable to recover it.
00:32:02.948 [2024-04-15 18:18:51.638469] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:02.948 [2024-04-15 18:18:51.638605] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:02.948 [2024-04-15 18:18:51.638635] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:02.948 [2024-04-15 18:18:51.638651] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:02.948 [2024-04-15 18:18:51.638665] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:32:02.948 [2024-04-15 18:18:51.638698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:32:02.948 qpair failed and we were unable to recover it.
00:32:02.948 [2024-04-15 18:18:51.648553] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:02.948 [2024-04-15 18:18:51.648692] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:02.948 [2024-04-15 18:18:51.648722] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:02.948 [2024-04-15 18:18:51.648738] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:02.948 [2024-04-15 18:18:51.648752] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:32:02.948 [2024-04-15 18:18:51.648786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:32:02.948 qpair failed and we were unable to recover it.
00:32:02.948 [2024-04-15 18:18:51.658545] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:02.948 [2024-04-15 18:18:51.658695] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:02.948 [2024-04-15 18:18:51.658724] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:02.948 [2024-04-15 18:18:51.658741] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:02.948 [2024-04-15 18:18:51.658755] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:32:02.948 [2024-04-15 18:18:51.658788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:32:02.948 qpair failed and we were unable to recover it.
00:32:02.948 [2024-04-15 18:18:51.668541] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:02.948 [2024-04-15 18:18:51.668695] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:02.948 [2024-04-15 18:18:51.668724] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:02.948 [2024-04-15 18:18:51.668741] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:02.948 [2024-04-15 18:18:51.668755] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:32:02.948 [2024-04-15 18:18:51.668789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:32:02.948 qpair failed and we were unable to recover it.
00:32:02.948 [2024-04-15 18:18:51.678595] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:02.948 [2024-04-15 18:18:51.678733] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:02.948 [2024-04-15 18:18:51.678762] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:02.948 [2024-04-15 18:18:51.678785] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:02.948 [2024-04-15 18:18:51.678800] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:32:02.948 [2024-04-15 18:18:51.678833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:32:02.948 qpair failed and we were unable to recover it.
00:32:02.948 [2024-04-15 18:18:51.688717] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:02.948 [2024-04-15 18:18:51.688872] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:02.948 [2024-04-15 18:18:51.688901] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:02.948 [2024-04-15 18:18:51.688917] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:02.948 [2024-04-15 18:18:51.688931] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:32:02.948 [2024-04-15 18:18:51.688964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:32:02.948 qpair failed and we were unable to recover it.
00:32:02.948 [2024-04-15 18:18:51.698647] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:02.948 [2024-04-15 18:18:51.698785] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:02.948 [2024-04-15 18:18:51.698814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:02.948 [2024-04-15 18:18:51.698830] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:02.948 [2024-04-15 18:18:51.698844] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:32:02.948 [2024-04-15 18:18:51.698878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:32:02.948 qpair failed and we were unable to recover it.
00:32:02.948 [2024-04-15 18:18:51.708666] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:02.948 [2024-04-15 18:18:51.708925] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:02.948 [2024-04-15 18:18:51.708954] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:02.948 [2024-04-15 18:18:51.708970] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:02.948 [2024-04-15 18:18:51.708985] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:32:02.948 [2024-04-15 18:18:51.709018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:32:02.948 qpair failed and we were unable to recover it.
00:32:02.948 [2024-04-15 18:18:51.718682] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:02.948 [2024-04-15 18:18:51.718831] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:02.948 [2024-04-15 18:18:51.718859] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:02.948 [2024-04-15 18:18:51.718875] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:02.948 [2024-04-15 18:18:51.718889] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:32:02.948 [2024-04-15 18:18:51.718923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:32:02.948 qpair failed and we were unable to recover it.
00:32:02.948 [2024-04-15 18:18:51.728764] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:02.948 [2024-04-15 18:18:51.728915] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:02.948 [2024-04-15 18:18:51.728944] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:02.948 [2024-04-15 18:18:51.728960] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:02.948 [2024-04-15 18:18:51.728974] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:32:02.948 [2024-04-15 18:18:51.729007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:32:02.948 qpair failed and we were unable to recover it.
00:32:02.948 [2024-04-15 18:18:51.738773] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:02.948 [2024-04-15 18:18:51.738917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:02.948 [2024-04-15 18:18:51.738945] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:02.948 [2024-04-15 18:18:51.738962] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:02.948 [2024-04-15 18:18:51.738976] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:32:02.948 [2024-04-15 18:18:51.739009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:32:02.948 qpair failed and we were unable to recover it.
00:32:02.948 [2024-04-15 18:18:51.748779] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:02.948 [2024-04-15 18:18:51.748919] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:02.949 [2024-04-15 18:18:51.748947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:02.949 [2024-04-15 18:18:51.748963] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:02.949 [2024-04-15 18:18:51.748977] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:32:02.949 [2024-04-15 18:18:51.749010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:32:02.949 qpair failed and we were unable to recover it.
00:32:02.949 [2024-04-15 18:18:51.758794] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:02.949 [2024-04-15 18:18:51.758932] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:02.949 [2024-04-15 18:18:51.758961] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:02.949 [2024-04-15 18:18:51.758977] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:02.949 [2024-04-15 18:18:51.758991] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:32:02.949 [2024-04-15 18:18:51.759024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:32:02.949 qpair failed and we were unable to recover it.
00:32:02.949 [2024-04-15 18:18:51.768855] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:02.949 [2024-04-15 18:18:51.769000] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:02.949 [2024-04-15 18:18:51.769034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:02.949 [2024-04-15 18:18:51.769051] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:02.949 [2024-04-15 18:18:51.769075] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:32:02.949 [2024-04-15 18:18:51.769109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:32:02.949 qpair failed and we were unable to recover it.
00:32:02.949 [2024-04-15 18:18:51.778862] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:02.949 [2024-04-15 18:18:51.779001] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:02.949 [2024-04-15 18:18:51.779030] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:02.949 [2024-04-15 18:18:51.779046] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:02.949 [2024-04-15 18:18:51.779067] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:32:02.949 [2024-04-15 18:18:51.779102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:32:02.949 qpair failed and we were unable to recover it.
00:32:02.949 [2024-04-15 18:18:51.788917] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:02.949 [2024-04-15 18:18:51.789064] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:02.949 [2024-04-15 18:18:51.789093] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:02.949 [2024-04-15 18:18:51.789109] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:02.949 [2024-04-15 18:18:51.789123] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:32:02.949 [2024-04-15 18:18:51.789158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:32:02.949 qpair failed and we were unable to recover it.
00:32:02.949 [2024-04-15 18:18:51.798912] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:02.949 [2024-04-15 18:18:51.799048] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:02.949 [2024-04-15 18:18:51.799085] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:02.949 [2024-04-15 18:18:51.799102] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:02.949 [2024-04-15 18:18:51.799116] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:32:02.949 [2024-04-15 18:18:51.799149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:32:02.949 qpair failed and we were unable to recover it.
00:32:02.949 [2024-04-15 18:18:51.808972] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:02.949 [2024-04-15 18:18:51.809163] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:02.949 [2024-04-15 18:18:51.809193] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:02.949 [2024-04-15 18:18:51.809209] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:02.949 [2024-04-15 18:18:51.809224] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:32:02.949 [2024-04-15 18:18:51.809264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:32:02.949 qpair failed and we were unable to recover it.
00:32:02.949 [2024-04-15 18:18:51.818966] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:02.949 [2024-04-15 18:18:51.819122] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:02.949 [2024-04-15 18:18:51.819151] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:02.949 [2024-04-15 18:18:51.819167] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:02.949 [2024-04-15 18:18:51.819181] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:32:02.949 [2024-04-15 18:18:51.819214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:32:02.949 qpair failed and we were unable to recover it.
00:32:02.949 [2024-04-15 18:18:51.828983] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:02.949 [2024-04-15 18:18:51.829234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:02.949 [2024-04-15 18:18:51.829264] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:02.949 [2024-04-15 18:18:51.829281] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:02.949 [2024-04-15 18:18:51.829295] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:32:02.949 [2024-04-15 18:18:51.829330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:32:02.949 qpair failed and we were unable to recover it.
00:32:02.949 [2024-04-15 18:18:51.839115] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:02.949 [2024-04-15 18:18:51.839310] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:02.949 [2024-04-15 18:18:51.839339] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:02.949 [2024-04-15 18:18:51.839356] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:02.949 [2024-04-15 18:18:51.839370] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:32:02.949 [2024-04-15 18:18:51.839403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:32:02.949 qpair failed and we were unable to recover it.
00:32:02.949 [2024-04-15 18:18:51.849076] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:02.949 [2024-04-15 18:18:51.849217] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:02.949 [2024-04-15 18:18:51.849245] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:02.949 [2024-04-15 18:18:51.849262] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:02.949 [2024-04-15 18:18:51.849276] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:32:02.949 [2024-04-15 18:18:51.849310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:32:02.949 qpair failed and we were unable to recover it.
00:32:02.949 [2024-04-15 18:18:51.859119] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:02.949 [2024-04-15 18:18:51.859279] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:02.949 [2024-04-15 18:18:51.859315] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:02.949 [2024-04-15 18:18:51.859332] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:02.949 [2024-04-15 18:18:51.859346] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:32:02.949 [2024-04-15 18:18:51.859379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:32:02.949 qpair failed and we were unable to recover it.
00:32:02.949 [2024-04-15 18:18:51.869130] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:02.949 [2024-04-15 18:18:51.869267] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:02.949 [2024-04-15 18:18:51.869296] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:02.949 [2024-04-15 18:18:51.869313] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:02.950 [2024-04-15 18:18:51.869327] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:32:02.950 [2024-04-15 18:18:51.869360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:32:02.950 qpair failed and we were unable to recover it.
00:32:02.950 [2024-04-15 18:18:51.879145] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:02.950 [2024-04-15 18:18:51.879285] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:02.950 [2024-04-15 18:18:51.879313] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:02.950 [2024-04-15 18:18:51.879329] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:02.950 [2024-04-15 18:18:51.879343] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:32:02.950 [2024-04-15 18:18:51.879377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:32:02.950 qpair failed and we were unable to recover it.
00:32:02.950 [2024-04-15 18:18:51.889184] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:02.950 [2024-04-15 18:18:51.889353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:02.950 [2024-04-15 18:18:51.889382] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:02.950 [2024-04-15 18:18:51.889399] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:02.950 [2024-04-15 18:18:51.889413] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:02.950 [2024-04-15 18:18:51.889446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:02.950 qpair failed and we were unable to recover it. 00:32:02.950 [2024-04-15 18:18:51.899246] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:02.950 [2024-04-15 18:18:51.899408] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:02.950 [2024-04-15 18:18:51.899437] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:02.950 [2024-04-15 18:18:51.899454] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:02.950 [2024-04-15 18:18:51.899474] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:02.950 [2024-04-15 18:18:51.899509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:02.950 qpair failed and we were unable to recover it. 00:32:03.211 [2024-04-15 18:18:51.909285] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.211 [2024-04-15 18:18:51.909426] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.211 [2024-04-15 18:18:51.909455] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.211 [2024-04-15 18:18:51.909472] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.211 [2024-04-15 18:18:51.909487] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:03.211 [2024-04-15 18:18:51.909520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:03.211 qpair failed and we were unable to recover it. 
00:32:03.211 [2024-04-15 18:18:51.919285] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.211 [2024-04-15 18:18:51.919458] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.211 [2024-04-15 18:18:51.919486] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.211 [2024-04-15 18:18:51.919503] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.211 [2024-04-15 18:18:51.919517] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:03.211 [2024-04-15 18:18:51.919550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:03.211 qpair failed and we were unable to recover it. 00:32:03.211 [2024-04-15 18:18:51.929406] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.211 [2024-04-15 18:18:51.929588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.211 [2024-04-15 18:18:51.929617] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.211 [2024-04-15 18:18:51.929633] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.211 [2024-04-15 18:18:51.929647] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:03.211 [2024-04-15 18:18:51.929680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:03.211 qpair failed and we were unable to recover it. 00:32:03.211 [2024-04-15 18:18:51.939330] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.211 [2024-04-15 18:18:51.939470] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.211 [2024-04-15 18:18:51.939499] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.211 [2024-04-15 18:18:51.939515] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.211 [2024-04-15 18:18:51.939529] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:03.211 [2024-04-15 18:18:51.939562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:03.211 qpair failed and we were unable to recover it. 
00:32:03.211 [2024-04-15 18:18:51.949388] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.211 [2024-04-15 18:18:51.949530] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.211 [2024-04-15 18:18:51.949559] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.211 [2024-04-15 18:18:51.949578] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.211 [2024-04-15 18:18:51.949593] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:03.211 [2024-04-15 18:18:51.949627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:03.211 qpair failed and we were unable to recover it. 00:32:03.211 [2024-04-15 18:18:51.959468] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.211 [2024-04-15 18:18:51.959606] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.211 [2024-04-15 18:18:51.959635] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.211 [2024-04-15 18:18:51.959652] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.211 [2024-04-15 18:18:51.959667] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:03.211 [2024-04-15 18:18:51.959700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:03.211 qpair failed and we were unable to recover it. 00:32:03.211 [2024-04-15 18:18:51.969432] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.211 [2024-04-15 18:18:51.969573] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.211 [2024-04-15 18:18:51.969601] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.211 [2024-04-15 18:18:51.969618] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.211 [2024-04-15 18:18:51.969632] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:03.211 [2024-04-15 18:18:51.969665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:03.211 qpair failed and we were unable to recover it. 
00:32:03.211 [2024-04-15 18:18:51.979500] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.211 [2024-04-15 18:18:51.979659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.211 [2024-04-15 18:18:51.979687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.211 [2024-04-15 18:18:51.979704] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.211 [2024-04-15 18:18:51.979718] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:03.211 [2024-04-15 18:18:51.979751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:03.211 qpair failed and we were unable to recover it. 00:32:03.211 [2024-04-15 18:18:51.989538] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.211 [2024-04-15 18:18:51.989710] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.211 [2024-04-15 18:18:51.989739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.211 [2024-04-15 18:18:51.989755] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.211 [2024-04-15 18:18:51.989775] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:03.211 [2024-04-15 18:18:51.989809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:03.211 qpair failed and we were unable to recover it. 00:32:03.211 [2024-04-15 18:18:51.999522] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.211 [2024-04-15 18:18:51.999659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.211 [2024-04-15 18:18:51.999688] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.211 [2024-04-15 18:18:51.999705] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.211 [2024-04-15 18:18:51.999719] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:03.211 [2024-04-15 18:18:51.999752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:03.211 qpair failed and we were unable to recover it. 
00:32:03.211 [2024-04-15 18:18:52.009559] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.211 [2024-04-15 18:18:52.009700] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.211 [2024-04-15 18:18:52.009729] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.211 [2024-04-15 18:18:52.009745] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.212 [2024-04-15 18:18:52.009760] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:03.212 [2024-04-15 18:18:52.009792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:03.212 qpair failed and we were unable to recover it. 00:32:03.212 [2024-04-15 18:18:52.019583] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.212 [2024-04-15 18:18:52.019720] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.212 [2024-04-15 18:18:52.019749] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.212 [2024-04-15 18:18:52.019766] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.212 [2024-04-15 18:18:52.019780] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:03.212 [2024-04-15 18:18:52.019813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:03.212 qpair failed and we were unable to recover it. 00:32:03.212 [2024-04-15 18:18:52.029617] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.212 [2024-04-15 18:18:52.029746] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.212 [2024-04-15 18:18:52.029775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.212 [2024-04-15 18:18:52.029791] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.212 [2024-04-15 18:18:52.029805] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:03.212 [2024-04-15 18:18:52.029838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:03.212 qpair failed and we were unable to recover it. 
00:32:03.212 [2024-04-15 18:18:52.039603] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.212 [2024-04-15 18:18:52.039746] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.212 [2024-04-15 18:18:52.039776] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.212 [2024-04-15 18:18:52.039792] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.212 [2024-04-15 18:18:52.039806] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:03.212 [2024-04-15 18:18:52.039839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:03.212 qpair failed and we were unable to recover it. 00:32:03.212 [2024-04-15 18:18:52.049662] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.212 [2024-04-15 18:18:52.049815] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.212 [2024-04-15 18:18:52.049844] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.212 [2024-04-15 18:18:52.049860] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.212 [2024-04-15 18:18:52.049875] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:03.212 [2024-04-15 18:18:52.049908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:03.212 qpair failed and we were unable to recover it. 00:32:03.212 [2024-04-15 18:18:52.059675] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.212 [2024-04-15 18:18:52.059814] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.212 [2024-04-15 18:18:52.059844] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.212 [2024-04-15 18:18:52.059860] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.212 [2024-04-15 18:18:52.059874] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:03.212 [2024-04-15 18:18:52.059907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:03.212 qpair failed and we were unable to recover it. 
00:32:03.212 [2024-04-15 18:18:52.069689] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.212 [2024-04-15 18:18:52.069823] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.212 [2024-04-15 18:18:52.069852] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.212 [2024-04-15 18:18:52.069869] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.212 [2024-04-15 18:18:52.069883] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:03.212 [2024-04-15 18:18:52.069916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:03.212 qpair failed and we were unable to recover it. 00:32:03.212 [2024-04-15 18:18:52.079729] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.212 [2024-04-15 18:18:52.079864] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.212 [2024-04-15 18:18:52.079893] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.212 [2024-04-15 18:18:52.079923] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.212 [2024-04-15 18:18:52.079939] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:03.212 [2024-04-15 18:18:52.079972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:03.212 qpair failed and we were unable to recover it. 00:32:03.212 [2024-04-15 18:18:52.089758] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.212 [2024-04-15 18:18:52.089926] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.212 [2024-04-15 18:18:52.089955] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.212 [2024-04-15 18:18:52.089972] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.212 [2024-04-15 18:18:52.089985] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:03.212 [2024-04-15 18:18:52.090018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:03.212 qpair failed and we were unable to recover it. 
00:32:03.212 [2024-04-15 18:18:52.099771] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.212 [2024-04-15 18:18:52.099903] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.212 [2024-04-15 18:18:52.099932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.212 [2024-04-15 18:18:52.099949] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.212 [2024-04-15 18:18:52.099963] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:03.212 [2024-04-15 18:18:52.099995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:03.212 qpair failed and we were unable to recover it. 00:32:03.212 [2024-04-15 18:18:52.109884] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.212 [2024-04-15 18:18:52.110015] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.212 [2024-04-15 18:18:52.110044] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.212 [2024-04-15 18:18:52.110070] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.212 [2024-04-15 18:18:52.110093] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:03.212 [2024-04-15 18:18:52.110128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:03.212 qpair failed and we were unable to recover it. 00:32:03.212 [2024-04-15 18:18:52.119885] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.212 [2024-04-15 18:18:52.120053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.212 [2024-04-15 18:18:52.120093] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.212 [2024-04-15 18:18:52.120109] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.212 [2024-04-15 18:18:52.120124] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:03.212 [2024-04-15 18:18:52.120156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:03.212 qpair failed and we were unable to recover it. 
00:32:03.212 [2024-04-15 18:18:52.129899] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.212 [2024-04-15 18:18:52.130075] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.213 [2024-04-15 18:18:52.130106] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.213 [2024-04-15 18:18:52.130123] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.213 [2024-04-15 18:18:52.130136] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:03.213 [2024-04-15 18:18:52.130170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:03.213 qpair failed and we were unable to recover it. 00:32:03.213 [2024-04-15 18:18:52.139955] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.213 [2024-04-15 18:18:52.140101] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.213 [2024-04-15 18:18:52.140129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.213 [2024-04-15 18:18:52.140145] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.213 [2024-04-15 18:18:52.140160] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:03.213 [2024-04-15 18:18:52.140193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:03.213 qpair failed and we were unable to recover it. 00:32:03.213 [2024-04-15 18:18:52.149921] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.213 [2024-04-15 18:18:52.150065] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.213 [2024-04-15 18:18:52.150094] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.213 [2024-04-15 18:18:52.150110] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.213 [2024-04-15 18:18:52.150124] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:03.213 [2024-04-15 18:18:52.150158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:03.213 qpair failed and we were unable to recover it. 
00:32:03.213 [2024-04-15 18:18:52.160056] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.213 [2024-04-15 18:18:52.160198] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.213 [2024-04-15 18:18:52.160228] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.213 [2024-04-15 18:18:52.160245] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.213 [2024-04-15 18:18:52.160271] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:03.213 [2024-04-15 18:18:52.160316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:03.213 qpair failed and we were unable to recover it. 00:32:03.484 [2024-04-15 18:18:52.170023] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.485 [2024-04-15 18:18:52.170188] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.485 [2024-04-15 18:18:52.170224] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.485 [2024-04-15 18:18:52.170242] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.485 [2024-04-15 18:18:52.170257] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:03.485 [2024-04-15 18:18:52.170291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:03.485 qpair failed and we were unable to recover it. 00:32:03.485 [2024-04-15 18:18:52.180025] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.485 [2024-04-15 18:18:52.180181] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.485 [2024-04-15 18:18:52.180211] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.485 [2024-04-15 18:18:52.180228] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.485 [2024-04-15 18:18:52.180242] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:03.485 [2024-04-15 18:18:52.180277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:03.485 qpair failed and we were unable to recover it. 
00:32:03.485 [2024-04-15 18:18:52.190019] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.485 [2024-04-15 18:18:52.190212] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.485 [2024-04-15 18:18:52.190242] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.485 [2024-04-15 18:18:52.190259] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.485 [2024-04-15 18:18:52.190273] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:03.485 [2024-04-15 18:18:52.190306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:03.485 qpair failed and we were unable to recover it. 00:32:03.485 [2024-04-15 18:18:52.200056] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.485 [2024-04-15 18:18:52.200204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.485 [2024-04-15 18:18:52.200233] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.485 [2024-04-15 18:18:52.200250] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.485 [2024-04-15 18:18:52.200264] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:03.485 [2024-04-15 18:18:52.200298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:03.485 qpair failed and we were unable to recover it. 00:32:03.485 [2024-04-15 18:18:52.210136] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.485 [2024-04-15 18:18:52.210288] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.485 [2024-04-15 18:18:52.210317] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.485 [2024-04-15 18:18:52.210333] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.485 [2024-04-15 18:18:52.210347] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:03.485 [2024-04-15 18:18:52.210386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:03.485 qpair failed and we were unable to recover it. 
00:32:03.485 [2024-04-15 18:18:52.220181] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.485 [2024-04-15 18:18:52.220342] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.485 [2024-04-15 18:18:52.220371] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.485 [2024-04-15 18:18:52.220387] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.485 [2024-04-15 18:18:52.220402] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:03.485 [2024-04-15 18:18:52.220435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:03.485 qpair failed and we were unable to recover it. 00:32:03.485 [2024-04-15 18:18:52.230154] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.485 [2024-04-15 18:18:52.230334] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.485 [2024-04-15 18:18:52.230363] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.485 [2024-04-15 18:18:52.230379] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.485 [2024-04-15 18:18:52.230393] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:03.485 [2024-04-15 18:18:52.230426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:03.485 qpair failed and we were unable to recover it. 00:32:03.485 [2024-04-15 18:18:52.240179] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.485 [2024-04-15 18:18:52.240327] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.485 [2024-04-15 18:18:52.240356] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.485 [2024-04-15 18:18:52.240373] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.485 [2024-04-15 18:18:52.240388] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:03.485 [2024-04-15 18:18:52.240421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:03.485 qpair failed and we were unable to recover it. 
00:32:03.485 [2024-04-15 18:18:52.250249] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.485 [2024-04-15 18:18:52.250390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.485 [2024-04-15 18:18:52.250419] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.485 [2024-04-15 18:18:52.250436] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.485 [2024-04-15 18:18:52.250450] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:03.485 [2024-04-15 18:18:52.250483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:03.485 qpair failed and we were unable to recover it. 00:32:03.485 [2024-04-15 18:18:52.260251] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.485 [2024-04-15 18:18:52.260393] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.485 [2024-04-15 18:18:52.260429] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.485 [2024-04-15 18:18:52.260447] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.485 [2024-04-15 18:18:52.260461] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:03.485 [2024-04-15 18:18:52.260494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:03.485 qpair failed and we were unable to recover it. 00:32:03.485 [2024-04-15 18:18:52.270369] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.485 [2024-04-15 18:18:52.270540] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.485 [2024-04-15 18:18:52.270570] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.485 [2024-04-15 18:18:52.270587] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.485 [2024-04-15 18:18:52.270602] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:03.485 [2024-04-15 18:18:52.270635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:03.485 qpair failed and we were unable to recover it. 
00:32:03.485 [2024-04-15 18:18:52.280320] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.485 [2024-04-15 18:18:52.280467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.485 [2024-04-15 18:18:52.280496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.485 [2024-04-15 18:18:52.280513] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.485 [2024-04-15 18:18:52.280527] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:03.485 [2024-04-15 18:18:52.280560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:03.485 qpair failed and we were unable to recover it. 00:32:03.485 [2024-04-15 18:18:52.290358] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.485 [2024-04-15 18:18:52.290505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.485 [2024-04-15 18:18:52.290540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.485 [2024-04-15 18:18:52.290557] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.485 [2024-04-15 18:18:52.290571] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:03.485 [2024-04-15 18:18:52.290604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:03.485 qpair failed and we were unable to recover it. 00:32:03.485 [2024-04-15 18:18:52.300357] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.485 [2024-04-15 18:18:52.300499] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.485 [2024-04-15 18:18:52.300528] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.485 [2024-04-15 18:18:52.300545] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.485 [2024-04-15 18:18:52.300559] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:03.485 [2024-04-15 18:18:52.300598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:03.485 qpair failed and we were unable to recover it. 
00:32:03.485 [2024-04-15 18:18:52.310389] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.485 [2024-04-15 18:18:52.310562] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.485 [2024-04-15 18:18:52.310590] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.485 [2024-04-15 18:18:52.310606] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.485 [2024-04-15 18:18:52.310620] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:03.485 [2024-04-15 18:18:52.310653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:03.485 qpair failed and we were unable to recover it. 00:32:03.485 [2024-04-15 18:18:52.320447] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.485 [2024-04-15 18:18:52.320625] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.485 [2024-04-15 18:18:52.320654] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.485 [2024-04-15 18:18:52.320670] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.485 [2024-04-15 18:18:52.320683] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:03.485 [2024-04-15 18:18:52.320716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:03.485 qpair failed and we were unable to recover it. 00:32:03.485 [2024-04-15 18:18:52.330599] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.485 [2024-04-15 18:18:52.330791] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.485 [2024-04-15 18:18:52.330820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.485 [2024-04-15 18:18:52.330836] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.485 [2024-04-15 18:18:52.330850] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:03.485 [2024-04-15 18:18:52.330882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:03.485 qpair failed and we were unable to recover it. 
00:32:03.485 [2024-04-15 18:18:52.340482] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.485 [2024-04-15 18:18:52.340617] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.486 [2024-04-15 18:18:52.340646] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.486 [2024-04-15 18:18:52.340663] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.486 [2024-04-15 18:18:52.340677] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:03.486 [2024-04-15 18:18:52.340710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:03.486 qpair failed and we were unable to recover it. 00:32:03.486 [2024-04-15 18:18:52.350535] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.486 [2024-04-15 18:18:52.350693] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.486 [2024-04-15 18:18:52.350722] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.486 [2024-04-15 18:18:52.350738] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.486 [2024-04-15 18:18:52.350752] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:03.486 [2024-04-15 18:18:52.350786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:03.486 qpair failed and we were unable to recover it. 00:32:03.486 [2024-04-15 18:18:52.360530] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.486 [2024-04-15 18:18:52.360668] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.486 [2024-04-15 18:18:52.360697] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.486 [2024-04-15 18:18:52.360713] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.486 [2024-04-15 18:18:52.360728] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:03.486 [2024-04-15 18:18:52.360760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:03.486 qpair failed and we were unable to recover it. 
00:32:03.486 [2024-04-15 18:18:52.370572] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.486 [2024-04-15 18:18:52.370736] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.486 [2024-04-15 18:18:52.370766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.486 [2024-04-15 18:18:52.370782] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.486 [2024-04-15 18:18:52.370796] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:03.486 [2024-04-15 18:18:52.370829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:03.486 qpair failed and we were unable to recover it. 00:32:03.486 [2024-04-15 18:18:52.380597] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.486 [2024-04-15 18:18:52.380774] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.486 [2024-04-15 18:18:52.380802] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.486 [2024-04-15 18:18:52.380819] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.486 [2024-04-15 18:18:52.380832] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:03.486 [2024-04-15 18:18:52.380866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:03.486 qpair failed and we were unable to recover it. 00:32:03.486 [2024-04-15 18:18:52.390699] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.486 [2024-04-15 18:18:52.390837] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.486 [2024-04-15 18:18:52.390866] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.486 [2024-04-15 18:18:52.390882] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.486 [2024-04-15 18:18:52.390902] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90 00:32:03.486 [2024-04-15 18:18:52.390937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:03.486 qpair failed and we were unable to recover it. 
00:32:03.486 [2024-04-15 18:18:52.400659] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:03.486 [2024-04-15 18:18:52.400792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:03.486 [2024-04-15 18:18:52.400822] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:03.486 [2024-04-15 18:18:52.400838] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:03.486 [2024-04-15 18:18:52.400852] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:32:03.486 [2024-04-15 18:18:52.400885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:32:03.486 qpair failed and we were unable to recover it.
[... the same seven-line CONNECT failure sequence repeats, identical apart from timestamps, for 29 further attempts at roughly 10 ms intervals (18:18:52.410 through 18:18:52.691, elapsed 00:32:03.486-00:32:03.748), each against tqpair=0x7f72ec000b90 / qpair id 1 and each ending "qpair failed and we were unable to recover it." ...]
00:32:04.009 [2024-04-15 18:18:52.701597] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:04.009 [2024-04-15 18:18:52.701760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:04.009 [2024-04-15 18:18:52.701790] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:04.009 [2024-04-15 18:18:52.701808] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:04.009 [2024-04-15 18:18:52.701822] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72ec000b90
00:32:04.009 [2024-04-15 18:18:52.701862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:32:04.009 qpair failed and we were unable to recover it.
00:32:04.009 [2024-04-15 18:18:52.711551] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:04.009 [2024-04-15 18:18:52.711697] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:04.009 [2024-04-15 18:18:52.711731] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:04.009 [2024-04-15 18:18:52.711750] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:04.009 [2024-04-15 18:18:52.711764] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18f8ed0
00:32:04.009 [2024-04-15 18:18:52.711798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:04.009 qpair failed and we were unable to recover it.
00:32:04.009 [2024-04-15 18:18:52.721578] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:04.009 [2024-04-15 18:18:52.721709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:04.009 [2024-04-15 18:18:52.721739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:04.009 [2024-04-15 18:18:52.721757] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:04.009 [2024-04-15 18:18:52.721771] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18f8ed0
00:32:04.009 [2024-04-15 18:18:52.721804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:04.009 qpair failed and we were unable to recover it.
00:32:04.009 [2024-04-15 18:18:52.721949] nvme_ctrlr.c:4340:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed
00:32:04.009 A controller has encountered a failure and is being reset.
00:32:04.009 [2024-04-15 18:18:52.722014] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19069d0 (9): Bad file descriptor
00:32:04.009 Controller properly reset.
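[Editor's note: in the CONNECT failures above, sct 1 marks a command-specific status, and for a Fabrics CONNECT command sc 130 (0x82) is "Connect Invalid Parameters" in the NVMe over Fabrics specification -- consistent with the target-side "Unknown controller ID 0x1": the host keeps retrying I/O-qpair CONNECTs for a controller the target no longer tracks, until the Keep Alive failure forces the reset seen above. A minimal sketch for inspecting the target side while this happens, assuming SPDK's scripts/rpc.py and the default /var/tmp/spdk.sock RPC socket (the RPC names are standard SPDK; the subsystem NQN is copied from the log):

  # List subsystems with their listeners and allowed hosts, as the target sees them
  scripts/rpc.py nvmf_get_subsystems
  # Show the controllers the target still tracks for cnode1; an empty list here is
  # the condition that yields "Unknown controller ID" on a late I/O-qpair CONNECT
  scripts/rpc.py nvmf_subsystem_get_controllers nqn.2016-06.io.spdk:cnode1
  # Show the qpairs currently associated with the subsystem
  scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2016-06.io.spdk:cnode1 ]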
00:32:04.009 Initializing NVMe Controllers
00:32:04.009 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:32:04.009 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:32:04.009 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:32:04.009 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:32:04.009 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:32:04.009 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:32:04.009 Initialization complete. Launching workers.
00:32:04.009 Starting thread on core 1
00:32:04.009 Starting thread on core 2
00:32:04.009 Starting thread on core 3
00:32:04.009 Starting thread on core 0
00:32:04.009 18:18:52 -- host/target_disconnect.sh@59 -- # sync
00:32:04.009
00:32:04.009 real 0m10.934s
00:32:04.009 user 0m19.254s
00:32:04.009 sys 0m5.602s
00:32:04.009 18:18:52 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:32:04.009 18:18:52 -- common/autotest_common.sh@10 -- # set +x
00:32:04.009 ************************************
00:32:04.009 END TEST nvmf_target_disconnect_tc2
00:32:04.009 ************************************
00:32:04.009 18:18:52 -- host/target_disconnect.sh@80 -- # '[' -n '' ']'
00:32:04.009 18:18:52 -- host/target_disconnect.sh@84 -- # trap - SIGINT SIGTERM EXIT
00:32:04.009 18:18:52 -- host/target_disconnect.sh@85 -- # nvmftestfini
00:32:04.009 18:18:52 -- nvmf/common.sh@477 -- # nvmfcleanup
00:32:04.009 18:18:52 -- nvmf/common.sh@117 -- # sync
00:32:04.009 18:18:52 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:32:04.009 18:18:52 -- nvmf/common.sh@120 -- # set +e
00:32:04.009 18:18:52 -- nvmf/common.sh@121 -- # for i in {1..20}
00:32:04.009 18:18:52 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:32:04.009 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
18:18:52 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:32:04.009 18:18:52 -- nvmf/common.sh@124 -- # set -e
00:32:04.009 18:18:52 -- nvmf/common.sh@125 -- # return 0
00:32:04.009 18:18:52 -- nvmf/common.sh@478 -- # '[' -n 3462932 ']'
00:32:04.009 18:18:52 -- nvmf/common.sh@479 -- # killprocess 3462932
00:32:04.009 18:18:52 -- common/autotest_common.sh@936 -- # '[' -z 3462932 ']'
00:32:04.009 18:18:52 -- common/autotest_common.sh@940 -- # kill -0 3462932
00:32:04.009 18:18:52 -- common/autotest_common.sh@941 -- # uname
00:32:04.009 18:18:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:32:04.009 18:18:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3462932
00:32:04.009 18:18:52 -- common/autotest_common.sh@942 -- # process_name=reactor_4
00:32:04.009 18:18:52 -- common/autotest_common.sh@946 -- # '[' reactor_4 = sudo ']'
00:32:04.009 18:18:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3462932'
00:32:04.009 killing process with pid 3462932
00:32:04.009 18:18:52 -- common/autotest_common.sh@955 -- # kill 3462932
00:32:04.009 18:18:52 -- common/autotest_common.sh@960 -- # wait 3462932
00:32:04.269 18:18:53 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:32:04.269 18:18:53 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
00:32:04.269 18:18:53 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
00:32:04.269 18:18:53 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:32:04.269 18:18:53 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:32:04.269 18:18:53 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:32:04.269 18:18:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:32:04.269 18:18:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:32:06.812 18:18:55 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:32:06.812
00:32:06.812 real 0m16.127s
00:32:06.812 user 0m45.377s
00:32:06.812 sys 0m7.814s
00:32:06.812 18:18:55 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:32:06.812 18:18:55 -- common/autotest_common.sh@10 -- # set +x
00:32:06.812 ************************************
00:32:06.812 END TEST nvmf_target_disconnect
00:32:06.812 ************************************
00:32:06.812 18:18:55 -- nvmf/nvmf.sh@123 -- # timing_exit host
00:32:06.812 18:18:55 -- common/autotest_common.sh@716 -- # xtrace_disable
00:32:06.812 18:18:55 -- common/autotest_common.sh@10 -- # set +x
00:32:06.812 18:18:55 -- nvmf/nvmf.sh@125 -- # trap - SIGINT SIGTERM EXIT
00:32:06.812
00:32:06.812 real 24m7.265s
00:32:06.812 user 66m8.314s
00:32:06.812 sys 6m10.570s
00:32:06.812 18:18:55 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:32:06.812 18:18:55 -- common/autotest_common.sh@10 -- # set +x
00:32:06.812 ************************************
00:32:06.812 END TEST nvmf_tcp
00:32:06.812 ************************************
00:32:06.812 18:18:55 -- spdk/autotest.sh@286 -- # [[ 0 -eq 0 ]]
00:32:06.812 18:18:55 -- spdk/autotest.sh@287 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp
00:32:06.812 18:18:55 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:32:06.812 18:18:55 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:32:06.812 18:18:55 -- common/autotest_common.sh@10 -- # set +x
00:32:06.812 ************************************
00:32:06.812 START TEST spdkcli_nvmf_tcp
00:32:06.812 ************************************
00:32:06.812 18:18:55 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp
00:32:06.812 * Looking for test storage...
00:32:06.812 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:32:06.812 18:18:55 -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:32:06.812 18:18:55 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:32:06.812 18:18:55 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:32:06.812 18:18:55 -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:06.812 18:18:55 -- nvmf/common.sh@7 -- # uname -s 00:32:06.812 18:18:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:06.812 18:18:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:06.812 18:18:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:06.812 18:18:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:06.812 18:18:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:06.812 18:18:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:06.812 18:18:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:06.812 18:18:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:06.812 18:18:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:06.812 18:18:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:06.812 18:18:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:32:06.812 18:18:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:32:06.812 18:18:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:06.812 18:18:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:06.812 18:18:55 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:06.812 18:18:55 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:06.812 18:18:55 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:06.812 18:18:55 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:06.812 18:18:55 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:06.812 18:18:55 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:06.812 18:18:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:06.812 18:18:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:06.812 18:18:55 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:06.812 18:18:55 -- paths/export.sh@5 -- # export PATH 00:32:06.812 18:18:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:06.812 18:18:55 -- nvmf/common.sh@47 -- # : 0 00:32:06.812 18:18:55 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:06.812 18:18:55 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:06.812 18:18:55 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:06.812 18:18:55 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:06.812 18:18:55 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:06.812 18:18:55 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:06.812 18:18:55 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:06.812 18:18:55 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:06.812 18:18:55 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:32:06.812 18:18:55 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:32:06.812 18:18:55 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:32:06.812 18:18:55 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:32:06.812 18:18:55 -- common/autotest_common.sh@710 -- # xtrace_disable 00:32:06.812 18:18:55 -- common/autotest_common.sh@10 -- # set +x 00:32:06.812 18:18:55 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:32:06.812 18:18:55 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3464045 00:32:06.812 18:18:55 -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:32:06.812 18:18:55 -- spdkcli/common.sh@34 -- # waitforlisten 3464045 00:32:06.812 18:18:55 -- common/autotest_common.sh@817 -- # '[' -z 3464045 ']' 00:32:06.812 18:18:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:06.812 18:18:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:32:06.812 18:18:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:06.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:06.812 18:18:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:32:06.812 18:18:55 -- common/autotest_common.sh@10 -- # set +x 00:32:06.812 [2024-04-15 18:18:55.590905] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
00:32:06.812 [2024-04-15 18:18:55.590995] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3464045 ] 00:32:06.812 EAL: No free 2048 kB hugepages reported on node 1 00:32:06.812 [2024-04-15 18:18:55.664668] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:06.812 [2024-04-15 18:18:55.762511] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:06.812 [2024-04-15 18:18:55.762516] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:07.382 18:18:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:32:07.382 18:18:56 -- common/autotest_common.sh@850 -- # return 0 00:32:07.382 18:18:56 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:32:07.382 18:18:56 -- common/autotest_common.sh@716 -- # xtrace_disable 00:32:07.382 18:18:56 -- common/autotest_common.sh@10 -- # set +x 00:32:07.382 18:18:56 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:32:07.382 18:18:56 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:32:07.382 18:18:56 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:32:07.382 18:18:56 -- common/autotest_common.sh@710 -- # xtrace_disable 00:32:07.382 18:18:56 -- common/autotest_common.sh@10 -- # set +x 00:32:07.382 18:18:56 -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:32:07.382 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:32:07.382 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:32:07.382 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:32:07.382 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:32:07.382 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:32:07.382 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:32:07.382 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:32:07.382 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:32:07.382 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:32:07.382 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:07.382 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:07.382 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:32:07.382 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:07.382 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:07.382 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:32:07.382 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:07.382 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' 
'\''127.0.0.1:4261'\'' True 00:32:07.382 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:32:07.382 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:07.382 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:32:07.382 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:32:07.382 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:32:07.382 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:32:07.382 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:07.382 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:32:07.382 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:32:07.382 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:32:07.382 ' 00:32:07.663 [2024-04-15 18:18:56.559419] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:32:10.244 [2024-04-15 18:18:58.745300] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:11.184 [2024-04-15 18:18:59.985677] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:32:13.724 [2024-04-15 18:19:02.284905] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:32:15.631 [2024-04-15 18:19:04.262997] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:32:17.011 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:32:17.011 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:32:17.011 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:32:17.011 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:32:17.011 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:32:17.011 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:32:17.011 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:32:17.011 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:32:17.011 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:32:17.011 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:32:17.011 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:32:17.011 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:17.011 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:32:17.011 
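[Editor's note: spdkcli is a thin front end over SPDK's JSON-RPC interface, so the configuration traced above (and echoed by the "Executing command" records around this point, which continue below) can also be reproduced with raw RPCs. A rough equivalent for the first subsystem only, as a sketch: it assumes scripts/rpc.py from the SPDK tree and the default /var/tmp/spdk.sock socket, and flag spellings can vary slightly across SPDK versions; the names and values themselves are copied from the trace.

  # Back the namespace with the same 32 MiB, 512-byte-block malloc bdev
  scripts/rpc.py bdev_malloc_create 32 512 -b Malloc3
  # TCP transport with the traced limits
  scripts/rpc.py nvmf_create_transport --trtype tcp --max-io-qpairs-per-ctrlr 4 --io-unit-size 8192
  # Subsystem cnode1 with the traced serial number, namespace cap and open host access
  scripts/rpc.py nvmf_create_subsystem nqn.2014-08.org.spdk:cnode1 --serial-number N37SXV509SRW --max-namespaces 4 --allow-any-host
  # Attach the bdev as namespace 1 and listen on 127.0.0.1:4260
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2014-08.org.spdk:cnode1 Malloc3 --nsid 1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.spdk:cnode1 --trtype tcp --traddr 127.0.0.1 --trsvcid 4260 --adrfam ipv4 ]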
Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:32:17.011 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:17.011 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:32:17.011 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:32:17.011 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:32:17.011 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:32:17.011 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:17.011 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:32:17.011 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:32:17.011 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:32:17.011 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:32:17.011 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:17.011 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:32:17.011 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:32:17.011 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:32:17.011 18:19:05 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:32:17.011 18:19:05 -- common/autotest_common.sh@716 -- # xtrace_disable 00:32:17.011 18:19:05 -- common/autotest_common.sh@10 -- # set +x 00:32:17.011 18:19:05 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:32:17.011 18:19:05 -- common/autotest_common.sh@710 -- # xtrace_disable 00:32:17.011 18:19:05 -- common/autotest_common.sh@10 -- # set +x 00:32:17.011 18:19:05 -- spdkcli/nvmf.sh@69 -- # check_match 00:32:17.011 18:19:05 -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:32:17.580 18:19:06 -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:32:17.580 18:19:06 -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:32:17.580 18:19:06 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:32:17.580 18:19:06 -- common/autotest_common.sh@716 -- # xtrace_disable 00:32:17.580 18:19:06 -- common/autotest_common.sh@10 -- # set +x 00:32:17.839 18:19:06 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:32:17.839 18:19:06 -- common/autotest_common.sh@710 -- # xtrace_disable 00:32:17.839 18:19:06 -- common/autotest_common.sh@10 
-- # set +x 00:32:17.839 18:19:06 -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:32:17.839 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:32:17.839 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:32:17.839 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:32:17.839 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:32:17.839 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:32:17.839 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:32:17.839 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:32:17.839 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:32:17.839 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:32:17.839 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:32:17.839 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:32:17.839 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:32:17.839 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:32:17.839 ' 00:32:23.110 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:32:23.110 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:32:23.110 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:32:23.110 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:32:23.110 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:32:23.110 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:32:23.110 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:32:23.110 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:32:23.110 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:32:23.111 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:32:23.111 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:32:23.111 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:32:23.111 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:32:23.111 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:32:23.111 18:19:11 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:32:23.111 18:19:11 -- common/autotest_common.sh@716 -- # xtrace_disable 00:32:23.111 18:19:11 -- common/autotest_common.sh@10 -- # set +x 00:32:23.111 18:19:11 -- spdkcli/nvmf.sh@90 -- # killprocess 3464045 00:32:23.111 18:19:11 -- common/autotest_common.sh@936 -- # '[' -z 3464045 ']' 00:32:23.111 18:19:11 -- common/autotest_common.sh@940 -- # kill -0 3464045 00:32:23.111 18:19:11 -- common/autotest_common.sh@941 -- # uname 00:32:23.111 18:19:11 -- 
common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:32:23.111 18:19:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3464045 00:32:23.111 18:19:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:32:23.111 18:19:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:32:23.111 18:19:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3464045' 00:32:23.111 killing process with pid 3464045 00:32:23.111 18:19:11 -- common/autotest_common.sh@955 -- # kill 3464045 00:32:23.111 [2024-04-15 18:19:11.912679] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:32:23.111 18:19:11 -- common/autotest_common.sh@960 -- # wait 3464045 00:32:23.370 18:19:12 -- spdkcli/nvmf.sh@1 -- # cleanup 00:32:23.370 18:19:12 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:32:23.370 18:19:12 -- spdkcli/common.sh@13 -- # '[' -n 3464045 ']' 00:32:23.370 18:19:12 -- spdkcli/common.sh@14 -- # killprocess 3464045 00:32:23.370 18:19:12 -- common/autotest_common.sh@936 -- # '[' -z 3464045 ']' 00:32:23.370 18:19:12 -- common/autotest_common.sh@940 -- # kill -0 3464045 00:32:23.370 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (3464045) - No such process 00:32:23.370 18:19:12 -- common/autotest_common.sh@963 -- # echo 'Process with pid 3464045 is not found' 00:32:23.370 Process with pid 3464045 is not found 00:32:23.370 18:19:12 -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:32:23.370 18:19:12 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:32:23.370 18:19:12 -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:32:23.370 00:32:23.370 real 0m16.708s 00:32:23.370 user 0m35.824s 00:32:23.370 sys 0m0.908s 00:32:23.370 18:19:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:32:23.370 18:19:12 -- common/autotest_common.sh@10 -- # set +x 00:32:23.370 ************************************ 00:32:23.370 END TEST spdkcli_nvmf_tcp 00:32:23.370 ************************************ 00:32:23.370 18:19:12 -- spdk/autotest.sh@288 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:32:23.370 18:19:12 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:32:23.370 18:19:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:23.370 18:19:12 -- common/autotest_common.sh@10 -- # set +x 00:32:23.370 ************************************ 00:32:23.370 START TEST nvmf_identify_passthru 00:32:23.370 ************************************ 00:32:23.370 18:19:12 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:32:23.628 * Looking for test storage... 
00:32:23.629 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:23.629 18:19:12 -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:23.629 18:19:12 -- nvmf/common.sh@7 -- # uname -s 00:32:23.629 18:19:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:23.629 18:19:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:23.629 18:19:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:23.629 18:19:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:23.629 18:19:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:23.629 18:19:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:23.629 18:19:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:23.629 18:19:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:23.629 18:19:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:23.629 18:19:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:23.629 18:19:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:32:23.629 18:19:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:32:23.629 18:19:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:23.629 18:19:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:23.629 18:19:12 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:23.629 18:19:12 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:23.629 18:19:12 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:23.629 18:19:12 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:23.629 18:19:12 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:23.629 18:19:12 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:23.629 18:19:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:23.629 18:19:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:23.629 18:19:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:23.629 18:19:12 -- paths/export.sh@5 -- # export PATH 00:32:23.629 18:19:12 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:23.629 18:19:12 -- nvmf/common.sh@47 -- # : 0 00:32:23.629 18:19:12 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:23.629 18:19:12 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:23.629 18:19:12 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:23.629 18:19:12 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:23.629 18:19:12 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:23.629 18:19:12 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:23.629 18:19:12 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:23.629 18:19:12 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:23.629 18:19:12 -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:23.629 18:19:12 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:23.629 18:19:12 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:23.629 18:19:12 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:23.629 18:19:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:23.629 18:19:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:23.629 18:19:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:23.629 18:19:12 -- paths/export.sh@5 -- # export PATH 00:32:23.629 18:19:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:23.629 18:19:12 -- 
target/identify_passthru.sh@12 -- # nvmftestinit 00:32:23.629 18:19:12 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:32:23.629 18:19:12 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:23.629 18:19:12 -- nvmf/common.sh@437 -- # prepare_net_devs 00:32:23.629 18:19:12 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:32:23.629 18:19:12 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:32:23.629 18:19:12 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:23.629 18:19:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:23.629 18:19:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:23.629 18:19:12 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:32:23.629 18:19:12 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:32:23.629 18:19:12 -- nvmf/common.sh@285 -- # xtrace_disable 00:32:23.629 18:19:12 -- common/autotest_common.sh@10 -- # set +x 00:32:26.162 18:19:14 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:32:26.162 18:19:14 -- nvmf/common.sh@291 -- # pci_devs=() 00:32:26.162 18:19:14 -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:26.162 18:19:14 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:26.162 18:19:14 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:26.162 18:19:14 -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:26.162 18:19:14 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:26.162 18:19:14 -- nvmf/common.sh@295 -- # net_devs=() 00:32:26.162 18:19:14 -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:26.162 18:19:14 -- nvmf/common.sh@296 -- # e810=() 00:32:26.162 18:19:14 -- nvmf/common.sh@296 -- # local -ga e810 00:32:26.162 18:19:14 -- nvmf/common.sh@297 -- # x722=() 00:32:26.162 18:19:14 -- nvmf/common.sh@297 -- # local -ga x722 00:32:26.162 18:19:14 -- nvmf/common.sh@298 -- # mlx=() 00:32:26.162 18:19:14 -- nvmf/common.sh@298 -- # local -ga mlx 00:32:26.162 18:19:14 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:26.162 18:19:14 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:26.162 18:19:14 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:26.162 18:19:14 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:26.162 18:19:14 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:26.162 18:19:14 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:26.162 18:19:14 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:26.162 18:19:14 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:26.162 18:19:14 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:26.162 18:19:14 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:26.162 18:19:14 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:26.162 18:19:14 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:26.162 18:19:14 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:26.162 18:19:14 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:26.162 18:19:14 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:26.162 18:19:14 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:26.162 18:19:14 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:26.162 18:19:14 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:26.162 18:19:14 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:32:26.162 Found 0000:84:00.0 (0x8086 - 
0x159b) 00:32:26.162 18:19:14 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:26.162 18:19:14 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:26.162 18:19:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:26.162 18:19:14 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:26.162 18:19:14 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:26.162 18:19:14 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:26.162 18:19:14 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:32:26.162 Found 0000:84:00.1 (0x8086 - 0x159b) 00:32:26.162 18:19:14 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:26.162 18:19:14 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:26.162 18:19:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:26.162 18:19:14 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:26.162 18:19:14 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:26.162 18:19:14 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:26.162 18:19:14 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:26.162 18:19:14 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:26.162 18:19:14 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:26.162 18:19:14 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:26.162 18:19:14 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:32:26.162 18:19:14 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:26.162 18:19:14 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:32:26.162 Found net devices under 0000:84:00.0: cvl_0_0 00:32:26.162 18:19:14 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:32:26.162 18:19:14 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:26.162 18:19:14 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:26.162 18:19:14 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:32:26.162 18:19:14 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:26.162 18:19:14 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:32:26.162 Found net devices under 0000:84:00.1: cvl_0_1 00:32:26.162 18:19:14 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:32:26.163 18:19:14 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:32:26.163 18:19:14 -- nvmf/common.sh@403 -- # is_hw=yes 00:32:26.163 18:19:14 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:32:26.163 18:19:14 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:32:26.163 18:19:14 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:32:26.163 18:19:14 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:26.163 18:19:14 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:26.163 18:19:14 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:26.163 18:19:14 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:26.163 18:19:14 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:26.163 18:19:14 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:26.163 18:19:14 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:26.163 18:19:14 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:26.163 18:19:14 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:26.163 18:19:14 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:26.163 18:19:14 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:26.163 18:19:14 -- nvmf/common.sh@248 -- # ip netns add 
cvl_0_0_ns_spdk 00:32:26.163 18:19:14 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:26.163 18:19:14 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:26.163 18:19:14 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:26.163 18:19:14 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:26.163 18:19:14 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:26.163 18:19:14 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:26.163 18:19:14 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:26.163 18:19:14 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:26.163 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:26.163 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.149 ms 00:32:26.163 00:32:26.163 --- 10.0.0.2 ping statistics --- 00:32:26.163 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:26.163 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:32:26.163 18:19:14 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:26.163 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:26.163 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:32:26.163 00:32:26.163 --- 10.0.0.1 ping statistics --- 00:32:26.163 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:26.163 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:32:26.163 18:19:14 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:26.163 18:19:14 -- nvmf/common.sh@411 -- # return 0 00:32:26.163 18:19:14 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:32:26.163 18:19:14 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:26.163 18:19:14 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:32:26.163 18:19:14 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:32:26.163 18:19:14 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:26.163 18:19:14 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:32:26.163 18:19:14 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:32:26.163 18:19:14 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:32:26.163 18:19:14 -- common/autotest_common.sh@710 -- # xtrace_disable 00:32:26.163 18:19:14 -- common/autotest_common.sh@10 -- # set +x 00:32:26.163 18:19:14 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:32:26.163 18:19:14 -- common/autotest_common.sh@1510 -- # bdfs=() 00:32:26.163 18:19:14 -- common/autotest_common.sh@1510 -- # local bdfs 00:32:26.163 18:19:14 -- common/autotest_common.sh@1511 -- # bdfs=($(get_nvme_bdfs)) 00:32:26.163 18:19:14 -- common/autotest_common.sh@1511 -- # get_nvme_bdfs 00:32:26.163 18:19:14 -- common/autotest_common.sh@1499 -- # bdfs=() 00:32:26.163 18:19:14 -- common/autotest_common.sh@1499 -- # local bdfs 00:32:26.163 18:19:14 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:32:26.163 18:19:14 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:32:26.163 18:19:14 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:32:26.163 18:19:14 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:32:26.163 18:19:14 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:82:00.0 00:32:26.163 18:19:14 -- common/autotest_common.sh@1513 -- # echo 0000:82:00.0 00:32:26.163 18:19:14 -- 
target/identify_passthru.sh@16 -- # bdf=0000:82:00.0 00:32:26.163 18:19:14 -- target/identify_passthru.sh@17 -- # '[' -z 0000:82:00.0 ']' 00:32:26.163 18:19:14 -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:82:00.0' -i 0 00:32:26.163 18:19:14 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:32:26.163 18:19:14 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:32:26.163 EAL: No free 2048 kB hugepages reported on node 1 00:32:30.349 18:19:19 -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ9142051K1P0FGN 00:32:30.349 18:19:19 -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:82:00.0' -i 0 00:32:30.349 18:19:19 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:32:30.349 18:19:19 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:32:30.349 EAL: No free 2048 kB hugepages reported on node 1 00:32:34.569 18:19:23 -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:32:34.569 18:19:23 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:32:34.569 18:19:23 -- common/autotest_common.sh@716 -- # xtrace_disable 00:32:34.569 18:19:23 -- common/autotest_common.sh@10 -- # set +x 00:32:34.569 18:19:23 -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:32:34.569 18:19:23 -- common/autotest_common.sh@710 -- # xtrace_disable 00:32:34.569 18:19:23 -- common/autotest_common.sh@10 -- # set +x 00:32:34.569 18:19:23 -- target/identify_passthru.sh@31 -- # nvmfpid=3468643 00:32:34.569 18:19:23 -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:32:34.569 18:19:23 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:34.569 18:19:23 -- target/identify_passthru.sh@35 -- # waitforlisten 3468643 00:32:34.569 18:19:23 -- common/autotest_common.sh@817 -- # '[' -z 3468643 ']' 00:32:34.569 18:19:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:34.569 18:19:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:32:34.569 18:19:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:34.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:34.569 18:19:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:32:34.569 18:19:23 -- common/autotest_common.sh@10 -- # set +x 00:32:34.569 [2024-04-15 18:19:23.424830] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:32:34.569 [2024-04-15 18:19:23.425005] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:34.569 EAL: No free 2048 kB hugepages reported on node 1 00:32:34.829 [2024-04-15 18:19:23.543645] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:34.829 [2024-04-15 18:19:23.635010] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:34.829 [2024-04-15 18:19:23.635084] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:32:34.829 [2024-04-15 18:19:23.635103] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:34.829 [2024-04-15 18:19:23.635118] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:34.829 [2024-04-15 18:19:23.635131] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:34.829 [2024-04-15 18:19:23.635215] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:34.829 [2024-04-15 18:19:23.635270] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:32:34.829 [2024-04-15 18:19:23.635336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:32:34.829 [2024-04-15 18:19:23.635339] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:35.088 18:19:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:32:35.088 18:19:23 -- common/autotest_common.sh@850 -- # return 0 00:32:35.088 18:19:23 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:32:35.088 18:19:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:35.088 18:19:23 -- common/autotest_common.sh@10 -- # set +x 00:32:35.088 INFO: Log level set to 20 00:32:35.088 INFO: Requests: 00:32:35.088 { 00:32:35.088 "jsonrpc": "2.0", 00:32:35.088 "method": "nvmf_set_config", 00:32:35.088 "id": 1, 00:32:35.088 "params": { 00:32:35.088 "admin_cmd_passthru": { 00:32:35.088 "identify_ctrlr": true 00:32:35.088 } 00:32:35.088 } 00:32:35.088 } 00:32:35.088 00:32:35.088 INFO: response: 00:32:35.088 { 00:32:35.088 "jsonrpc": "2.0", 00:32:35.088 "id": 1, 00:32:35.088 "result": true 00:32:35.088 } 00:32:35.088 00:32:35.088 18:19:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:35.088 18:19:23 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:32:35.088 18:19:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:35.088 18:19:23 -- common/autotest_common.sh@10 -- # set +x 00:32:35.088 INFO: Setting log level to 20 00:32:35.088 INFO: Setting log level to 20 00:32:35.088 INFO: Log level set to 20 00:32:35.088 INFO: Log level set to 20 00:32:35.089 INFO: Requests: 00:32:35.089 { 00:32:35.089 "jsonrpc": "2.0", 00:32:35.089 "method": "framework_start_init", 00:32:35.089 "id": 1 00:32:35.089 } 00:32:35.089 00:32:35.089 INFO: Requests: 00:32:35.089 { 00:32:35.089 "jsonrpc": "2.0", 00:32:35.089 "method": "framework_start_init", 00:32:35.089 "id": 1 00:32:35.089 } 00:32:35.089 00:32:35.089 [2024-04-15 18:19:24.009480] nvmf_tgt.c: 453:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:32:35.089 INFO: response: 00:32:35.089 { 00:32:35.089 "jsonrpc": "2.0", 00:32:35.089 "id": 1, 00:32:35.089 "result": true 00:32:35.089 } 00:32:35.089 00:32:35.089 INFO: response: 00:32:35.089 { 00:32:35.089 "jsonrpc": "2.0", 00:32:35.089 "id": 1, 00:32:35.089 "result": true 00:32:35.089 } 00:32:35.089 00:32:35.089 18:19:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:35.089 18:19:24 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:35.089 18:19:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:35.089 18:19:24 -- common/autotest_common.sh@10 -- # set +x 00:32:35.089 INFO: Setting log level to 40 00:32:35.089 INFO: Setting log level to 40 00:32:35.089 INFO: Setting log level to 40 00:32:35.089 [2024-04-15 18:19:24.019681] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:35.089 18:19:24 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:35.089 18:19:24 -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:32:35.089 18:19:24 -- common/autotest_common.sh@716 -- # xtrace_disable 00:32:35.089 18:19:24 -- common/autotest_common.sh@10 -- # set +x 00:32:35.347 18:19:24 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:82:00.0 00:32:35.347 18:19:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:35.347 18:19:24 -- common/autotest_common.sh@10 -- # set +x 00:32:38.636 Nvme0n1 00:32:38.636 18:19:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:38.636 18:19:26 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:32:38.636 18:19:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:38.636 18:19:26 -- common/autotest_common.sh@10 -- # set +x 00:32:38.636 18:19:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:38.636 18:19:26 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:32:38.636 18:19:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:38.636 18:19:26 -- common/autotest_common.sh@10 -- # set +x 00:32:38.636 18:19:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:38.636 18:19:26 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:38.636 18:19:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:38.636 18:19:26 -- common/autotest_common.sh@10 -- # set +x 00:32:38.636 [2024-04-15 18:19:26.921032] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:38.636 18:19:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:38.636 18:19:26 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:32:38.636 18:19:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:38.636 18:19:26 -- common/autotest_common.sh@10 -- # set +x 00:32:38.636 [2024-04-15 18:19:26.928800] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:32:38.636 [ 00:32:38.636 { 00:32:38.636 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:32:38.636 "subtype": "Discovery", 00:32:38.636 "listen_addresses": [], 00:32:38.636 "allow_any_host": true, 00:32:38.636 "hosts": [] 00:32:38.636 }, 00:32:38.636 { 00:32:38.636 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:32:38.636 "subtype": "NVMe", 00:32:38.636 "listen_addresses": [ 00:32:38.636 { 00:32:38.636 "transport": "TCP", 00:32:38.636 "trtype": "TCP", 00:32:38.636 "adrfam": "IPv4", 00:32:38.636 "traddr": "10.0.0.2", 00:32:38.636 "trsvcid": "4420" 00:32:38.636 } 00:32:38.636 ], 00:32:38.636 "allow_any_host": true, 00:32:38.636 "hosts": [], 00:32:38.636 "serial_number": "SPDK00000000000001", 00:32:38.636 "model_number": "SPDK bdev Controller", 00:32:38.636 "max_namespaces": 1, 00:32:38.636 "min_cntlid": 1, 00:32:38.636 "max_cntlid": 65519, 00:32:38.636 "namespaces": [ 00:32:38.636 { 00:32:38.636 "nsid": 1, 00:32:38.636 "bdev_name": "Nvme0n1", 00:32:38.636 "name": "Nvme0n1", 00:32:38.636 "nguid": "7B60C214D6DF4F1C8EB22318362AA2F0", 00:32:38.636 "uuid": "7b60c214-d6df-4f1c-8eb2-2318362aa2f0" 00:32:38.636 } 00:32:38.636 ] 00:32:38.636 } 00:32:38.636 ] 00:32:38.636 18:19:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:38.636 18:19:26 -- 
target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:32:38.636 18:19:26 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:32:38.636 18:19:26 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:32:38.636 EAL: No free 2048 kB hugepages reported on node 1 00:32:38.636 18:19:27 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ9142051K1P0FGN 00:32:38.636 18:19:27 -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:32:38.636 18:19:27 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:32:38.636 18:19:27 -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:32:38.636 EAL: No free 2048 kB hugepages reported on node 1 00:32:38.636 18:19:27 -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:32:38.636 18:19:27 -- target/identify_passthru.sh@63 -- # '[' BTLJ9142051K1P0FGN '!=' BTLJ9142051K1P0FGN ']' 00:32:38.636 18:19:27 -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:32:38.636 18:19:27 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:38.636 18:19:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:38.636 18:19:27 -- common/autotest_common.sh@10 -- # set +x 00:32:38.636 18:19:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:38.636 18:19:27 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:32:38.636 18:19:27 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:32:38.636 18:19:27 -- nvmf/common.sh@477 -- # nvmfcleanup 00:32:38.636 18:19:27 -- nvmf/common.sh@117 -- # sync 00:32:38.636 18:19:27 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:38.636 18:19:27 -- nvmf/common.sh@120 -- # set +e 00:32:38.636 18:19:27 -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:38.636 18:19:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:38.636 rmmod nvme_tcp 00:32:38.636 rmmod nvme_fabrics 00:32:38.636 rmmod nvme_keyring 00:32:38.636 18:19:27 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:38.636 18:19:27 -- nvmf/common.sh@124 -- # set -e 00:32:38.636 18:19:27 -- nvmf/common.sh@125 -- # return 0 00:32:38.636 18:19:27 -- nvmf/common.sh@478 -- # '[' -n 3468643 ']' 00:32:38.636 18:19:27 -- nvmf/common.sh@479 -- # killprocess 3468643 00:32:38.636 18:19:27 -- common/autotest_common.sh@936 -- # '[' -z 3468643 ']' 00:32:38.636 18:19:27 -- common/autotest_common.sh@940 -- # kill -0 3468643 00:32:38.636 18:19:27 -- common/autotest_common.sh@941 -- # uname 00:32:38.636 18:19:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:32:38.636 18:19:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3468643 00:32:38.636 18:19:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:32:38.636 18:19:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:32:38.636 18:19:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3468643' 00:32:38.636 killing process with pid 3468643 00:32:38.636 18:19:27 -- common/autotest_common.sh@955 -- # kill 3468643 00:32:38.636 [2024-04-15 18:19:27.539438] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 
hit 1 times 00:32:38.636 18:19:27 -- common/autotest_common.sh@960 -- # wait 3468643 00:32:40.541 18:19:29 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:32:40.541 18:19:29 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:32:40.541 18:19:29 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:32:40.541 18:19:29 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:40.541 18:19:29 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:40.541 18:19:29 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:40.541 18:19:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:40.541 18:19:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:42.443 18:19:31 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:42.443 00:32:42.443 real 0m18.872s 00:32:42.443 user 0m28.674s 00:32:42.443 sys 0m2.770s 00:32:42.443 18:19:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:32:42.443 18:19:31 -- common/autotest_common.sh@10 -- # set +x 00:32:42.444 ************************************ 00:32:42.444 END TEST nvmf_identify_passthru 00:32:42.444 ************************************ 00:32:42.444 18:19:31 -- spdk/autotest.sh@290 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:32:42.444 18:19:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:32:42.444 18:19:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:42.444 18:19:31 -- common/autotest_common.sh@10 -- # set +x 00:32:42.444 ************************************ 00:32:42.444 START TEST nvmf_dif 00:32:42.444 ************************************ 00:32:42.444 18:19:31 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:32:42.444 * Looking for test storage... 
00:32:42.444 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:42.444 18:19:31 -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:42.444 18:19:31 -- nvmf/common.sh@7 -- # uname -s 00:32:42.444 18:19:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:42.444 18:19:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:42.444 18:19:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:42.444 18:19:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:42.444 18:19:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:42.444 18:19:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:42.444 18:19:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:42.444 18:19:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:42.444 18:19:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:42.444 18:19:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:42.444 18:19:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:32:42.444 18:19:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:32:42.444 18:19:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:42.444 18:19:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:42.444 18:19:31 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:42.444 18:19:31 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:42.444 18:19:31 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:42.444 18:19:31 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:42.444 18:19:31 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:42.444 18:19:31 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:42.703 18:19:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:42.703 18:19:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:42.703 18:19:31 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:42.703 18:19:31 -- paths/export.sh@5 -- # export PATH 00:32:42.703 18:19:31 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:42.703 18:19:31 -- nvmf/common.sh@47 -- # : 0 00:32:42.703 18:19:31 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:42.703 18:19:31 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:42.703 18:19:31 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:42.703 18:19:31 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:42.703 18:19:31 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:42.703 18:19:31 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:42.703 18:19:31 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:42.703 18:19:31 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:42.703 18:19:31 -- target/dif.sh@15 -- # NULL_META=16 00:32:42.703 18:19:31 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:32:42.703 18:19:31 -- target/dif.sh@15 -- # NULL_SIZE=64 00:32:42.703 18:19:31 -- target/dif.sh@15 -- # NULL_DIF=1 00:32:42.703 18:19:31 -- target/dif.sh@135 -- # nvmftestinit 00:32:42.703 18:19:31 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:32:42.703 18:19:31 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:42.703 18:19:31 -- nvmf/common.sh@437 -- # prepare_net_devs 00:32:42.703 18:19:31 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:32:42.703 18:19:31 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:32:42.703 18:19:31 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:42.703 18:19:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:42.703 18:19:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:42.703 18:19:31 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:32:42.703 18:19:31 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:32:42.703 18:19:31 -- nvmf/common.sh@285 -- # xtrace_disable 00:32:42.703 18:19:31 -- common/autotest_common.sh@10 -- # set +x 00:32:44.612 18:19:33 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:32:44.612 18:19:33 -- nvmf/common.sh@291 -- # pci_devs=() 00:32:44.612 18:19:33 -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:44.612 18:19:33 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:44.612 18:19:33 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:44.612 18:19:33 -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:44.612 18:19:33 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:44.612 18:19:33 -- nvmf/common.sh@295 -- # net_devs=() 00:32:44.612 18:19:33 -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:44.612 18:19:33 -- nvmf/common.sh@296 -- # e810=() 00:32:44.612 18:19:33 -- nvmf/common.sh@296 -- # local -ga e810 00:32:44.612 18:19:33 -- nvmf/common.sh@297 -- # x722=() 00:32:44.612 18:19:33 -- nvmf/common.sh@297 -- # local -ga x722 00:32:44.612 18:19:33 -- nvmf/common.sh@298 -- # mlx=() 00:32:44.612 18:19:33 -- nvmf/common.sh@298 -- # local -ga mlx 00:32:44.612 18:19:33 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:44.612 18:19:33 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:44.612 18:19:33 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:44.612 18:19:33 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 
00:32:44.612 18:19:33 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:44.612 18:19:33 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:44.612 18:19:33 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:44.612 18:19:33 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:44.612 18:19:33 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:44.612 18:19:33 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:44.612 18:19:33 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:44.612 18:19:33 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:44.612 18:19:33 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:44.612 18:19:33 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:44.612 18:19:33 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:44.612 18:19:33 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:44.612 18:19:33 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:44.612 18:19:33 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:44.612 18:19:33 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:32:44.612 Found 0000:84:00.0 (0x8086 - 0x159b) 00:32:44.612 18:19:33 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:44.612 18:19:33 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:44.612 18:19:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:44.612 18:19:33 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:44.612 18:19:33 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:44.612 18:19:33 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:44.612 18:19:33 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:32:44.612 Found 0000:84:00.1 (0x8086 - 0x159b) 00:32:44.612 18:19:33 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:44.612 18:19:33 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:44.612 18:19:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:44.612 18:19:33 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:44.612 18:19:33 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:44.612 18:19:33 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:44.612 18:19:33 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:44.612 18:19:33 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:44.612 18:19:33 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:44.612 18:19:33 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:44.612 18:19:33 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:32:44.612 18:19:33 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:44.612 18:19:33 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:32:44.612 Found net devices under 0000:84:00.0: cvl_0_0 00:32:44.612 18:19:33 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:32:44.612 18:19:33 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:44.612 18:19:33 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:44.612 18:19:33 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:32:44.612 18:19:33 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:44.612 18:19:33 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:32:44.612 Found net devices under 0000:84:00.1: cvl_0_1 00:32:44.612 18:19:33 -- nvmf/common.sh@390 -- # 
net_devs+=("${pci_net_devs[@]}") 00:32:44.612 18:19:33 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:32:44.612 18:19:33 -- nvmf/common.sh@403 -- # is_hw=yes 00:32:44.612 18:19:33 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:32:44.612 18:19:33 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:32:44.612 18:19:33 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:32:44.612 18:19:33 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:44.612 18:19:33 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:44.612 18:19:33 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:44.612 18:19:33 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:44.612 18:19:33 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:44.612 18:19:33 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:44.612 18:19:33 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:44.612 18:19:33 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:44.612 18:19:33 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:44.612 18:19:33 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:44.612 18:19:33 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:44.612 18:19:33 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:44.612 18:19:33 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:44.870 18:19:33 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:44.870 18:19:33 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:44.870 18:19:33 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:44.870 18:19:33 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:44.870 18:19:33 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:44.870 18:19:33 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:44.870 18:19:33 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:44.870 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:44.870 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.226 ms 00:32:44.870 00:32:44.870 --- 10.0.0.2 ping statistics --- 00:32:44.870 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:44.870 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:32:44.870 18:19:33 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:44.870 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:44.870 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:32:44.870 00:32:44.870 --- 10.0.0.1 ping statistics --- 00:32:44.870 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:44.870 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:32:44.870 18:19:33 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:44.870 18:19:33 -- nvmf/common.sh@411 -- # return 0 00:32:44.870 18:19:33 -- nvmf/common.sh@439 -- # '[' iso == iso ']' 00:32:44.870 18:19:33 -- nvmf/common.sh@440 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:46.247 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:32:46.247 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 00:32:46.247 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:32:46.247 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:32:46.247 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:32:46.247 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:32:46.247 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:32:46.247 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:32:46.247 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:32:46.247 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:32:46.247 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:32:46.247 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:32:46.247 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:32:46.247 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:32:46.247 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:32:46.247 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:32:46.247 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:32:46.247 18:19:35 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:46.247 18:19:35 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:32:46.247 18:19:35 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:32:46.247 18:19:35 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:46.247 18:19:35 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:32:46.247 18:19:35 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:32:46.247 18:19:35 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:32:46.247 18:19:35 -- target/dif.sh@137 -- # nvmfappstart 00:32:46.247 18:19:35 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:32:46.247 18:19:35 -- common/autotest_common.sh@710 -- # xtrace_disable 00:32:46.247 18:19:35 -- common/autotest_common.sh@10 -- # set +x 00:32:46.247 18:19:35 -- nvmf/common.sh@470 -- # nvmfpid=3472005 00:32:46.247 18:19:35 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:32:46.247 18:19:35 -- nvmf/common.sh@471 -- # waitforlisten 3472005 00:32:46.247 18:19:35 -- common/autotest_common.sh@817 -- # '[' -z 3472005 ']' 00:32:46.247 18:19:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:46.247 18:19:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:32:46.247 18:19:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:46.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
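For readability, the network bring-up that nvmftestinit traces above reduces to the following sequence. This is a condensed sketch of the commands visible in the log (the interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addresses come from this particular test bed), not a verbatim excerpt of nvmf/common.sh:

    ip netns add cvl_0_0_ns_spdk                                        # isolate the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move one port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address (host side)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address (namespace side)
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP to the listener port
    ping -c 1 10.0.0.2                                                  # sanity-check the path before starting the target

With the namespace in place, nvmfappstart launches the target under it (ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF), which is why the dif.sh target listens on 10.0.0.2 while fio attaches from the host side, as the bdev_nvme_attach_controller JSON further down shows.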
00:32:46.247 18:19:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:32:46.247 18:19:35 -- common/autotest_common.sh@10 -- # set +x 00:32:46.247 [2024-04-15 18:19:35.103779] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:32:46.247 [2024-04-15 18:19:35.103868] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:46.247 EAL: No free 2048 kB hugepages reported on node 1 00:32:46.247 [2024-04-15 18:19:35.183492] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:46.505 [2024-04-15 18:19:35.279319] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:46.505 [2024-04-15 18:19:35.279389] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:46.505 [2024-04-15 18:19:35.279406] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:46.505 [2024-04-15 18:19:35.279421] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:46.505 [2024-04-15 18:19:35.279433] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:46.505 [2024-04-15 18:19:35.279469] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:46.505 18:19:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:32:46.505 18:19:35 -- common/autotest_common.sh@850 -- # return 0 00:32:46.505 18:19:35 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:32:46.505 18:19:35 -- common/autotest_common.sh@716 -- # xtrace_disable 00:32:46.505 18:19:35 -- common/autotest_common.sh@10 -- # set +x 00:32:46.505 18:19:35 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:46.505 18:19:35 -- target/dif.sh@139 -- # create_transport 00:32:46.505 18:19:35 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:32:46.505 18:19:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:46.505 18:19:35 -- common/autotest_common.sh@10 -- # set +x 00:32:46.505 [2024-04-15 18:19:35.429209] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:46.505 18:19:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:46.505 18:19:35 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:32:46.505 18:19:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:32:46.505 18:19:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:46.505 18:19:35 -- common/autotest_common.sh@10 -- # set +x 00:32:46.764 ************************************ 00:32:46.764 START TEST fio_dif_1_default 00:32:46.764 ************************************ 00:32:46.764 18:19:35 -- common/autotest_common.sh@1111 -- # fio_dif_1 00:32:46.764 18:19:35 -- target/dif.sh@86 -- # create_subsystems 0 00:32:46.764 18:19:35 -- target/dif.sh@28 -- # local sub 00:32:46.764 18:19:35 -- target/dif.sh@30 -- # for sub in "$@" 00:32:46.764 18:19:35 -- target/dif.sh@31 -- # create_subsystem 0 00:32:46.764 18:19:35 -- target/dif.sh@18 -- # local sub_id=0 00:32:46.764 18:19:35 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:32:46.764 18:19:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:46.764 18:19:35 -- common/autotest_common.sh@10 -- # set +x 00:32:46.764 
bdev_null0 00:32:46.764 18:19:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:46.764 18:19:35 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:46.764 18:19:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:46.764 18:19:35 -- common/autotest_common.sh@10 -- # set +x 00:32:46.764 18:19:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:46.764 18:19:35 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:46.764 18:19:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:46.764 18:19:35 -- common/autotest_common.sh@10 -- # set +x 00:32:46.764 18:19:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:46.764 18:19:35 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:46.764 18:19:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:46.764 18:19:35 -- common/autotest_common.sh@10 -- # set +x 00:32:46.764 [2024-04-15 18:19:35.553675] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:46.764 18:19:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:46.764 18:19:35 -- target/dif.sh@87 -- # fio /dev/fd/62 00:32:46.764 18:19:35 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:32:46.764 18:19:35 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:32:46.764 18:19:35 -- nvmf/common.sh@521 -- # config=() 00:32:46.764 18:19:35 -- nvmf/common.sh@521 -- # local subsystem config 00:32:46.764 18:19:35 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:32:46.765 18:19:35 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:46.765 18:19:35 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:32:46.765 { 00:32:46.765 "params": { 00:32:46.765 "name": "Nvme$subsystem", 00:32:46.765 "trtype": "$TEST_TRANSPORT", 00:32:46.765 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:46.765 "adrfam": "ipv4", 00:32:46.765 "trsvcid": "$NVMF_PORT", 00:32:46.765 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:46.765 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:46.765 "hdgst": ${hdgst:-false}, 00:32:46.765 "ddgst": ${ddgst:-false} 00:32:46.765 }, 00:32:46.765 "method": "bdev_nvme_attach_controller" 00:32:46.765 } 00:32:46.765 EOF 00:32:46.765 )") 00:32:46.765 18:19:35 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:46.765 18:19:35 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:32:46.765 18:19:35 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:46.765 18:19:35 -- common/autotest_common.sh@1325 -- # local sanitizers 00:32:46.765 18:19:35 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:46.765 18:19:35 -- common/autotest_common.sh@1327 -- # shift 00:32:46.765 18:19:35 -- target/dif.sh@82 -- # gen_fio_conf 00:32:46.765 18:19:35 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:32:46.765 18:19:35 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:32:46.765 18:19:35 -- target/dif.sh@54 -- # local file 00:32:46.765 18:19:35 -- target/dif.sh@56 -- # cat 00:32:46.765 18:19:35 -- nvmf/common.sh@543 -- # cat 00:32:46.765 18:19:35 -- common/autotest_common.sh@1331 -- # 
ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:46.765 18:19:35 -- common/autotest_common.sh@1331 -- # grep libasan 00:32:46.765 18:19:35 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:32:46.765 18:19:35 -- target/dif.sh@72 -- # (( file = 1 )) 00:32:46.765 18:19:35 -- target/dif.sh@72 -- # (( file <= files )) 00:32:46.765 18:19:35 -- nvmf/common.sh@545 -- # jq . 00:32:46.765 18:19:35 -- nvmf/common.sh@546 -- # IFS=, 00:32:46.765 18:19:35 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:32:46.765 "params": { 00:32:46.765 "name": "Nvme0", 00:32:46.765 "trtype": "tcp", 00:32:46.765 "traddr": "10.0.0.2", 00:32:46.765 "adrfam": "ipv4", 00:32:46.765 "trsvcid": "4420", 00:32:46.765 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:46.765 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:46.765 "hdgst": false, 00:32:46.765 "ddgst": false 00:32:46.765 }, 00:32:46.765 "method": "bdev_nvme_attach_controller" 00:32:46.765 }' 00:32:46.765 18:19:35 -- common/autotest_common.sh@1331 -- # asan_lib= 00:32:46.765 18:19:35 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:32:46.765 18:19:35 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:32:46.765 18:19:35 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:46.765 18:19:35 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:32:46.765 18:19:35 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:32:46.765 18:19:35 -- common/autotest_common.sh@1331 -- # asan_lib= 00:32:46.765 18:19:35 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:32:46.765 18:19:35 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:46.765 18:19:35 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:47.023 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:32:47.023 fio-3.35 00:32:47.023 Starting 1 thread 00:32:47.023 EAL: No free 2048 kB hugepages reported on node 1 00:32:47.281 [2024-04-15 18:19:36.193175] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
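[Annotation] The pair of rpc.c *ERROR* records around this point comes from the fio bdev plugin trying to open its own JSON-RPC listener on /var/tmp/spdk.sock, a socket the long-running nvmf target already owns; the workload starts and completes normally afterwards, so in this harness they read as expected noise rather than failures. The invocation itself is visible assembling above: fio_bdev forwards to fio_plugin (autotest_common.sh@1342), which resolves any sanitizer runtime, builds LD_PRELOAD, and runs /usr/src/fio/fio. A minimal standalone equivalent, with the plugin path taken from the log and target.json/job.fio as illustrative placeholders for the /dev/fd/62 and /dev/fd/61 process substitutions, might look like:

# Preload the SPDK bdev engine and hand fio a JSON config naming the
# controllers to attach (stands in for /dev/fd/62) plus a job file
# (stands in for /dev/fd/61).
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
LD_PRELOAD="$plugin" /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf target.json job.fio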
00:32:47.281 [2024-04-15 18:19:36.193257] rpc.c: 167:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:32:59.488 00:32:59.488 filename0: (groupid=0, jobs=1): err= 0: pid=3472220: Mon Apr 15 18:19:46 2024 00:32:59.488 read: IOPS=96, BW=385KiB/s (395kB/s)(3856KiB/10003msec) 00:32:59.488 slat (nsec): min=4664, max=61012, avg=10254.05, stdev=3758.30 00:32:59.488 clat (usec): min=40854, max=47187, avg=41471.05, stdev=618.79 00:32:59.488 lat (usec): min=40873, max=47202, avg=41481.30, stdev=618.72 00:32:59.488 clat percentiles (usec): 00:32:59.488 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:32:59.488 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:32:59.488 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:32:59.488 | 99.00th=[42206], 99.50th=[42730], 99.90th=[47449], 99.95th=[47449], 00:32:59.488 | 99.99th=[47449] 00:32:59.488 bw ( KiB/s): min= 352, max= 416, per=99.62%, avg=384.00, stdev=10.38, samples=20 00:32:59.488 iops : min= 88, max= 104, avg=96.00, stdev= 2.60, samples=20 00:32:59.488 lat (msec) : 50=100.00% 00:32:59.488 cpu : usr=90.52%, sys=9.20%, ctx=14, majf=0, minf=222 00:32:59.488 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:59.488 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:59.488 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:59.488 issued rwts: total=964,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:59.488 latency : target=0, window=0, percentile=100.00%, depth=4 00:32:59.488 00:32:59.488 Run status group 0 (all jobs): 00:32:59.488 READ: bw=385KiB/s (395kB/s), 385KiB/s-385KiB/s (395kB/s-395kB/s), io=3856KiB (3949kB), run=10003-10003msec 00:32:59.488 18:19:46 -- target/dif.sh@88 -- # destroy_subsystems 0 00:32:59.488 18:19:46 -- target/dif.sh@43 -- # local sub 00:32:59.488 18:19:46 -- target/dif.sh@45 -- # for sub in "$@" 00:32:59.488 18:19:46 -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:59.488 18:19:46 -- target/dif.sh@36 -- # local sub_id=0 00:32:59.488 18:19:46 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:59.488 18:19:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:59.488 18:19:46 -- common/autotest_common.sh@10 -- # set +x 00:32:59.488 18:19:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:59.488 18:19:46 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:59.488 18:19:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:59.488 18:19:46 -- common/autotest_common.sh@10 -- # set +x 00:32:59.488 18:19:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:59.488 00:32:59.488 real 0m11.008s 00:32:59.488 user 0m10.102s 00:32:59.488 sys 0m1.176s 00:32:59.488 18:19:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:32:59.488 18:19:46 -- common/autotest_common.sh@10 -- # set +x 00:32:59.488 ************************************ 00:32:59.488 END TEST fio_dif_1_default 00:32:59.488 ************************************ 00:32:59.488 18:19:46 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:32:59.488 18:19:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:32:59.488 18:19:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:59.488 18:19:46 -- common/autotest_common.sh@10 -- # set +x 00:32:59.488 ************************************ 00:32:59.488 START TEST fio_dif_1_multi_subsystems 00:32:59.488 
************************************ 00:32:59.488 18:19:46 -- common/autotest_common.sh@1111 -- # fio_dif_1_multi_subsystems 00:32:59.488 18:19:46 -- target/dif.sh@92 -- # local files=1 00:32:59.488 18:19:46 -- target/dif.sh@94 -- # create_subsystems 0 1 00:32:59.488 18:19:46 -- target/dif.sh@28 -- # local sub 00:32:59.488 18:19:46 -- target/dif.sh@30 -- # for sub in "$@" 00:32:59.488 18:19:46 -- target/dif.sh@31 -- # create_subsystem 0 00:32:59.488 18:19:46 -- target/dif.sh@18 -- # local sub_id=0 00:32:59.488 18:19:46 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:32:59.488 18:19:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:59.488 18:19:46 -- common/autotest_common.sh@10 -- # set +x 00:32:59.488 bdev_null0 00:32:59.488 18:19:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:59.488 18:19:46 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:59.488 18:19:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:59.488 18:19:46 -- common/autotest_common.sh@10 -- # set +x 00:32:59.488 18:19:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:59.488 18:19:46 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:59.488 18:19:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:59.488 18:19:46 -- common/autotest_common.sh@10 -- # set +x 00:32:59.488 18:19:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:59.488 18:19:46 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:59.488 18:19:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:59.488 18:19:46 -- common/autotest_common.sh@10 -- # set +x 00:32:59.488 [2024-04-15 18:19:46.702280] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:59.489 18:19:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:59.489 18:19:46 -- target/dif.sh@30 -- # for sub in "$@" 00:32:59.489 18:19:46 -- target/dif.sh@31 -- # create_subsystem 1 00:32:59.489 18:19:46 -- target/dif.sh@18 -- # local sub_id=1 00:32:59.489 18:19:46 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:32:59.489 18:19:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:59.489 18:19:46 -- common/autotest_common.sh@10 -- # set +x 00:32:59.489 bdev_null1 00:32:59.489 18:19:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:59.489 18:19:46 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:32:59.489 18:19:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:59.489 18:19:46 -- common/autotest_common.sh@10 -- # set +x 00:32:59.489 18:19:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:59.489 18:19:46 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:32:59.489 18:19:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:59.489 18:19:46 -- common/autotest_common.sh@10 -- # set +x 00:32:59.489 18:19:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:59.489 18:19:46 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:59.489 18:19:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:59.489 18:19:46 -- common/autotest_common.sh@10 -- # set +x 
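[Annotation] The create_subsystem helper traced above is a four-step recipe repeated per target id. With the harness wrappers stripped away (rpc_cmd forwards to SPDK's scripts/rpc.py; treating that as given here), the sequence for subsystem 0 reduces to the sketch below, with exactly the sizes and flags from the trace: a 64 MB null bdev, 512-byte blocks, 16 bytes of per-block metadata, DIF type 1.

# One null-bdev-backed NVMe-oF subsystem with a TCP listener.
sub=0
rpc.py bdev_null_create "bdev_null$sub" 64 512 --md-size 16 --dif-type 1
rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$sub" \
    --serial-number "53313233-$sub" --allow-any-host
rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$sub" "bdev_null$sub"
rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$sub" \
    -t tcp -a 10.0.0.2 -s 4420

The tcp.c NOTICE seen after the cnode0 listener call ("NVMe/TCP Target Listening on 10.0.0.2 port 4420") is the target confirming the last step took effect.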
00:32:59.489 18:19:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:59.489 18:19:46 -- target/dif.sh@95 -- # fio /dev/fd/62 00:32:59.489 18:19:46 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:32:59.489 18:19:46 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:32:59.489 18:19:46 -- nvmf/common.sh@521 -- # config=() 00:32:59.489 18:19:46 -- nvmf/common.sh@521 -- # local subsystem config 00:32:59.489 18:19:46 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:32:59.489 18:19:46 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:59.489 18:19:46 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:32:59.489 { 00:32:59.489 "params": { 00:32:59.489 "name": "Nvme$subsystem", 00:32:59.489 "trtype": "$TEST_TRANSPORT", 00:32:59.489 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:59.489 "adrfam": "ipv4", 00:32:59.489 "trsvcid": "$NVMF_PORT", 00:32:59.489 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:59.489 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:59.489 "hdgst": ${hdgst:-false}, 00:32:59.489 "ddgst": ${ddgst:-false} 00:32:59.489 }, 00:32:59.489 "method": "bdev_nvme_attach_controller" 00:32:59.489 } 00:32:59.489 EOF 00:32:59.489 )") 00:32:59.489 18:19:46 -- target/dif.sh@82 -- # gen_fio_conf 00:32:59.489 18:19:46 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:59.489 18:19:46 -- target/dif.sh@54 -- # local file 00:32:59.489 18:19:46 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:32:59.489 18:19:46 -- target/dif.sh@56 -- # cat 00:32:59.489 18:19:46 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:59.489 18:19:46 -- common/autotest_common.sh@1325 -- # local sanitizers 00:32:59.489 18:19:46 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:59.489 18:19:46 -- common/autotest_common.sh@1327 -- # shift 00:32:59.489 18:19:46 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:32:59.489 18:19:46 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:32:59.489 18:19:46 -- nvmf/common.sh@543 -- # cat 00:32:59.489 18:19:46 -- target/dif.sh@72 -- # (( file = 1 )) 00:32:59.489 18:19:46 -- target/dif.sh@72 -- # (( file <= files )) 00:32:59.489 18:19:46 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:59.489 18:19:46 -- target/dif.sh@73 -- # cat 00:32:59.489 18:19:46 -- common/autotest_common.sh@1331 -- # grep libasan 00:32:59.489 18:19:46 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:32:59.489 18:19:46 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:32:59.489 18:19:46 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:32:59.489 { 00:32:59.489 "params": { 00:32:59.489 "name": "Nvme$subsystem", 00:32:59.489 "trtype": "$TEST_TRANSPORT", 00:32:59.489 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:59.489 "adrfam": "ipv4", 00:32:59.489 "trsvcid": "$NVMF_PORT", 00:32:59.489 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:59.489 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:59.489 "hdgst": ${hdgst:-false}, 00:32:59.489 "ddgst": ${ddgst:-false} 00:32:59.489 }, 00:32:59.489 "method": "bdev_nvme_attach_controller" 00:32:59.489 } 00:32:59.489 EOF 00:32:59.489 )") 00:32:59.489 18:19:46 -- nvmf/common.sh@543 -- # cat 00:32:59.489 
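[Annotation] gen_nvmf_target_json, whose trace this is, builds one bdev_nvme_attach_controller object per subsystem id from the heredoc template shown, joins the array on commas (nvmf/common.sh@546-547), and runs the result through jq (@545). A condensed re-creation of the pattern follows; TEST_TRANSPORT, NVMF_FIRST_TARGET_IP and NVMF_PORT are assumed to be exported by the test setup, and the outer "subsystems" wrapper is SPDK's standard JSON config shape rather than something this trace shows directly:

config=()
for subsystem in 0 1; do
  # The unquoted heredoc lets $TEST_TRANSPORT etc. expand at capture time;
  # ${hdgst:-false} and ${ddgst:-false} default the digest switches off.
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
  )")
done
# Comma-join the objects and embed them in a bdev-subsystem config
# document; jq validates and pretty-prints the result.
jq . <<JSON
{ "subsystems": [ { "subsystem": "bdev",
    "config": [ $(IFS=,; printf '%s' "${config[*]}") ] } ] }
JSON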
18:19:46 -- target/dif.sh@72 -- # (( file++ )) 00:32:59.489 18:19:46 -- target/dif.sh@72 -- # (( file <= files )) 00:32:59.489 18:19:46 -- nvmf/common.sh@545 -- # jq . 00:32:59.489 18:19:46 -- nvmf/common.sh@546 -- # IFS=, 00:32:59.489 18:19:46 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:32:59.489 "params": { 00:32:59.489 "name": "Nvme0", 00:32:59.489 "trtype": "tcp", 00:32:59.489 "traddr": "10.0.0.2", 00:32:59.489 "adrfam": "ipv4", 00:32:59.489 "trsvcid": "4420", 00:32:59.489 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:59.489 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:59.489 "hdgst": false, 00:32:59.489 "ddgst": false 00:32:59.489 }, 00:32:59.489 "method": "bdev_nvme_attach_controller" 00:32:59.489 },{ 00:32:59.489 "params": { 00:32:59.489 "name": "Nvme1", 00:32:59.489 "trtype": "tcp", 00:32:59.489 "traddr": "10.0.0.2", 00:32:59.489 "adrfam": "ipv4", 00:32:59.489 "trsvcid": "4420", 00:32:59.489 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:59.489 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:59.489 "hdgst": false, 00:32:59.489 "ddgst": false 00:32:59.489 }, 00:32:59.489 "method": "bdev_nvme_attach_controller" 00:32:59.489 }' 00:32:59.489 18:19:46 -- common/autotest_common.sh@1331 -- # asan_lib= 00:32:59.489 18:19:46 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:32:59.489 18:19:46 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:32:59.489 18:19:46 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:59.489 18:19:46 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:32:59.489 18:19:46 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:32:59.489 18:19:46 -- common/autotest_common.sh@1331 -- # asan_lib= 00:32:59.489 18:19:46 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:32:59.489 18:19:46 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:59.489 18:19:46 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:59.489 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:32:59.489 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:32:59.489 fio-3.35 00:32:59.489 Starting 2 threads 00:32:59.489 EAL: No free 2048 kB hugepages reported on node 1 00:32:59.489 [2024-04-15 18:19:47.730469] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:32:59.489 [2024-04-15 18:19:47.730601] rpc.c: 167:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:33:09.450 00:33:09.450 filename0: (groupid=0, jobs=1): err= 0: pid=3473575: Mon Apr 15 18:19:57 2024 00:33:09.450 read: IOPS=186, BW=744KiB/s (762kB/s)(7456KiB/10019msec) 00:33:09.450 slat (nsec): min=5789, max=43114, avg=10392.59, stdev=2617.75 00:33:09.450 clat (usec): min=770, max=43413, avg=21466.94, stdev=20517.21 00:33:09.450 lat (usec): min=779, max=43428, avg=21477.33, stdev=20516.92 00:33:09.450 clat percentiles (usec): 00:33:09.450 | 1.00th=[ 791], 5.00th=[ 807], 10.00th=[ 816], 20.00th=[ 824], 00:33:09.450 | 30.00th=[ 840], 40.00th=[ 865], 50.00th=[41157], 60.00th=[41157], 00:33:09.450 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:33:09.450 | 99.00th=[42730], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:33:09.450 | 99.99th=[43254] 00:33:09.450 bw ( KiB/s): min= 672, max= 768, per=49.93%, avg=744.00, stdev=34.24, samples=20 00:33:09.450 iops : min= 168, max= 192, avg=186.00, stdev= 8.56, samples=20 00:33:09.450 lat (usec) : 1000=46.62% 00:33:09.450 lat (msec) : 2=3.17%, 50=50.21% 00:33:09.450 cpu : usr=94.94%, sys=4.68%, ctx=29, majf=0, minf=139 00:33:09.450 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:09.450 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:09.450 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:09.450 issued rwts: total=1864,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:09.450 latency : target=0, window=0, percentile=100.00%, depth=4 00:33:09.450 filename1: (groupid=0, jobs=1): err= 0: pid=3473576: Mon Apr 15 18:19:57 2024 00:33:09.450 read: IOPS=186, BW=746KiB/s (764kB/s)(7472KiB/10017msec) 00:33:09.450 slat (nsec): min=8648, max=56467, avg=11240.71, stdev=3094.80 00:33:09.450 clat (usec): min=755, max=42898, avg=21413.72, stdev=20479.55 00:33:09.450 lat (usec): min=764, max=42912, avg=21424.97, stdev=20479.94 00:33:09.450 clat percentiles (usec): 00:33:09.450 | 1.00th=[ 824], 5.00th=[ 832], 10.00th=[ 840], 20.00th=[ 848], 00:33:09.450 | 30.00th=[ 865], 40.00th=[ 898], 50.00th=[40633], 60.00th=[41157], 00:33:09.450 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:33:09.450 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:33:09.450 | 99.99th=[42730] 00:33:09.450 bw ( KiB/s): min= 704, max= 768, per=50.00%, avg=745.60, stdev=31.32, samples=20 00:33:09.450 iops : min= 176, max= 192, avg=186.40, stdev= 7.83, samples=20 00:33:09.450 lat (usec) : 1000=42.34% 00:33:09.450 lat (msec) : 2=7.55%, 50=50.11% 00:33:09.450 cpu : usr=94.28%, sys=5.38%, ctx=13, majf=0, minf=157 00:33:09.450 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:09.450 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:09.450 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:09.450 issued rwts: total=1868,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:09.450 latency : target=0, window=0, percentile=100.00%, depth=4 00:33:09.450 00:33:09.450 Run status group 0 (all jobs): 00:33:09.450 READ: bw=1490KiB/s (1526kB/s), 744KiB/s-746KiB/s (762kB/s-764kB/s), io=14.6MiB (15.3MB), run=10017-10019msec 00:33:09.450 18:19:58 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:33:09.450 18:19:58 -- target/dif.sh@43 -- # local sub 00:33:09.450 18:19:58 -- target/dif.sh@45 -- # for sub in "$@" 00:33:09.450 18:19:58 -- 
target/dif.sh@46 -- # destroy_subsystem 0 00:33:09.450 18:19:58 -- target/dif.sh@36 -- # local sub_id=0 00:33:09.450 18:19:58 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:09.450 18:19:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:09.450 18:19:58 -- common/autotest_common.sh@10 -- # set +x 00:33:09.450 18:19:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:09.450 18:19:58 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:09.450 18:19:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:09.450 18:19:58 -- common/autotest_common.sh@10 -- # set +x 00:33:09.450 18:19:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:09.450 18:19:58 -- target/dif.sh@45 -- # for sub in "$@" 00:33:09.450 18:19:58 -- target/dif.sh@46 -- # destroy_subsystem 1 00:33:09.450 18:19:58 -- target/dif.sh@36 -- # local sub_id=1 00:33:09.450 18:19:58 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:09.450 18:19:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:09.450 18:19:58 -- common/autotest_common.sh@10 -- # set +x 00:33:09.450 18:19:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:09.450 18:19:58 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:33:09.450 18:19:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:09.450 18:19:58 -- common/autotest_common.sh@10 -- # set +x 00:33:09.450 18:19:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:09.450 00:33:09.450 real 0m11.509s 00:33:09.450 user 0m20.421s 00:33:09.450 sys 0m1.430s 00:33:09.450 18:19:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:33:09.450 18:19:58 -- common/autotest_common.sh@10 -- # set +x 00:33:09.450 ************************************ 00:33:09.450 END TEST fio_dif_1_multi_subsystems 00:33:09.450 ************************************ 00:33:09.450 18:19:58 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:33:09.450 18:19:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:33:09.450 18:19:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:33:09.450 18:19:58 -- common/autotest_common.sh@10 -- # set +x 00:33:09.450 ************************************ 00:33:09.450 START TEST fio_dif_rand_params 00:33:09.450 ************************************ 00:33:09.450 18:19:58 -- common/autotest_common.sh@1111 -- # fio_dif_rand_params 00:33:09.450 18:19:58 -- target/dif.sh@100 -- # local NULL_DIF 00:33:09.450 18:19:58 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:33:09.450 18:19:58 -- target/dif.sh@103 -- # NULL_DIF=3 00:33:09.450 18:19:58 -- target/dif.sh@103 -- # bs=128k 00:33:09.450 18:19:58 -- target/dif.sh@103 -- # numjobs=3 00:33:09.450 18:19:58 -- target/dif.sh@103 -- # iodepth=3 00:33:09.450 18:19:58 -- target/dif.sh@103 -- # runtime=5 00:33:09.450 18:19:58 -- target/dif.sh@105 -- # create_subsystems 0 00:33:09.450 18:19:58 -- target/dif.sh@28 -- # local sub 00:33:09.450 18:19:58 -- target/dif.sh@30 -- # for sub in "$@" 00:33:09.450 18:19:58 -- target/dif.sh@31 -- # create_subsystem 0 00:33:09.450 18:19:58 -- target/dif.sh@18 -- # local sub_id=0 00:33:09.450 18:19:58 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:33:09.450 18:19:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:09.450 18:19:58 -- common/autotest_common.sh@10 -- # set +x 00:33:09.450 bdev_null0 00:33:09.450 18:19:58 -- common/autotest_common.sh@577 -- # [[ 0 
== 0 ]] 00:33:09.450 18:19:58 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:09.450 18:19:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:09.450 18:19:58 -- common/autotest_common.sh@10 -- # set +x 00:33:09.450 18:19:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:09.450 18:19:58 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:09.450 18:19:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:09.450 18:19:58 -- common/autotest_common.sh@10 -- # set +x 00:33:09.450 18:19:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:09.450 18:19:58 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:09.450 18:19:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:09.450 18:19:58 -- common/autotest_common.sh@10 -- # set +x 00:33:09.450 [2024-04-15 18:19:58.373210] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:09.450 18:19:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:09.450 18:19:58 -- target/dif.sh@106 -- # fio /dev/fd/62 00:33:09.450 18:19:58 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:33:09.450 18:19:58 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:09.450 18:19:58 -- nvmf/common.sh@521 -- # config=() 00:33:09.450 18:19:58 -- nvmf/common.sh@521 -- # local subsystem config 00:33:09.450 18:19:58 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:33:09.450 18:19:58 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:09.450 18:19:58 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:33:09.450 { 00:33:09.450 "params": { 00:33:09.450 "name": "Nvme$subsystem", 00:33:09.450 "trtype": "$TEST_TRANSPORT", 00:33:09.450 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:09.450 "adrfam": "ipv4", 00:33:09.450 "trsvcid": "$NVMF_PORT", 00:33:09.450 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:09.450 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:09.450 "hdgst": ${hdgst:-false}, 00:33:09.450 "ddgst": ${ddgst:-false} 00:33:09.450 }, 00:33:09.450 "method": "bdev_nvme_attach_controller" 00:33:09.450 } 00:33:09.450 EOF 00:33:09.450 )") 00:33:09.450 18:19:58 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:09.450 18:19:58 -- target/dif.sh@82 -- # gen_fio_conf 00:33:09.450 18:19:58 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:33:09.450 18:19:58 -- target/dif.sh@54 -- # local file 00:33:09.450 18:19:58 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:09.450 18:19:58 -- common/autotest_common.sh@1325 -- # local sanitizers 00:33:09.450 18:19:58 -- target/dif.sh@56 -- # cat 00:33:09.450 18:19:58 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:09.450 18:19:58 -- common/autotest_common.sh@1327 -- # shift 00:33:09.450 18:19:58 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:33:09.450 18:19:58 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:33:09.450 18:19:58 -- nvmf/common.sh@543 -- # cat 00:33:09.451 18:19:58 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 
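[Annotation] The ldd probe the trace breaks across this point is the harness's sanitizer detection: it lists the plugin's dynamic dependencies, greps for libasan (and, in a second pass, libclang_rt.asan), and keeps ldd's third column, the resolved library path. Any hit is placed ahead of the plugin in LD_PRELOAD, the usual requirement for ASan to interpose before instrumented objects load; both greps come back empty here, so asan_lib stays blank. Isolated as a sketch:

# Detect an ASan runtime linked into the fio plugin, if any.
# ldd prints "libasan.so.N => /path/to/libasan.so.N (0x...)",
# so awk column 3 is the resolved path (empty when nothing matches).
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
run_fio() {
  local asan_lib
  asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
  LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio "$@"
}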
00:33:09.451 18:19:58 -- target/dif.sh@72 -- # (( file = 1 )) 00:33:09.451 18:19:58 -- common/autotest_common.sh@1331 -- # grep libasan 00:33:09.451 18:19:58 -- target/dif.sh@72 -- # (( file <= files )) 00:33:09.451 18:19:58 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:33:09.451 18:19:58 -- nvmf/common.sh@545 -- # jq . 00:33:09.451 18:19:58 -- nvmf/common.sh@546 -- # IFS=, 00:33:09.451 18:19:58 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:33:09.451 "params": { 00:33:09.451 "name": "Nvme0", 00:33:09.451 "trtype": "tcp", 00:33:09.451 "traddr": "10.0.0.2", 00:33:09.451 "adrfam": "ipv4", 00:33:09.451 "trsvcid": "4420", 00:33:09.451 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:09.451 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:09.451 "hdgst": false, 00:33:09.451 "ddgst": false 00:33:09.451 }, 00:33:09.451 "method": "bdev_nvme_attach_controller" 00:33:09.451 }' 00:33:09.709 18:19:58 -- common/autotest_common.sh@1331 -- # asan_lib= 00:33:09.709 18:19:58 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:33:09.709 18:19:58 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:33:09.709 18:19:58 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:09.709 18:19:58 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:33:09.709 18:19:58 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:33:09.709 18:19:58 -- common/autotest_common.sh@1331 -- # asan_lib= 00:33:09.709 18:19:58 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:33:09.709 18:19:58 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:09.709 18:19:58 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:09.967 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:33:09.967 ... 00:33:09.967 fio-3.35 00:33:09.967 Starting 3 threads 00:33:09.967 EAL: No free 2048 kB hugepages reported on node 1 00:33:10.225 [2024-04-15 18:19:59.139240] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:33:10.225 [2024-04-15 18:19:59.139328] rpc.c: 167:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:33:15.490 00:33:15.490 filename0: (groupid=0, jobs=1): err= 0: pid=3474981: Mon Apr 15 18:20:04 2024 00:33:15.490 read: IOPS=212, BW=26.6MiB/s (27.8MB/s)(133MiB/5004msec) 00:33:15.490 slat (nsec): min=6750, max=32805, avg=16231.02, stdev=3569.32 00:33:15.490 clat (usec): min=5380, max=91829, avg=14098.81, stdev=12087.15 00:33:15.490 lat (usec): min=5394, max=91844, avg=14115.05, stdev=12087.23 00:33:15.490 clat percentiles (usec): 00:33:15.490 | 1.00th=[ 5800], 5.00th=[ 6259], 10.00th=[ 6587], 20.00th=[ 8291], 00:33:15.490 | 30.00th=[ 9110], 40.00th=[ 9765], 50.00th=[10814], 60.00th=[11994], 00:33:15.490 | 70.00th=[13042], 80.00th=[14222], 90.00th=[16581], 95.00th=[51119], 00:33:15.490 | 99.00th=[53740], 99.50th=[54789], 99.90th=[90702], 99.95th=[91751], 00:33:15.490 | 99.99th=[91751] 00:33:15.490 bw ( KiB/s): min=18432, max=33346, per=38.20%, avg=27142.60, stdev=4260.21, samples=10 00:33:15.490 iops : min= 144, max= 260, avg=212.00, stdev=33.20, samples=10 00:33:15.490 lat (msec) : 10=43.46%, 20=48.35%, 50=1.51%, 100=6.68% 00:33:15.490 cpu : usr=94.74%, sys=4.78%, ctx=11, majf=0, minf=66 00:33:15.490 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:15.490 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:15.490 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:15.490 issued rwts: total=1063,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:15.490 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:15.490 filename0: (groupid=0, jobs=1): err= 0: pid=3474982: Mon Apr 15 18:20:04 2024 00:33:15.490 read: IOPS=166, BW=20.8MiB/s (21.8MB/s)(104MiB/5026msec) 00:33:15.490 slat (nsec): min=8175, max=96339, avg=20167.75, stdev=5432.94 00:33:15.490 clat (usec): min=5832, max=96885, avg=18028.62, stdev=16063.45 00:33:15.490 lat (usec): min=5851, max=96911, avg=18048.79, stdev=16063.43 00:33:15.490 clat percentiles (usec): 00:33:15.490 | 1.00th=[ 6325], 5.00th=[ 6915], 10.00th=[ 8291], 20.00th=[ 9634], 00:33:15.490 | 30.00th=[10552], 40.00th=[11731], 50.00th=[12780], 60.00th=[13566], 00:33:15.490 | 70.00th=[14615], 80.00th=[15926], 90.00th=[51643], 95.00th=[54264], 00:33:15.490 | 99.00th=[89654], 99.50th=[93848], 99.90th=[96994], 99.95th=[96994], 00:33:15.490 | 99.99th=[96994] 00:33:15.490 bw ( KiB/s): min=14080, max=27392, per=29.98%, avg=21299.20, stdev=3707.44, samples=10 00:33:15.490 iops : min= 110, max= 214, avg=166.40, stdev=28.96, samples=10 00:33:15.490 lat (msec) : 10=24.55%, 20=61.68%, 50=0.96%, 100=12.81% 00:33:15.490 cpu : usr=94.41%, sys=5.00%, ctx=14, majf=0, minf=109 00:33:15.490 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:15.490 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:15.490 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:15.490 issued rwts: total=835,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:15.490 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:15.490 filename0: (groupid=0, jobs=1): err= 0: pid=3474983: Mon Apr 15 18:20:04 2024 00:33:15.490 read: IOPS=178, BW=22.3MiB/s (23.4MB/s)(113MiB/5043msec) 00:33:15.490 slat (nsec): min=5003, max=40787, avg=18177.60, stdev=3858.66 00:33:15.490 clat (usec): min=5620, max=93903, avg=16721.17, stdev=14389.61 00:33:15.490 lat (usec): min=5634, max=93922, avg=16739.35, stdev=14389.48 00:33:15.490 clat 
percentiles (usec): 00:33:15.490 | 1.00th=[ 5997], 5.00th=[ 6259], 10.00th=[ 6718], 20.00th=[ 9110], 00:33:15.490 | 30.00th=[ 9896], 40.00th=[10945], 50.00th=[12125], 60.00th=[13173], 00:33:15.490 | 70.00th=[14353], 80.00th=[16057], 90.00th=[49546], 95.00th=[51643], 00:33:15.490 | 99.00th=[55837], 99.50th=[56886], 99.90th=[93848], 99.95th=[93848], 00:33:15.490 | 99.99th=[93848] 00:33:15.490 bw ( KiB/s): min=16640, max=32000, per=32.39%, avg=23014.40, stdev=5241.51, samples=10 00:33:15.490 iops : min= 130, max= 250, avg=179.80, stdev=40.95, samples=10 00:33:15.490 lat (msec) : 10=32.30%, 20=54.72%, 50=4.33%, 100=8.66% 00:33:15.490 cpu : usr=94.98%, sys=4.56%, ctx=6, majf=0, minf=25 00:33:15.490 IO depths : 1=0.6%, 2=99.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:15.490 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:15.490 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:15.490 issued rwts: total=901,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:15.490 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:15.490 00:33:15.490 Run status group 0 (all jobs): 00:33:15.490 READ: bw=69.4MiB/s (72.7MB/s), 20.8MiB/s-26.6MiB/s (21.8MB/s-27.8MB/s), io=350MiB (367MB), run=5004-5043msec 00:33:15.749 18:20:04 -- target/dif.sh@107 -- # destroy_subsystems 0 00:33:15.749 18:20:04 -- target/dif.sh@43 -- # local sub 00:33:15.749 18:20:04 -- target/dif.sh@45 -- # for sub in "$@" 00:33:15.749 18:20:04 -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:15.749 18:20:04 -- target/dif.sh@36 -- # local sub_id=0 00:33:15.749 18:20:04 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:15.749 18:20:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:15.749 18:20:04 -- common/autotest_common.sh@10 -- # set +x 00:33:15.749 18:20:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:15.749 18:20:04 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:15.749 18:20:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:15.749 18:20:04 -- common/autotest_common.sh@10 -- # set +x 00:33:15.749 18:20:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:15.749 18:20:04 -- target/dif.sh@109 -- # NULL_DIF=2 00:33:15.749 18:20:04 -- target/dif.sh@109 -- # bs=4k 00:33:15.749 18:20:04 -- target/dif.sh@109 -- # numjobs=8 00:33:15.749 18:20:04 -- target/dif.sh@109 -- # iodepth=16 00:33:15.749 18:20:04 -- target/dif.sh@109 -- # runtime= 00:33:15.749 18:20:04 -- target/dif.sh@109 -- # files=2 00:33:15.749 18:20:04 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:33:15.749 18:20:04 -- target/dif.sh@28 -- # local sub 00:33:15.749 18:20:04 -- target/dif.sh@30 -- # for sub in "$@" 00:33:15.749 18:20:04 -- target/dif.sh@31 -- # create_subsystem 0 00:33:15.749 18:20:04 -- target/dif.sh@18 -- # local sub_id=0 00:33:15.749 18:20:04 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:33:15.749 18:20:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:15.749 18:20:04 -- common/autotest_common.sh@10 -- # set +x 00:33:15.749 bdev_null0 00:33:15.749 18:20:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:15.749 18:20:04 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:15.749 18:20:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:15.749 18:20:04 -- common/autotest_common.sh@10 -- # set +x 00:33:15.749 18:20:04 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:15.749 18:20:04 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:15.749 18:20:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:15.750 18:20:04 -- common/autotest_common.sh@10 -- # set +x 00:33:15.750 18:20:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:15.750 18:20:04 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:15.750 18:20:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:15.750 18:20:04 -- common/autotest_common.sh@10 -- # set +x 00:33:15.750 [2024-04-15 18:20:04.607922] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:15.750 18:20:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:15.750 18:20:04 -- target/dif.sh@30 -- # for sub in "$@" 00:33:15.750 18:20:04 -- target/dif.sh@31 -- # create_subsystem 1 00:33:15.750 18:20:04 -- target/dif.sh@18 -- # local sub_id=1 00:33:15.750 18:20:04 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:33:15.750 18:20:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:15.750 18:20:04 -- common/autotest_common.sh@10 -- # set +x 00:33:15.750 bdev_null1 00:33:15.750 18:20:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:15.750 18:20:04 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:33:15.750 18:20:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:15.750 18:20:04 -- common/autotest_common.sh@10 -- # set +x 00:33:15.750 18:20:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:15.750 18:20:04 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:33:15.750 18:20:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:15.750 18:20:04 -- common/autotest_common.sh@10 -- # set +x 00:33:15.750 18:20:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:15.750 18:20:04 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:15.750 18:20:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:15.750 18:20:04 -- common/autotest_common.sh@10 -- # set +x 00:33:15.750 18:20:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:15.750 18:20:04 -- target/dif.sh@30 -- # for sub in "$@" 00:33:15.750 18:20:04 -- target/dif.sh@31 -- # create_subsystem 2 00:33:15.750 18:20:04 -- target/dif.sh@18 -- # local sub_id=2 00:33:15.750 18:20:04 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:33:15.750 18:20:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:15.750 18:20:04 -- common/autotest_common.sh@10 -- # set +x 00:33:15.750 bdev_null2 00:33:15.750 18:20:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:15.750 18:20:04 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:33:15.750 18:20:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:15.750 18:20:04 -- common/autotest_common.sh@10 -- # set +x 00:33:15.750 18:20:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:15.750 18:20:04 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:33:15.750 18:20:04 -- common/autotest_common.sh@549 -- # xtrace_disable 
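[Annotation] This rand_params pass (NULL_DIF=2, bs=4k, numjobs=8, iodepth=16, files=2, per the dif.sh@109 trace above) provisions three subsystems instead of one; the trace is mid-way through them here. create_subsystems is just the earlier four-RPC recipe in a loop, and under the same assumption as before (rpc.py standing in for rpc_cmd) the only deltas from the single-subsystem sketch are the id range and --dif-type 2:

# Three null-bdev-backed subsystems, DIF type 2 this time.
for sub in 0 1 2; do
  rpc.py bdev_null_create "bdev_null$sub" 64 512 --md-size 16 --dif-type 2
  rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$sub" \
      --serial-number "53313233-$sub" --allow-any-host
  rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$sub" "bdev_null$sub"
  rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$sub" \
      -t tcp -a 10.0.0.2 -s 4420
done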
00:33:15.750 18:20:04 -- common/autotest_common.sh@10 -- # set +x 00:33:15.750 18:20:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:15.750 18:20:04 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:33:15.750 18:20:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:15.750 18:20:04 -- common/autotest_common.sh@10 -- # set +x 00:33:15.750 18:20:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:15.750 18:20:04 -- target/dif.sh@112 -- # fio /dev/fd/62 00:33:15.750 18:20:04 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:33:15.750 18:20:04 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:33:15.750 18:20:04 -- nvmf/common.sh@521 -- # config=() 00:33:15.750 18:20:04 -- nvmf/common.sh@521 -- # local subsystem config 00:33:15.750 18:20:04 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:33:15.750 18:20:04 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:33:15.750 { 00:33:15.750 "params": { 00:33:15.750 "name": "Nvme$subsystem", 00:33:15.750 "trtype": "$TEST_TRANSPORT", 00:33:15.750 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:15.750 "adrfam": "ipv4", 00:33:15.750 "trsvcid": "$NVMF_PORT", 00:33:15.750 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:15.750 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:15.750 "hdgst": ${hdgst:-false}, 00:33:15.750 "ddgst": ${ddgst:-false} 00:33:15.750 }, 00:33:15.750 "method": "bdev_nvme_attach_controller" 00:33:15.750 } 00:33:15.750 EOF 00:33:15.750 )") 00:33:15.750 18:20:04 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:15.750 18:20:04 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:15.750 18:20:04 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:33:15.750 18:20:04 -- target/dif.sh@82 -- # gen_fio_conf 00:33:15.750 18:20:04 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:15.750 18:20:04 -- common/autotest_common.sh@1325 -- # local sanitizers 00:33:15.750 18:20:04 -- target/dif.sh@54 -- # local file 00:33:15.750 18:20:04 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:15.750 18:20:04 -- common/autotest_common.sh@1327 -- # shift 00:33:15.750 18:20:04 -- target/dif.sh@56 -- # cat 00:33:15.750 18:20:04 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:33:15.750 18:20:04 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:33:15.750 18:20:04 -- nvmf/common.sh@543 -- # cat 00:33:15.750 18:20:04 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:15.750 18:20:04 -- common/autotest_common.sh@1331 -- # grep libasan 00:33:15.750 18:20:04 -- target/dif.sh@72 -- # (( file = 1 )) 00:33:15.750 18:20:04 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:33:15.750 18:20:04 -- target/dif.sh@72 -- # (( file <= files )) 00:33:15.750 18:20:04 -- target/dif.sh@73 -- # cat 00:33:15.750 18:20:04 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:33:15.750 18:20:04 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:33:15.750 { 00:33:15.750 "params": { 00:33:15.750 "name": "Nvme$subsystem", 00:33:15.750 "trtype": "$TEST_TRANSPORT", 00:33:15.750 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:15.750 "adrfam": "ipv4", 
00:33:15.750 "trsvcid": "$NVMF_PORT", 00:33:15.750 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:15.750 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:15.750 "hdgst": ${hdgst:-false}, 00:33:15.750 "ddgst": ${ddgst:-false} 00:33:15.750 }, 00:33:15.750 "method": "bdev_nvme_attach_controller" 00:33:15.750 } 00:33:15.750 EOF 00:33:15.750 )") 00:33:15.750 18:20:04 -- nvmf/common.sh@543 -- # cat 00:33:15.750 18:20:04 -- target/dif.sh@72 -- # (( file++ )) 00:33:15.750 18:20:04 -- target/dif.sh@72 -- # (( file <= files )) 00:33:15.750 18:20:04 -- target/dif.sh@73 -- # cat 00:33:15.750 18:20:04 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:33:15.750 18:20:04 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:33:15.750 { 00:33:15.750 "params": { 00:33:15.750 "name": "Nvme$subsystem", 00:33:15.750 "trtype": "$TEST_TRANSPORT", 00:33:15.750 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:15.750 "adrfam": "ipv4", 00:33:15.750 "trsvcid": "$NVMF_PORT", 00:33:15.750 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:15.750 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:15.750 "hdgst": ${hdgst:-false}, 00:33:15.750 "ddgst": ${ddgst:-false} 00:33:15.750 }, 00:33:15.750 "method": "bdev_nvme_attach_controller" 00:33:15.750 } 00:33:15.750 EOF 00:33:15.750 )") 00:33:15.750 18:20:04 -- target/dif.sh@72 -- # (( file++ )) 00:33:15.750 18:20:04 -- target/dif.sh@72 -- # (( file <= files )) 00:33:15.750 18:20:04 -- nvmf/common.sh@543 -- # cat 00:33:15.750 18:20:04 -- nvmf/common.sh@545 -- # jq . 00:33:15.750 18:20:04 -- nvmf/common.sh@546 -- # IFS=, 00:33:15.750 18:20:04 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:33:15.750 "params": { 00:33:15.750 "name": "Nvme0", 00:33:15.750 "trtype": "tcp", 00:33:15.750 "traddr": "10.0.0.2", 00:33:15.750 "adrfam": "ipv4", 00:33:15.750 "trsvcid": "4420", 00:33:15.750 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:15.750 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:15.750 "hdgst": false, 00:33:15.750 "ddgst": false 00:33:15.750 }, 00:33:15.750 "method": "bdev_nvme_attach_controller" 00:33:15.750 },{ 00:33:15.750 "params": { 00:33:15.750 "name": "Nvme1", 00:33:15.750 "trtype": "tcp", 00:33:15.750 "traddr": "10.0.0.2", 00:33:15.750 "adrfam": "ipv4", 00:33:15.750 "trsvcid": "4420", 00:33:15.750 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:15.750 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:15.750 "hdgst": false, 00:33:15.750 "ddgst": false 00:33:15.750 }, 00:33:15.750 "method": "bdev_nvme_attach_controller" 00:33:15.750 },{ 00:33:15.750 "params": { 00:33:15.750 "name": "Nvme2", 00:33:15.750 "trtype": "tcp", 00:33:15.750 "traddr": "10.0.0.2", 00:33:15.750 "adrfam": "ipv4", 00:33:15.750 "trsvcid": "4420", 00:33:15.750 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:33:15.750 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:33:15.750 "hdgst": false, 00:33:15.750 "ddgst": false 00:33:15.750 }, 00:33:15.750 "method": "bdev_nvme_attach_controller" 00:33:15.750 }' 00:33:16.009 18:20:04 -- common/autotest_common.sh@1331 -- # asan_lib= 00:33:16.009 18:20:04 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:33:16.009 18:20:04 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:33:16.009 18:20:04 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:16.009 18:20:04 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:33:16.009 18:20:04 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:33:16.009 18:20:04 -- common/autotest_common.sh@1331 -- # asan_lib= 
00:33:16.009 18:20:04 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:33:16.009 18:20:04 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:16.009 18:20:04 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:16.268 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:33:16.268 ... 00:33:16.268 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:33:16.268 ... 00:33:16.268 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:33:16.268 ... 00:33:16.268 fio-3.35 00:33:16.268 Starting 24 threads 00:33:16.268 EAL: No free 2048 kB hugepages reported on node 1 00:33:17.206 [2024-04-15 18:20:06.000301] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:33:17.206 [2024-04-15 18:20:06.000390] rpc.c: 167:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:33:29.444 00:33:29.444 filename0: (groupid=0, jobs=1): err= 0: pid=3475841: Mon Apr 15 18:20:16 2024 00:33:29.444 read: IOPS=231, BW=925KiB/s (947kB/s)(9344KiB/10100msec) 00:33:29.444 slat (usec): min=9, max=169, avg=39.28, stdev=14.90 00:33:29.444 clat (msec): min=25, max=484, avg=68.80, stdev=97.51 00:33:29.445 lat (msec): min=25, max=484, avg=68.84, stdev=97.50 00:33:29.445 clat percentiles (msec): 00:33:29.445 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:33:29.445 | 30.00th=[ 37], 40.00th=[ 37], 50.00th=[ 37], 60.00th=[ 37], 00:33:29.445 | 70.00th=[ 38], 80.00th=[ 38], 90.00th=[ 226], 95.00th=[ 355], 00:33:29.445 | 99.00th=[ 397], 99.50th=[ 397], 99.90th=[ 472], 99.95th=[ 485], 00:33:29.445 | 99.99th=[ 485] 00:33:29.445 bw ( KiB/s): min= 128, max= 1920, per=4.06%, avg=928.00, stdev=784.52, samples=20 00:33:29.445 iops : min= 32, max= 480, avg=232.00, stdev=196.13, samples=20 00:33:29.445 lat (msec) : 50=89.04%, 100=0.68%, 250=0.68%, 500=9.59% 00:33:29.445 cpu : usr=97.60%, sys=1.55%, ctx=132, majf=0, minf=22 00:33:29.445 IO depths : 1=6.0%, 2=12.2%, 4=24.8%, 8=50.5%, 16=6.5%, 32=0.0%, >=64=0.0% 00:33:29.445 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:29.445 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:29.445 issued rwts: total=2336,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:29.445 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:29.445 filename0: (groupid=0, jobs=1): err= 0: pid=3475842: Mon Apr 15 18:20:16 2024 00:33:29.445 read: IOPS=242, BW=969KiB/s (992kB/s)(9792KiB/10106msec) 00:33:29.445 slat (nsec): min=8166, max=73263, avg=34444.62, stdev=13740.23 00:33:29.445 clat (msec): min=32, max=370, avg=65.54, stdev=75.41 00:33:29.445 lat (msec): min=32, max=370, avg=65.58, stdev=75.40 00:33:29.445 clat percentiles (msec): 00:33:29.445 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:33:29.445 | 30.00th=[ 36], 40.00th=[ 37], 50.00th=[ 37], 60.00th=[ 38], 00:33:29.445 | 70.00th=[ 38], 80.00th=[ 38], 90.00th=[ 222], 95.00th=[ 266], 00:33:29.445 | 99.00th=[ 326], 99.50th=[ 372], 99.90th=[ 372], 99.95th=[ 372], 00:33:29.445 | 99.99th=[ 372] 00:33:29.445 bw ( KiB/s): min= 256, max= 1920, per=4.26%, avg=972.80, stdev=745.67, samples=20 00:33:29.445 iops : min= 64, max= 480, avg=243.20, stdev=186.42, 
samples=20 00:33:29.445 lat (msec) : 50=85.62%, 100=0.65%, 250=5.80%, 500=7.92% 00:33:29.445 cpu : usr=95.78%, sys=2.21%, ctx=299, majf=0, minf=28 00:33:29.445 IO depths : 1=5.8%, 2=12.0%, 4=24.6%, 8=50.9%, 16=6.7%, 32=0.0%, >=64=0.0% 00:33:29.445 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:29.445 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:29.445 issued rwts: total=2448,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:29.445 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:29.445 filename0: (groupid=0, jobs=1): err= 0: pid=3475843: Mon Apr 15 18:20:16 2024 00:33:29.445 read: IOPS=232, BW=930KiB/s (952kB/s)(9384KiB/10093msec) 00:33:29.445 slat (nsec): min=7012, max=93344, avg=25752.07, stdev=18116.25 00:33:29.445 clat (msec): min=14, max=540, avg=68.61, stdev=98.23 00:33:29.445 lat (msec): min=15, max=540, avg=68.63, stdev=98.23 00:33:29.445 clat percentiles (msec): 00:33:29.445 | 1.00th=[ 22], 5.00th=[ 32], 10.00th=[ 34], 20.00th=[ 34], 00:33:29.445 | 30.00th=[ 35], 40.00th=[ 37], 50.00th=[ 37], 60.00th=[ 38], 00:33:29.445 | 70.00th=[ 38], 80.00th=[ 39], 90.00th=[ 197], 95.00th=[ 347], 00:33:29.445 | 99.00th=[ 397], 99.50th=[ 514], 99.90th=[ 535], 99.95th=[ 542], 00:33:29.445 | 99.99th=[ 542] 00:33:29.445 bw ( KiB/s): min= 128, max= 1904, per=4.08%, avg=932.00, stdev=788.99, samples=20 00:33:29.445 iops : min= 32, max= 476, avg=233.00, stdev=197.25, samples=20 00:33:29.445 lat (msec) : 20=0.51%, 50=87.55%, 100=1.71%, 250=1.19%, 500=8.53% 00:33:29.445 lat (msec) : 750=0.51% 00:33:29.445 cpu : usr=97.86%, sys=1.61%, ctx=41, majf=0, minf=37 00:33:29.445 IO depths : 1=1.8%, 2=5.5%, 4=15.3%, 8=64.9%, 16=12.4%, 32=0.0%, >=64=0.0% 00:33:29.445 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:29.445 complete : 0=0.0%, 4=92.0%, 8=4.1%, 16=3.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:29.445 issued rwts: total=2346,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:29.445 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:29.445 filename0: (groupid=0, jobs=1): err= 0: pid=3475844: Mon Apr 15 18:20:16 2024 00:33:29.445 read: IOPS=246, BW=985KiB/s (1009kB/s)(9976KiB/10128msec) 00:33:29.445 slat (usec): min=4, max=108, avg=30.97, stdev=17.39 00:33:29.445 clat (msec): min=13, max=337, avg=64.62, stdev=72.35 00:33:29.445 lat (msec): min=13, max=337, avg=64.65, stdev=72.34 00:33:29.445 clat percentiles (msec): 00:33:29.445 | 1.00th=[ 24], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:33:29.445 | 30.00th=[ 37], 40.00th=[ 37], 50.00th=[ 37], 60.00th=[ 38], 00:33:29.445 | 70.00th=[ 38], 80.00th=[ 39], 90.00th=[ 218], 95.00th=[ 266], 00:33:29.445 | 99.00th=[ 271], 99.50th=[ 275], 99.90th=[ 275], 99.95th=[ 338], 00:33:29.445 | 99.99th=[ 338] 00:33:29.445 bw ( KiB/s): min= 144, max= 1920, per=4.34%, avg=991.20, stdev=754.68, samples=20 00:33:29.445 iops : min= 36, max= 480, avg=247.80, stdev=188.67, samples=20 00:33:29.445 lat (msec) : 20=0.64%, 50=85.32%, 250=6.74%, 500=7.30% 00:33:29.445 cpu : usr=94.44%, sys=2.97%, ctx=191, majf=0, minf=41 00:33:29.445 IO depths : 1=5.4%, 2=11.6%, 4=24.9%, 8=51.1%, 16=7.1%, 32=0.0%, >=64=0.0% 00:33:29.445 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:29.445 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:29.445 issued rwts: total=2494,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:29.445 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:29.445 filename0: (groupid=0, jobs=1): err= 0: pid=3475845: 
Mon Apr 15 18:20:16 2024 00:33:29.445 read: IOPS=231, BW=926KiB/s (948kB/s)(9344KiB/10092msec) 00:33:29.445 slat (usec): min=9, max=132, avg=38.07, stdev=14.77 00:33:29.445 clat (msec): min=27, max=484, avg=68.73, stdev=97.39 00:33:29.445 lat (msec): min=27, max=484, avg=68.77, stdev=97.38 00:33:29.445 clat percentiles (msec): 00:33:29.445 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:33:29.445 | 30.00th=[ 37], 40.00th=[ 37], 50.00th=[ 37], 60.00th=[ 37], 00:33:29.445 | 70.00th=[ 38], 80.00th=[ 38], 90.00th=[ 226], 95.00th=[ 355], 00:33:29.445 | 99.00th=[ 397], 99.50th=[ 397], 99.90th=[ 397], 99.95th=[ 485], 00:33:29.445 | 99.99th=[ 485] 00:33:29.445 bw ( KiB/s): min= 128, max= 1920, per=4.06%, avg=928.15, stdev=784.17, samples=20 00:33:29.445 iops : min= 32, max= 480, avg=232.00, stdev=196.01, samples=20 00:33:29.445 lat (msec) : 50=89.04%, 100=0.68%, 250=0.77%, 500=9.50% 00:33:29.445 cpu : usr=98.06%, sys=1.41%, ctx=35, majf=0, minf=29 00:33:29.445 IO depths : 1=5.7%, 2=11.9%, 4=25.0%, 8=50.6%, 16=6.8%, 32=0.0%, >=64=0.0% 00:33:29.445 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:29.445 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:29.445 issued rwts: total=2336,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:29.445 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:29.445 filename0: (groupid=0, jobs=1): err= 0: pid=3475846: Mon Apr 15 18:20:16 2024 00:33:29.445 read: IOPS=244, BW=979KiB/s (1002kB/s)(9904KiB/10120msec) 00:33:29.445 slat (usec): min=8, max=155, avg=41.84, stdev=27.39 00:33:29.445 clat (msec): min=23, max=365, avg=64.76, stdev=72.82 00:33:29.445 lat (msec): min=23, max=365, avg=64.80, stdev=72.81 00:33:29.445 clat percentiles (msec): 00:33:29.445 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:33:29.445 | 30.00th=[ 36], 40.00th=[ 37], 50.00th=[ 37], 60.00th=[ 37], 00:33:29.445 | 70.00th=[ 38], 80.00th=[ 38], 90.00th=[ 230], 95.00th=[ 249], 00:33:29.445 | 99.00th=[ 292], 99.50th=[ 321], 99.90th=[ 368], 99.95th=[ 368], 00:33:29.445 | 99.99th=[ 368] 00:33:29.445 bw ( KiB/s): min= 192, max= 1920, per=4.32%, avg=986.40, stdev=754.20, samples=20 00:33:29.445 iops : min= 48, max= 480, avg=246.60, stdev=188.55, samples=20 00:33:29.445 lat (msec) : 50=85.95%, 250=9.61%, 500=4.44% 00:33:29.445 cpu : usr=96.57%, sys=2.08%, ctx=177, majf=0, minf=30 00:33:29.445 IO depths : 1=5.3%, 2=10.9%, 4=22.9%, 8=53.6%, 16=7.2%, 32=0.0%, >=64=0.0% 00:33:29.445 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:29.445 complete : 0=0.0%, 4=93.5%, 8=0.7%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:29.445 issued rwts: total=2476,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:29.445 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:29.445 filename0: (groupid=0, jobs=1): err= 0: pid=3475847: Mon Apr 15 18:20:16 2024 00:33:29.445 read: IOPS=243, BW=975KiB/s (999kB/s)(9856KiB/10105msec) 00:33:29.445 slat (nsec): min=7920, max=93569, avg=34938.71, stdev=15548.01 00:33:29.445 clat (msec): min=30, max=276, avg=65.31, stdev=72.53 00:33:29.445 lat (msec): min=30, max=276, avg=65.34, stdev=72.52 00:33:29.445 clat percentiles (msec): 00:33:29.445 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:33:29.445 | 30.00th=[ 37], 40.00th=[ 37], 50.00th=[ 37], 60.00th=[ 38], 00:33:29.445 | 70.00th=[ 38], 80.00th=[ 39], 90.00th=[ 213], 95.00th=[ 266], 00:33:29.445 | 99.00th=[ 271], 99.50th=[ 275], 99.90th=[ 275], 99.95th=[ 275], 00:33:29.445 | 99.99th=[ 275] 
00:33:29.445 bw ( KiB/s): min= 128, max= 1920, per=4.29%, avg=979.20, stdev=739.72, samples=20 00:33:29.445 iops : min= 32, max= 480, avg=244.80, stdev=184.93, samples=20 00:33:29.445 lat (msec) : 50=85.63%, 100=0.08%, 250=6.49%, 500=7.79% 00:33:29.445 cpu : usr=97.17%, sys=1.95%, ctx=47, majf=0, minf=28 00:33:29.445 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:29.445 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:29.445 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:29.445 issued rwts: total=2464,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:29.445 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:29.445 filename0: (groupid=0, jobs=1): err= 0: pid=3475848: Mon Apr 15 18:20:16 2024 00:33:29.445 read: IOPS=230, BW=922KiB/s (944kB/s)(9304KiB/10088msec) 00:33:29.445 slat (usec): min=8, max=139, avg=32.27, stdev=16.37 00:33:29.445 clat (msec): min=24, max=467, avg=68.94, stdev=97.62 00:33:29.445 lat (msec): min=24, max=467, avg=68.98, stdev=97.62 00:33:29.445 clat percentiles (msec): 00:33:29.445 | 1.00th=[ 27], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:33:29.445 | 30.00th=[ 37], 40.00th=[ 37], 50.00th=[ 37], 60.00th=[ 38], 00:33:29.445 | 70.00th=[ 38], 80.00th=[ 38], 90.00th=[ 148], 95.00th=[ 355], 00:33:29.445 | 99.00th=[ 393], 99.50th=[ 397], 99.90th=[ 397], 99.95th=[ 468], 00:33:29.445 | 99.99th=[ 468] 00:33:29.446 bw ( KiB/s): min= 128, max= 1920, per=4.05%, avg=924.00, stdev=786.73, samples=20 00:33:29.446 iops : min= 32, max= 480, avg=231.00, stdev=196.68, samples=20 00:33:29.446 lat (msec) : 50=88.39%, 100=1.29%, 250=0.69%, 500=9.63% 00:33:29.446 cpu : usr=97.04%, sys=1.79%, ctx=84, majf=0, minf=33 00:33:29.446 IO depths : 1=2.1%, 2=8.0%, 4=23.8%, 8=55.6%, 16=10.5%, 32=0.0%, >=64=0.0% 00:33:29.446 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:29.446 complete : 0=0.0%, 4=94.0%, 8=0.4%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:29.446 issued rwts: total=2326,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:29.446 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:29.446 filename1: (groupid=0, jobs=1): err= 0: pid=3475849: Mon Apr 15 18:20:16 2024 00:33:29.446 read: IOPS=237, BW=949KiB/s (972kB/s)(9576KiB/10088msec) 00:33:29.446 slat (nsec): min=7980, max=81319, avg=28430.98, stdev=11438.96 00:33:29.446 clat (msec): min=14, max=524, avg=67.16, stdev=84.99 00:33:29.446 lat (msec): min=14, max=524, avg=67.19, stdev=84.98 00:33:29.446 clat percentiles (msec): 00:33:29.446 | 1.00th=[ 26], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:33:29.446 | 30.00th=[ 37], 40.00th=[ 37], 50.00th=[ 37], 60.00th=[ 38], 00:33:29.446 | 70.00th=[ 38], 80.00th=[ 38], 90.00th=[ 232], 95.00th=[ 279], 00:33:29.446 | 99.00th=[ 384], 99.50th=[ 393], 99.90th=[ 397], 99.95th=[ 527], 00:33:29.446 | 99.99th=[ 527] 00:33:29.446 bw ( KiB/s): min= 128, max= 1968, per=4.17%, avg=951.20, stdev=767.59, samples=20 00:33:29.446 iops : min= 32, max= 492, avg=237.80, stdev=191.90, samples=20 00:33:29.446 lat (msec) : 20=0.42%, 50=85.96%, 100=1.17%, 250=5.18%, 500=7.18% 00:33:29.446 lat (msec) : 750=0.08% 00:33:29.446 cpu : usr=98.19%, sys=1.35%, ctx=24, majf=0, minf=25 00:33:29.446 IO depths : 1=5.3%, 2=10.8%, 4=22.7%, 8=53.9%, 16=7.2%, 32=0.0%, >=64=0.0% 00:33:29.446 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:29.446 complete : 0=0.0%, 4=93.5%, 8=0.7%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:29.446 issued rwts: total=2394,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:33:29.446 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:29.446 filename1: (groupid=0, jobs=1): err= 0: pid=3475850: Mon Apr 15 18:20:16 2024 00:33:29.446 read: IOPS=233, BW=932KiB/s (955kB/s)(9408KiB/10091msec) 00:33:29.446 slat (nsec): min=8131, max=47253, avg=21179.59, stdev=8154.57 00:33:29.446 clat (msec): min=14, max=468, avg=68.46, stdev=96.89 00:33:29.446 lat (msec): min=14, max=468, avg=68.48, stdev=96.89 00:33:29.446 clat percentiles (msec): 00:33:29.446 | 1.00th=[ 34], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:33:29.446 | 30.00th=[ 36], 40.00th=[ 37], 50.00th=[ 37], 60.00th=[ 38], 00:33:29.446 | 70.00th=[ 38], 80.00th=[ 39], 90.00th=[ 249], 95.00th=[ 355], 00:33:29.446 | 99.00th=[ 397], 99.50th=[ 397], 99.90th=[ 456], 99.95th=[ 468], 00:33:29.446 | 99.99th=[ 468] 00:33:29.446 bw ( KiB/s): min= 128, max= 1920, per=4.09%, avg=934.40, stdev=791.22, samples=20 00:33:29.446 iops : min= 32, max= 480, avg=233.60, stdev=197.80, samples=20 00:33:29.446 lat (msec) : 20=0.68%, 50=88.44%, 100=0.68%, 250=0.77%, 500=9.44% 00:33:29.446 cpu : usr=98.27%, sys=1.31%, ctx=16, majf=0, minf=25 00:33:29.446 IO depths : 1=5.9%, 2=12.1%, 4=24.8%, 8=50.6%, 16=6.6%, 32=0.0%, >=64=0.0% 00:33:29.446 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:29.446 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:29.446 issued rwts: total=2352,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:29.446 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:29.446 filename1: (groupid=0, jobs=1): err= 0: pid=3475851: Mon Apr 15 18:20:16 2024 00:33:29.446 read: IOPS=246, BW=985KiB/s (1009kB/s)(9976KiB/10123msec) 00:33:29.446 slat (usec): min=8, max=148, avg=43.58, stdev=28.64 00:33:29.446 clat (msec): min=8, max=322, avg=64.48, stdev=72.42 00:33:29.446 lat (msec): min=8, max=322, avg=64.53, stdev=72.41 00:33:29.446 clat percentiles (msec): 00:33:29.446 | 1.00th=[ 24], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:33:29.446 | 30.00th=[ 36], 40.00th=[ 37], 50.00th=[ 37], 60.00th=[ 37], 00:33:29.446 | 70.00th=[ 38], 80.00th=[ 38], 90.00th=[ 213], 95.00th=[ 266], 00:33:29.446 | 99.00th=[ 271], 99.50th=[ 275], 99.90th=[ 275], 99.95th=[ 321], 00:33:29.446 | 99.99th=[ 321] 00:33:29.446 bw ( KiB/s): min= 144, max= 1920, per=4.34%, avg=991.20, stdev=754.68, samples=20 00:33:29.446 iops : min= 36, max= 480, avg=247.80, stdev=188.67, samples=20 00:33:29.446 lat (msec) : 10=0.64%, 50=85.32%, 250=6.58%, 500=7.46% 00:33:29.446 cpu : usr=97.46%, sys=1.78%, ctx=106, majf=0, minf=31 00:33:29.446 IO depths : 1=5.6%, 2=11.8%, 4=24.9%, 8=50.8%, 16=6.9%, 32=0.0%, >=64=0.0% 00:33:29.446 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:29.446 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:29.446 issued rwts: total=2494,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:29.446 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:29.446 filename1: (groupid=0, jobs=1): err= 0: pid=3475852: Mon Apr 15 18:20:16 2024 00:33:29.446 read: IOPS=242, BW=971KiB/s (995kB/s)(9816KiB/10105msec) 00:33:29.446 slat (usec): min=7, max=107, avg=35.54, stdev=17.81 00:33:29.446 clat (msec): min=25, max=338, avg=65.51, stdev=73.97 00:33:29.446 lat (msec): min=25, max=338, avg=65.55, stdev=73.96 00:33:29.446 clat percentiles (msec): 00:33:29.446 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:33:29.446 | 30.00th=[ 37], 40.00th=[ 37], 50.00th=[ 37], 60.00th=[ 38], 00:33:29.446 
| 70.00th=[ 38], 80.00th=[ 39], 90.00th=[ 224], 95.00th=[ 266], 00:33:29.446 | 99.00th=[ 275], 99.50th=[ 305], 99.90th=[ 321], 99.95th=[ 338], 00:33:29.446 | 99.99th=[ 338] 00:33:29.446 bw ( KiB/s): min= 144, max= 1920, per=4.27%, avg=975.20, stdev=743.55, samples=20 00:33:29.446 iops : min= 36, max= 480, avg=243.80, stdev=185.89, samples=20 00:33:29.446 lat (msec) : 50=86.06%, 250=6.28%, 500=7.66% 00:33:29.446 cpu : usr=96.80%, sys=1.98%, ctx=93, majf=0, minf=21 00:33:29.446 IO depths : 1=5.3%, 2=11.4%, 4=24.4%, 8=51.7%, 16=7.2%, 32=0.0%, >=64=0.0% 00:33:29.446 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:29.446 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:29.446 issued rwts: total=2454,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:29.446 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:29.446 filename1: (groupid=0, jobs=1): err= 0: pid=3475853: Mon Apr 15 18:20:16 2024 00:33:29.446 read: IOPS=232, BW=932KiB/s (954kB/s)(9408KiB/10099msec) 00:33:29.446 slat (nsec): min=6136, max=63770, avg=27654.85, stdev=9558.55 00:33:29.446 clat (msec): min=32, max=471, avg=68.42, stdev=94.87 00:33:29.446 lat (msec): min=32, max=471, avg=68.45, stdev=94.87 00:33:29.446 clat percentiles (msec): 00:33:29.446 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:33:29.446 | 30.00th=[ 36], 40.00th=[ 37], 50.00th=[ 37], 60.00th=[ 38], 00:33:29.446 | 70.00th=[ 38], 80.00th=[ 38], 90.00th=[ 211], 95.00th=[ 351], 00:33:29.446 | 99.00th=[ 397], 99.50th=[ 397], 99.90th=[ 464], 99.95th=[ 472], 00:33:29.446 | 99.99th=[ 472] 00:33:29.446 bw ( KiB/s): min= 128, max= 1920, per=4.09%, avg=934.55, stdev=786.73, samples=20 00:33:29.446 iops : min= 32, max= 480, avg=233.60, stdev=196.65, samples=20 00:33:29.446 lat (msec) : 50=88.44%, 100=0.68%, 250=1.36%, 500=9.52% 00:33:29.446 cpu : usr=98.39%, sys=1.21%, ctx=13, majf=0, minf=30 00:33:29.446 IO depths : 1=5.7%, 2=11.8%, 4=24.7%, 8=51.0%, 16=6.8%, 32=0.0%, >=64=0.0% 00:33:29.446 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:29.446 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:29.446 issued rwts: total=2352,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:29.446 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:29.446 filename1: (groupid=0, jobs=1): err= 0: pid=3475854: Mon Apr 15 18:20:16 2024 00:33:29.446 read: IOPS=241, BW=968KiB/s (991kB/s)(9784KiB/10112msec) 00:33:29.446 slat (usec): min=7, max=133, avg=36.45, stdev=16.73 00:33:29.446 clat (msec): min=32, max=528, avg=65.79, stdev=76.91 00:33:29.446 lat (msec): min=32, max=529, avg=65.83, stdev=76.90 00:33:29.446 clat percentiles (msec): 00:33:29.446 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:33:29.446 | 30.00th=[ 37], 40.00th=[ 37], 50.00th=[ 37], 60.00th=[ 37], 00:33:29.446 | 70.00th=[ 38], 80.00th=[ 38], 90.00th=[ 222], 95.00th=[ 266], 00:33:29.446 | 99.00th=[ 347], 99.50th=[ 380], 99.90th=[ 380], 99.95th=[ 531], 00:33:29.446 | 99.99th=[ 531] 00:33:29.446 bw ( KiB/s): min= 144, max= 1920, per=4.26%, avg=972.00, stdev=747.10, samples=20 00:33:29.446 iops : min= 36, max= 480, avg=243.00, stdev=186.77, samples=20 00:33:29.446 lat (msec) : 50=86.35%, 250=6.13%, 500=7.44%, 750=0.08% 00:33:29.446 cpu : usr=97.38%, sys=1.78%, ctx=36, majf=0, minf=35 00:33:29.446 IO depths : 1=5.5%, 2=11.8%, 4=25.0%, 8=50.7%, 16=7.0%, 32=0.0%, >=64=0.0% 00:33:29.446 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:29.446 complete : 
0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:29.446 issued rwts: total=2446,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:29.446 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:29.446 filename1: (groupid=0, jobs=1): err= 0: pid=3475855: Mon Apr 15 18:20:16 2024 00:33:29.446 read: IOPS=241, BW=967KiB/s (991kB/s)(9776KiB/10106msec) 00:33:29.446 slat (usec): min=7, max=120, avg=36.75, stdev=17.22 00:33:29.446 clat (msec): min=31, max=387, avg=65.60, stdev=76.29 00:33:29.446 lat (msec): min=31, max=387, avg=65.63, stdev=76.28 00:33:29.446 clat percentiles (msec): 00:33:29.446 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:33:29.446 | 30.00th=[ 37], 40.00th=[ 37], 50.00th=[ 37], 60.00th=[ 37], 00:33:29.446 | 70.00th=[ 38], 80.00th=[ 38], 90.00th=[ 230], 95.00th=[ 266], 00:33:29.446 | 99.00th=[ 351], 99.50th=[ 376], 99.90th=[ 388], 99.95th=[ 388], 00:33:29.446 | 99.99th=[ 388] 00:33:29.446 bw ( KiB/s): min= 160, max= 1920, per=4.25%, avg=971.20, stdev=746.65, samples=20 00:33:29.446 iops : min= 40, max= 480, avg=242.80, stdev=186.66, samples=20 00:33:29.446 lat (msec) : 50=86.42%, 250=7.53%, 500=6.06% 00:33:29.446 cpu : usr=98.18%, sys=1.41%, ctx=15, majf=0, minf=23 00:33:29.446 IO depths : 1=5.4%, 2=10.9%, 4=23.0%, 8=53.5%, 16=7.2%, 32=0.0%, >=64=0.0% 00:33:29.446 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:29.446 complete : 0=0.0%, 4=93.6%, 8=0.6%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:29.446 issued rwts: total=2444,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:29.446 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:29.446 filename1: (groupid=0, jobs=1): err= 0: pid=3475856: Mon Apr 15 18:20:16 2024 00:33:29.446 read: IOPS=244, BW=979KiB/s (1003kB/s)(9912KiB/10121msec) 00:33:29.446 slat (nsec): min=4369, max=52667, avg=19055.79, stdev=9666.39 00:33:29.446 clat (msec): min=18, max=334, avg=65.09, stdev=73.12 00:33:29.446 lat (msec): min=18, max=334, avg=65.11, stdev=73.12 00:33:29.446 clat percentiles (msec): 00:33:29.446 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:33:29.446 | 30.00th=[ 37], 40.00th=[ 37], 50.00th=[ 38], 60.00th=[ 38], 00:33:29.446 | 70.00th=[ 38], 80.00th=[ 39], 90.00th=[ 215], 95.00th=[ 266], 00:33:29.446 | 99.00th=[ 288], 99.50th=[ 313], 99.90th=[ 334], 99.95th=[ 334], 00:33:29.446 | 99.99th=[ 334] 00:33:29.446 bw ( KiB/s): min= 240, max= 1920, per=4.31%, avg=985.60, stdev=754.56, samples=20 00:33:29.446 iops : min= 60, max= 480, avg=246.40, stdev=188.64, samples=20 00:33:29.446 lat (msec) : 20=0.28%, 50=85.59%, 250=7.18%, 500=6.94% 00:33:29.446 cpu : usr=96.74%, sys=2.02%, ctx=68, majf=0, minf=33 00:33:29.446 IO depths : 1=5.6%, 2=11.8%, 4=24.7%, 8=51.0%, 16=6.9%, 32=0.0%, >=64=0.0% 00:33:29.446 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:29.446 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:29.446 issued rwts: total=2478,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:29.447 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:29.447 filename2: (groupid=0, jobs=1): err= 0: pid=3475857: Mon Apr 15 18:20:16 2024 00:33:29.447 read: IOPS=247, BW=989KiB/s (1013kB/s)(9984KiB/10093msec) 00:33:29.447 slat (usec): min=5, max=114, avg=21.23, stdev=13.20 00:33:29.447 clat (msec): min=8, max=277, avg=64.51, stdev=72.17 00:33:29.447 lat (msec): min=8, max=277, avg=64.54, stdev=72.17 00:33:29.447 clat percentiles (msec): 00:33:29.447 | 1.00th=[ 24], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 
00:33:29.447 | 30.00th=[ 36], 40.00th=[ 37], 50.00th=[ 37], 60.00th=[ 38], 00:33:29.447 | 70.00th=[ 38], 80.00th=[ 39], 90.00th=[ 211], 95.00th=[ 266], 00:33:29.447 | 99.00th=[ 275], 99.50th=[ 279], 99.90th=[ 279], 99.95th=[ 279], 00:33:29.447 | 99.99th=[ 279] 00:33:29.447 bw ( KiB/s): min= 256, max= 1920, per=4.34%, avg=992.00, stdev=762.22, samples=20 00:33:29.447 iops : min= 64, max= 480, avg=248.00, stdev=190.56, samples=20 00:33:29.447 lat (msec) : 10=0.64%, 50=85.26%, 250=6.41%, 500=7.69% 00:33:29.447 cpu : usr=97.59%, sys=1.62%, ctx=70, majf=0, minf=46 00:33:29.447 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:29.447 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:29.447 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:29.447 issued rwts: total=2496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:29.447 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:29.447 filename2: (groupid=0, jobs=1): err= 0: pid=3475858: Mon Apr 15 18:20:16 2024 00:33:29.447 read: IOPS=244, BW=979KiB/s (1003kB/s)(9912KiB/10120msec) 00:33:29.447 slat (usec): min=8, max=137, avg=31.52, stdev=17.50 00:33:29.447 clat (msec): min=23, max=320, avg=65.01, stdev=72.47 00:33:29.447 lat (msec): min=23, max=320, avg=65.04, stdev=72.46 00:33:29.447 clat percentiles (msec): 00:33:29.447 | 1.00th=[ 25], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:33:29.447 | 30.00th=[ 37], 40.00th=[ 37], 50.00th=[ 37], 60.00th=[ 38], 00:33:29.447 | 70.00th=[ 38], 80.00th=[ 39], 90.00th=[ 215], 95.00th=[ 266], 00:33:29.447 | 99.00th=[ 271], 99.50th=[ 275], 99.90th=[ 275], 99.95th=[ 321], 00:33:29.447 | 99.99th=[ 321] 00:33:29.447 bw ( KiB/s): min= 144, max= 1920, per=4.31%, avg=984.80, stdev=747.92, samples=20 00:33:29.447 iops : min= 36, max= 480, avg=246.20, stdev=186.98, samples=20 00:33:29.447 lat (msec) : 50=85.88%, 250=6.70%, 500=7.43% 00:33:29.447 cpu : usr=98.11%, sys=1.34%, ctx=50, majf=0, minf=29 00:33:29.447 IO depths : 1=3.1%, 2=9.4%, 4=25.0%, 8=53.1%, 16=9.4%, 32=0.0%, >=64=0.0% 00:33:29.447 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:29.447 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:29.447 issued rwts: total=2478,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:29.447 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:29.447 filename2: (groupid=0, jobs=1): err= 0: pid=3475859: Mon Apr 15 18:20:16 2024 00:33:29.447 read: IOPS=231, BW=926KiB/s (948kB/s)(9344KiB/10093msec) 00:33:29.447 slat (usec): min=8, max=107, avg=35.27, stdev=14.05 00:33:29.447 clat (msec): min=32, max=484, avg=68.75, stdev=97.52 00:33:29.447 lat (msec): min=32, max=484, avg=68.79, stdev=97.52 00:33:29.447 clat percentiles (msec): 00:33:29.447 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:33:29.447 | 30.00th=[ 37], 40.00th=[ 37], 50.00th=[ 37], 60.00th=[ 37], 00:33:29.447 | 70.00th=[ 38], 80.00th=[ 38], 90.00th=[ 226], 95.00th=[ 355], 00:33:29.447 | 99.00th=[ 397], 99.50th=[ 397], 99.90th=[ 472], 99.95th=[ 485], 00:33:29.447 | 99.99th=[ 485] 00:33:29.447 bw ( KiB/s): min= 128, max= 1920, per=4.06%, avg=928.00, stdev=784.52, samples=20 00:33:29.447 iops : min= 32, max= 480, avg=232.00, stdev=196.13, samples=20 00:33:29.447 lat (msec) : 50=89.04%, 100=0.68%, 250=0.68%, 500=9.59% 00:33:29.447 cpu : usr=94.81%, sys=2.72%, ctx=314, majf=0, minf=36 00:33:29.447 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:33:29.447 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:29.447 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:29.447 issued rwts: total=2336,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:29.447 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:29.447 filename2: (groupid=0, jobs=1): err= 0: pid=3475860: Mon Apr 15 18:20:16 2024 00:33:29.447 read: IOPS=231, BW=926KiB/s (948kB/s)(9344KiB/10096msec) 00:33:29.447 slat (usec): min=9, max=131, avg=55.73, stdev=22.98 00:33:29.447 clat (msec): min=31, max=483, avg=68.61, stdev=97.65 00:33:29.447 lat (msec): min=31, max=483, avg=68.66, stdev=97.66 00:33:29.447 clat percentiles (msec): 00:33:29.447 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 34], 00:33:29.447 | 30.00th=[ 36], 40.00th=[ 37], 50.00th=[ 37], 60.00th=[ 37], 00:33:29.447 | 70.00th=[ 38], 80.00th=[ 38], 90.00th=[ 226], 95.00th=[ 355], 00:33:29.447 | 99.00th=[ 393], 99.50th=[ 439], 99.90th=[ 472], 99.95th=[ 485], 00:33:29.447 | 99.99th=[ 485] 00:33:29.447 bw ( KiB/s): min= 128, max= 1920, per=4.06%, avg=928.00, stdev=784.16, samples=20 00:33:29.447 iops : min= 32, max= 480, avg=232.00, stdev=196.04, samples=20 00:33:29.447 lat (msec) : 50=89.04%, 100=0.68%, 250=0.68%, 500=9.59% 00:33:29.447 cpu : usr=96.92%, sys=1.91%, ctx=235, majf=0, minf=27 00:33:29.447 IO depths : 1=5.8%, 2=12.0%, 4=24.8%, 8=50.7%, 16=6.7%, 32=0.0%, >=64=0.0% 00:33:29.447 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:29.447 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:29.447 issued rwts: total=2336,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:29.447 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:29.447 filename2: (groupid=0, jobs=1): err= 0: pid=3475861: Mon Apr 15 18:20:16 2024 00:33:29.447 read: IOPS=243, BW=975KiB/s (999kB/s)(9856KiB/10105msec) 00:33:29.447 slat (usec): min=7, max=111, avg=35.78, stdev=15.45 00:33:29.447 clat (msec): min=32, max=276, avg=65.30, stdev=72.55 00:33:29.447 lat (msec): min=32, max=276, avg=65.33, stdev=72.54 00:33:29.447 clat percentiles (msec): 00:33:29.447 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:33:29.447 | 30.00th=[ 37], 40.00th=[ 37], 50.00th=[ 37], 60.00th=[ 38], 00:33:29.447 | 70.00th=[ 38], 80.00th=[ 38], 90.00th=[ 213], 95.00th=[ 266], 00:33:29.447 | 99.00th=[ 271], 99.50th=[ 275], 99.90th=[ 275], 99.95th=[ 275], 00:33:29.447 | 99.99th=[ 275] 00:33:29.447 bw ( KiB/s): min= 128, max= 1920, per=4.29%, avg=979.20, stdev=739.72, samples=20 00:33:29.447 iops : min= 32, max= 480, avg=244.80, stdev=184.93, samples=20 00:33:29.447 lat (msec) : 50=85.71%, 250=6.49%, 500=7.79% 00:33:29.447 cpu : usr=96.85%, sys=1.99%, ctx=43, majf=0, minf=26 00:33:29.447 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:29.447 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:29.447 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:29.447 issued rwts: total=2464,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:29.447 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:29.447 filename2: (groupid=0, jobs=1): err= 0: pid=3475862: Mon Apr 15 18:20:16 2024 00:33:29.447 read: IOPS=232, BW=929KiB/s (951kB/s)(9368KiB/10088msec) 00:33:29.447 slat (usec): min=7, max=284, avg=24.30, stdev=11.26 00:33:29.447 clat (msec): min=14, max=394, avg=68.70, stdev=96.71 00:33:29.447 lat (msec): min=14, max=394, avg=68.72, stdev=96.71 00:33:29.447 clat percentiles (msec): 
00:33:29.447 | 1.00th=[ 34], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:33:29.447 | 30.00th=[ 36], 40.00th=[ 37], 50.00th=[ 37], 60.00th=[ 38], 00:33:29.447 | 70.00th=[ 38], 80.00th=[ 39], 90.00th=[ 249], 95.00th=[ 355], 00:33:29.447 | 99.00th=[ 393], 99.50th=[ 397], 99.90th=[ 397], 99.95th=[ 397], 00:33:29.447 | 99.99th=[ 397] 00:33:29.447 bw ( KiB/s): min= 128, max= 1920, per=4.07%, avg=930.40, stdev=786.96, samples=20 00:33:29.447 iops : min= 32, max= 480, avg=232.60, stdev=196.74, samples=20 00:33:29.447 lat (msec) : 20=0.34%, 50=88.64%, 100=0.77%, 250=0.68%, 500=9.56% 00:33:29.447 cpu : usr=95.78%, sys=2.39%, ctx=187, majf=0, minf=31 00:33:29.447 IO depths : 1=5.5%, 2=11.3%, 4=23.5%, 8=52.4%, 16=7.3%, 32=0.0%, >=64=0.0% 00:33:29.447 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:29.447 complete : 0=0.0%, 4=93.8%, 8=0.7%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:29.447 issued rwts: total=2342,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:29.447 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:29.447 filename2: (groupid=0, jobs=1): err= 0: pid=3475863: Mon Apr 15 18:20:16 2024 00:33:29.447 read: IOPS=231, BW=925KiB/s (948kB/s)(9344KiB/10098msec) 00:33:29.447 slat (usec): min=7, max=107, avg=39.69, stdev=14.19 00:33:29.447 clat (msec): min=32, max=457, avg=68.76, stdev=97.38 00:33:29.447 lat (msec): min=32, max=457, avg=68.80, stdev=97.38 00:33:29.447 clat percentiles (msec): 00:33:29.447 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:33:29.447 | 30.00th=[ 37], 40.00th=[ 37], 50.00th=[ 37], 60.00th=[ 37], 00:33:29.447 | 70.00th=[ 38], 80.00th=[ 38], 90.00th=[ 226], 95.00th=[ 355], 00:33:29.447 | 99.00th=[ 393], 99.50th=[ 397], 99.90th=[ 447], 99.95th=[ 460], 00:33:29.447 | 99.99th=[ 460] 00:33:29.447 bw ( KiB/s): min= 128, max= 1920, per=4.06%, avg=928.00, stdev=784.28, samples=20 00:33:29.447 iops : min= 32, max= 480, avg=232.00, stdev=196.07, samples=20 00:33:29.447 lat (msec) : 50=89.04%, 100=0.68%, 250=0.68%, 500=9.59% 00:33:29.447 cpu : usr=96.38%, sys=2.21%, ctx=59, majf=0, minf=30 00:33:29.447 IO depths : 1=6.0%, 2=12.2%, 4=24.8%, 8=50.4%, 16=6.5%, 32=0.0%, >=64=0.0% 00:33:29.447 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:29.447 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:29.447 issued rwts: total=2336,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:29.447 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:29.447 filename2: (groupid=0, jobs=1): err= 0: pid=3475864: Mon Apr 15 18:20:16 2024 00:33:29.447 read: IOPS=236, BW=947KiB/s (969kB/s)(9536KiB/10075msec) 00:33:29.447 slat (usec): min=7, max=117, avg=48.78, stdev=24.33 00:33:29.447 clat (msec): min=28, max=395, avg=67.20, stdev=90.47 00:33:29.447 lat (msec): min=28, max=395, avg=67.25, stdev=90.47 00:33:29.447 clat percentiles (msec): 00:33:29.447 | 1.00th=[ 31], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 34], 00:33:29.447 | 30.00th=[ 37], 40.00th=[ 37], 50.00th=[ 37], 60.00th=[ 38], 00:33:29.447 | 70.00th=[ 38], 80.00th=[ 38], 90.00th=[ 201], 95.00th=[ 347], 00:33:29.447 | 99.00th=[ 393], 99.50th=[ 397], 99.90th=[ 397], 99.95th=[ 397], 00:33:29.447 | 99.99th=[ 397] 00:33:29.447 bw ( KiB/s): min= 128, max= 1920, per=4.15%, avg=947.20, stdev=780.44, samples=20 00:33:29.447 iops : min= 32, max= 480, avg=236.80, stdev=195.11, samples=20 00:33:29.447 lat (msec) : 50=88.59%, 250=2.01%, 500=9.40% 00:33:29.447 cpu : usr=97.86%, sys=1.61%, ctx=27, majf=0, minf=29 00:33:29.447 IO depths : 1=5.3%, 
2=11.5%, 4=25.0%, 8=51.0%, 16=7.2%, 32=0.0%, >=64=0.0% 00:33:29.447 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:29.447 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:29.447 issued rwts: total=2384,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:29.447 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:29.447 00:33:29.447 Run status group 0 (all jobs): 00:33:29.447 READ: bw=22.3MiB/s (23.4MB/s), 922KiB/s-989KiB/s (944kB/s-1013kB/s), io=226MiB (237MB), run=10075-10128msec 00:33:29.447 18:20:16 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:33:29.447 18:20:16 -- target/dif.sh@43 -- # local sub 00:33:29.447 18:20:16 -- target/dif.sh@45 -- # for sub in "$@" 00:33:29.447 18:20:16 -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:29.447 18:20:16 -- target/dif.sh@36 -- # local sub_id=0 00:33:29.448 18:20:16 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:29.448 18:20:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:29.448 18:20:16 -- common/autotest_common.sh@10 -- # set +x 00:33:29.448 18:20:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:29.448 18:20:16 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:29.448 18:20:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:29.448 18:20:16 -- common/autotest_common.sh@10 -- # set +x 00:33:29.448 18:20:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:29.448 18:20:16 -- target/dif.sh@45 -- # for sub in "$@" 00:33:29.448 18:20:16 -- target/dif.sh@46 -- # destroy_subsystem 1 00:33:29.448 18:20:16 -- target/dif.sh@36 -- # local sub_id=1 00:33:29.448 18:20:16 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:29.448 18:20:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:29.448 18:20:16 -- common/autotest_common.sh@10 -- # set +x 00:33:29.448 18:20:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:29.448 18:20:16 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:33:29.448 18:20:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:29.448 18:20:16 -- common/autotest_common.sh@10 -- # set +x 00:33:29.448 18:20:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:29.448 18:20:16 -- target/dif.sh@45 -- # for sub in "$@" 00:33:29.448 18:20:16 -- target/dif.sh@46 -- # destroy_subsystem 2 00:33:29.448 18:20:16 -- target/dif.sh@36 -- # local sub_id=2 00:33:29.448 18:20:16 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:33:29.448 18:20:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:29.448 18:20:16 -- common/autotest_common.sh@10 -- # set +x 00:33:29.448 18:20:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:29.448 18:20:16 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:33:29.448 18:20:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:29.448 18:20:16 -- common/autotest_common.sh@10 -- # set +x 00:33:29.448 18:20:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:29.448 18:20:16 -- target/dif.sh@115 -- # NULL_DIF=1 00:33:29.448 18:20:16 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:33:29.448 18:20:16 -- target/dif.sh@115 -- # numjobs=2 00:33:29.448 18:20:16 -- target/dif.sh@115 -- # iodepth=8 00:33:29.448 18:20:16 -- target/dif.sh@115 -- # runtime=5 00:33:29.448 18:20:16 -- target/dif.sh@115 -- # files=1 00:33:29.448 18:20:16 -- target/dif.sh@117 -- # create_subsystems 0 1 00:33:29.448 18:20:16 -- 
target/dif.sh@28 -- # local sub 00:33:29.448 18:20:16 -- target/dif.sh@30 -- # for sub in "$@" 00:33:29.448 18:20:16 -- target/dif.sh@31 -- # create_subsystem 0 00:33:29.448 18:20:16 -- target/dif.sh@18 -- # local sub_id=0 00:33:29.448 18:20:16 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:29.448 18:20:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:29.448 18:20:16 -- common/autotest_common.sh@10 -- # set +x 00:33:29.448 bdev_null0 00:33:29.448 18:20:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:29.448 18:20:16 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:29.448 18:20:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:29.448 18:20:16 -- common/autotest_common.sh@10 -- # set +x 00:33:29.448 18:20:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:29.448 18:20:16 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:29.448 18:20:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:29.448 18:20:16 -- common/autotest_common.sh@10 -- # set +x 00:33:29.448 18:20:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:29.448 18:20:16 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:29.448 18:20:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:29.448 18:20:16 -- common/autotest_common.sh@10 -- # set +x 00:33:29.448 [2024-04-15 18:20:16.658163] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:29.448 18:20:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:29.448 18:20:16 -- target/dif.sh@30 -- # for sub in "$@" 00:33:29.448 18:20:16 -- target/dif.sh@31 -- # create_subsystem 1 00:33:29.448 18:20:16 -- target/dif.sh@18 -- # local sub_id=1 00:33:29.448 18:20:16 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:33:29.448 18:20:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:29.448 18:20:16 -- common/autotest_common.sh@10 -- # set +x 00:33:29.448 bdev_null1 00:33:29.448 18:20:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:29.448 18:20:16 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:33:29.448 18:20:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:29.448 18:20:16 -- common/autotest_common.sh@10 -- # set +x 00:33:29.448 18:20:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:29.448 18:20:16 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:33:29.448 18:20:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:29.448 18:20:16 -- common/autotest_common.sh@10 -- # set +x 00:33:29.448 18:20:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:29.448 18:20:16 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:29.448 18:20:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:29.448 18:20:16 -- common/autotest_common.sh@10 -- # set +x 00:33:29.448 18:20:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:29.448 18:20:16 -- target/dif.sh@118 -- # fio /dev/fd/62 00:33:29.448 18:20:16 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:29.448 18:20:16 -- 
target/dif.sh@118 -- # create_json_sub_conf 0 1 00:33:29.448 18:20:16 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:29.448 18:20:16 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:33:29.448 18:20:16 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:33:29.448 18:20:16 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:29.448 18:20:16 -- common/autotest_common.sh@1325 -- # local sanitizers 00:33:29.448 18:20:16 -- target/dif.sh@82 -- # gen_fio_conf 00:33:29.448 18:20:16 -- nvmf/common.sh@521 -- # config=() 00:33:29.448 18:20:16 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:29.448 18:20:16 -- common/autotest_common.sh@1327 -- # shift 00:33:29.448 18:20:16 -- nvmf/common.sh@521 -- # local subsystem config 00:33:29.448 18:20:16 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:33:29.448 18:20:16 -- target/dif.sh@54 -- # local file 00:33:29.448 18:20:16 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:33:29.448 18:20:16 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:33:29.448 18:20:16 -- target/dif.sh@56 -- # cat 00:33:29.448 18:20:16 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:33:29.448 { 00:33:29.448 "params": { 00:33:29.448 "name": "Nvme$subsystem", 00:33:29.448 "trtype": "$TEST_TRANSPORT", 00:33:29.448 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:29.448 "adrfam": "ipv4", 00:33:29.448 "trsvcid": "$NVMF_PORT", 00:33:29.448 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:29.448 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:29.448 "hdgst": ${hdgst:-false}, 00:33:29.448 "ddgst": ${ddgst:-false} 00:33:29.448 }, 00:33:29.448 "method": "bdev_nvme_attach_controller" 00:33:29.448 } 00:33:29.448 EOF 00:33:29.448 )") 00:33:29.448 18:20:16 -- nvmf/common.sh@543 -- # cat 00:33:29.448 18:20:16 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:29.448 18:20:16 -- common/autotest_common.sh@1331 -- # grep libasan 00:33:29.448 18:20:16 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:33:29.448 18:20:16 -- target/dif.sh@72 -- # (( file = 1 )) 00:33:29.448 18:20:16 -- target/dif.sh@72 -- # (( file <= files )) 00:33:29.448 18:20:16 -- target/dif.sh@73 -- # cat 00:33:29.448 18:20:16 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:33:29.448 18:20:16 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:33:29.448 { 00:33:29.448 "params": { 00:33:29.448 "name": "Nvme$subsystem", 00:33:29.448 "trtype": "$TEST_TRANSPORT", 00:33:29.448 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:29.448 "adrfam": "ipv4", 00:33:29.448 "trsvcid": "$NVMF_PORT", 00:33:29.448 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:29.448 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:29.448 "hdgst": ${hdgst:-false}, 00:33:29.448 "ddgst": ${ddgst:-false} 00:33:29.448 }, 00:33:29.448 "method": "bdev_nvme_attach_controller" 00:33:29.448 } 00:33:29.448 EOF 00:33:29.448 )") 00:33:29.448 18:20:16 -- target/dif.sh@72 -- # (( file++ )) 00:33:29.448 18:20:16 -- nvmf/common.sh@543 -- # cat 00:33:29.448 18:20:16 -- target/dif.sh@72 -- # (( file <= files )) 00:33:29.448 18:20:16 -- nvmf/common.sh@545 -- # jq . 
00:33:29.448 18:20:16 -- nvmf/common.sh@546 -- # IFS=, 00:33:29.448 18:20:16 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:33:29.448 "params": { 00:33:29.448 "name": "Nvme0", 00:33:29.448 "trtype": "tcp", 00:33:29.448 "traddr": "10.0.0.2", 00:33:29.448 "adrfam": "ipv4", 00:33:29.448 "trsvcid": "4420", 00:33:29.448 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:29.448 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:29.448 "hdgst": false, 00:33:29.448 "ddgst": false 00:33:29.448 }, 00:33:29.448 "method": "bdev_nvme_attach_controller" 00:33:29.448 },{ 00:33:29.448 "params": { 00:33:29.448 "name": "Nvme1", 00:33:29.448 "trtype": "tcp", 00:33:29.448 "traddr": "10.0.0.2", 00:33:29.448 "adrfam": "ipv4", 00:33:29.448 "trsvcid": "4420", 00:33:29.448 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:29.448 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:29.448 "hdgst": false, 00:33:29.448 "ddgst": false 00:33:29.448 }, 00:33:29.448 "method": "bdev_nvme_attach_controller" 00:33:29.448 }' 00:33:29.448 18:20:16 -- common/autotest_common.sh@1331 -- # asan_lib= 00:33:29.448 18:20:16 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:33:29.448 18:20:16 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:33:29.448 18:20:16 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:29.448 18:20:16 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:33:29.448 18:20:16 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:33:29.448 18:20:16 -- common/autotest_common.sh@1331 -- # asan_lib= 00:33:29.448 18:20:16 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:33:29.448 18:20:16 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:29.448 18:20:16 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:29.448 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:33:29.448 ... 00:33:29.448 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:33:29.448 ... 00:33:29.448 fio-3.35 00:33:29.448 Starting 4 threads 00:33:29.448 EAL: No free 2048 kB hugepages reported on node 1 00:33:29.448 [2024-04-15 18:20:17.531767] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
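For hand-replay outside the harness: the xtrace above drives everything through rpc_cmd and two process substitutions (/dev/fd/62 carries the attach-controller JSON printed just above, /dev/fd/61 the fio job file). A minimal standalone sketch of the same sequence, assuming SPDK's scripts/rpc.py client; the $rpc variable and the nvme.json/job.fio file names are placeholders standing in for the fd redirections and are not part of this run:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Target side: one null bdev per subsystem, exported over NVMe/TCP,
# mirroring the rpc_cmd calls traced above for sub_id 0.
$rpc bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

# Initiator side: fio's spdk_bdev plugin consumes the JSON printed above.
LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf ./nvme.json ./job.fio

# Teardown, as destroy_subsystems does further down in this log.
$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
$rpc bdev_null_delete bdev_null0

The second subsystem (cnode1/bdev_null1) follows the same pattern with sub_id 1.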
00:33:29.448 [2024-04-15 18:20:17.531837] rpc.c: 167:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:33:34.711 00:33:34.711 filename0: (groupid=0, jobs=1): err= 0: pid=3477249: Mon Apr 15 18:20:22 2024 00:33:34.711 read: IOPS=1770, BW=13.8MiB/s (14.5MB/s)(69.2MiB/5004msec) 00:33:34.711 slat (nsec): min=4193, max=56984, avg=13085.81, stdev=5686.64 00:33:34.711 clat (usec): min=983, max=8274, avg=4475.46, stdev=765.47 00:33:34.711 lat (usec): min=997, max=8290, avg=4488.54, stdev=765.71 00:33:34.711 clat percentiles (usec): 00:33:34.711 | 1.00th=[ 2638], 5.00th=[ 3294], 10.00th=[ 3654], 20.00th=[ 4015], 00:33:34.711 | 30.00th=[ 4146], 40.00th=[ 4293], 50.00th=[ 4490], 60.00th=[ 4555], 00:33:34.711 | 70.00th=[ 4621], 80.00th=[ 4752], 90.00th=[ 5276], 95.00th=[ 6194], 00:33:34.711 | 99.00th=[ 6915], 99.50th=[ 7242], 99.90th=[ 7701], 99.95th=[ 7832], 00:33:34.711 | 99.99th=[ 8291] 00:33:34.711 bw ( KiB/s): min=13312, max=15632, per=25.62%, avg=14166.40, stdev=700.16, samples=10 00:33:34.711 iops : min= 1664, max= 1954, avg=1770.80, stdev=87.52, samples=10 00:33:34.711 lat (usec) : 1000=0.01% 00:33:34.711 lat (msec) : 2=0.09%, 4=18.96%, 10=80.94% 00:33:34.711 cpu : usr=94.32%, sys=5.20%, ctx=9, majf=0, minf=25 00:33:34.711 IO depths : 1=0.1%, 2=6.0%, 4=66.4%, 8=27.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:34.711 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:34.711 complete : 0=0.0%, 4=92.6%, 8=7.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:34.711 issued rwts: total=8862,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:34.711 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:34.711 filename0: (groupid=0, jobs=1): err= 0: pid=3477250: Mon Apr 15 18:20:22 2024 00:33:34.711 read: IOPS=1712, BW=13.4MiB/s (14.0MB/s)(66.9MiB/5001msec) 00:33:34.711 slat (nsec): min=4197, max=59814, avg=14794.45, stdev=7421.31 00:33:34.711 clat (usec): min=887, max=8458, avg=4625.22, stdev=697.03 00:33:34.711 lat (usec): min=902, max=8473, avg=4640.01, stdev=696.76 00:33:34.711 clat percentiles (usec): 00:33:34.711 | 1.00th=[ 3326], 5.00th=[ 3818], 10.00th=[ 3949], 20.00th=[ 4146], 00:33:34.711 | 30.00th=[ 4293], 40.00th=[ 4490], 50.00th=[ 4555], 60.00th=[ 4621], 00:33:34.711 | 70.00th=[ 4686], 80.00th=[ 4817], 90.00th=[ 5604], 95.00th=[ 6259], 00:33:34.711 | 99.00th=[ 6915], 99.50th=[ 7177], 99.90th=[ 7963], 99.95th=[ 8094], 00:33:34.711 | 99.99th=[ 8455] 00:33:34.711 bw ( KiB/s): min=13312, max=14112, per=24.86%, avg=13742.22, stdev=259.28, samples=9 00:33:34.711 iops : min= 1664, max= 1764, avg=1717.78, stdev=32.41, samples=9 00:33:34.711 lat (usec) : 1000=0.01% 00:33:34.711 lat (msec) : 2=0.02%, 4=11.50%, 10=88.46% 00:33:34.711 cpu : usr=93.98%, sys=5.52%, ctx=13, majf=0, minf=75 00:33:34.711 IO depths : 1=0.1%, 2=5.7%, 4=66.2%, 8=28.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:34.711 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:34.711 complete : 0=0.0%, 4=92.9%, 8=7.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:34.711 issued rwts: total=8562,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:34.711 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:34.711 filename1: (groupid=0, jobs=1): err= 0: pid=3477251: Mon Apr 15 18:20:22 2024 00:33:34.711 read: IOPS=1703, BW=13.3MiB/s (14.0MB/s)(66.6MiB/5003msec) 00:33:34.711 slat (nsec): min=4277, max=57801, avg=14162.64, stdev=7028.35 00:33:34.711 clat (usec): min=1038, max=9588, avg=4650.43, stdev=697.37 00:33:34.711 lat (usec): min=1053, max=9601, avg=4664.59, 
stdev=697.62 00:33:34.711 clat percentiles (usec): 00:33:34.711 | 1.00th=[ 3163], 5.00th=[ 3818], 10.00th=[ 3982], 20.00th=[ 4228], 00:33:34.711 | 30.00th=[ 4359], 40.00th=[ 4490], 50.00th=[ 4555], 60.00th=[ 4621], 00:33:34.711 | 70.00th=[ 4686], 80.00th=[ 4948], 90.00th=[ 5538], 95.00th=[ 6194], 00:33:34.711 | 99.00th=[ 7111], 99.50th=[ 7373], 99.90th=[ 8225], 99.95th=[ 8356], 00:33:34.711 | 99.99th=[ 9634] 00:33:34.711 bw ( KiB/s): min=13056, max=14304, per=24.65%, avg=13629.80, stdev=423.21, samples=10 00:33:34.711 iops : min= 1632, max= 1788, avg=1703.70, stdev=52.92, samples=10 00:33:34.711 lat (msec) : 2=0.05%, 4=10.77%, 10=89.18% 00:33:34.711 cpu : usr=94.12%, sys=5.40%, ctx=11, majf=0, minf=35 00:33:34.711 IO depths : 1=0.1%, 2=4.2%, 4=68.1%, 8=27.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:34.711 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:34.711 complete : 0=0.0%, 4=92.5%, 8=7.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:34.711 issued rwts: total=8525,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:34.711 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:34.711 filename1: (groupid=0, jobs=1): err= 0: pid=3477252: Mon Apr 15 18:20:22 2024 00:33:34.711 read: IOPS=1725, BW=13.5MiB/s (14.1MB/s)(67.4MiB/5002msec) 00:33:34.711 slat (nsec): min=4443, max=60753, avg=14669.31, stdev=6590.24 00:33:34.711 clat (usec): min=1537, max=9096, avg=4589.49, stdev=730.15 00:33:34.711 lat (usec): min=1547, max=9109, avg=4604.16, stdev=730.31 00:33:34.711 clat percentiles (usec): 00:33:34.711 | 1.00th=[ 3032], 5.00th=[ 3654], 10.00th=[ 3884], 20.00th=[ 4113], 00:33:34.711 | 30.00th=[ 4228], 40.00th=[ 4424], 50.00th=[ 4490], 60.00th=[ 4555], 00:33:34.711 | 70.00th=[ 4686], 80.00th=[ 4817], 90.00th=[ 5538], 95.00th=[ 6259], 00:33:34.711 | 99.00th=[ 6980], 99.50th=[ 7177], 99.90th=[ 7963], 99.95th=[ 8979], 00:33:34.711 | 99.99th=[ 9110] 00:33:34.711 bw ( KiB/s): min=13344, max=14192, per=25.08%, avg=13863.11, stdev=269.29, samples=9 00:33:34.711 iops : min= 1668, max= 1774, avg=1732.89, stdev=33.66, samples=9 00:33:34.711 lat (msec) : 2=0.07%, 4=13.16%, 10=86.77% 00:33:34.711 cpu : usr=94.16%, sys=5.34%, ctx=9, majf=0, minf=45 00:33:34.711 IO depths : 1=0.2%, 2=6.1%, 4=66.3%, 8=27.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:34.711 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:34.711 complete : 0=0.0%, 4=92.5%, 8=7.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:34.711 issued rwts: total=8630,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:34.711 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:34.711 00:33:34.711 Run status group 0 (all jobs): 00:33:34.711 READ: bw=54.0MiB/s (56.6MB/s), 13.3MiB/s-13.8MiB/s (14.0MB/s-14.5MB/s), io=270MiB (283MB), run=5001-5004msec 00:33:34.711 18:20:22 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:33:34.711 18:20:22 -- target/dif.sh@43 -- # local sub 00:33:34.711 18:20:22 -- target/dif.sh@45 -- # for sub in "$@" 00:33:34.711 18:20:22 -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:34.711 18:20:22 -- target/dif.sh@36 -- # local sub_id=0 00:33:34.711 18:20:22 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:34.711 18:20:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:34.711 18:20:22 -- common/autotest_common.sh@10 -- # set +x 00:33:34.711 18:20:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:34.711 18:20:22 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:34.711 18:20:22 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:33:34.711 18:20:22 -- common/autotest_common.sh@10 -- # set +x 00:33:34.711 18:20:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:34.711 18:20:22 -- target/dif.sh@45 -- # for sub in "$@" 00:33:34.711 18:20:22 -- target/dif.sh@46 -- # destroy_subsystem 1 00:33:34.711 18:20:22 -- target/dif.sh@36 -- # local sub_id=1 00:33:34.711 18:20:22 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:34.711 18:20:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:34.711 18:20:22 -- common/autotest_common.sh@10 -- # set +x 00:33:34.711 18:20:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:34.711 18:20:22 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:33:34.711 18:20:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:34.711 18:20:22 -- common/autotest_common.sh@10 -- # set +x 00:33:34.711 18:20:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:34.711 00:33:34.711 real 0m24.639s 00:33:34.711 user 4m33.447s 00:33:34.711 sys 0m7.339s 00:33:34.711 18:20:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:33:34.711 18:20:22 -- common/autotest_common.sh@10 -- # set +x 00:33:34.711 ************************************ 00:33:34.711 END TEST fio_dif_rand_params 00:33:34.711 ************************************ 00:33:34.711 18:20:23 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:33:34.711 18:20:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:33:34.711 18:20:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:33:34.711 18:20:23 -- common/autotest_common.sh@10 -- # set +x 00:33:34.711 ************************************ 00:33:34.711 START TEST fio_dif_digest 00:33:34.711 ************************************ 00:33:34.711 18:20:23 -- common/autotest_common.sh@1111 -- # fio_dif_digest 00:33:34.711 18:20:23 -- target/dif.sh@123 -- # local NULL_DIF 00:33:34.712 18:20:23 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:33:34.712 18:20:23 -- target/dif.sh@125 -- # local hdgst ddgst 00:33:34.712 18:20:23 -- target/dif.sh@127 -- # NULL_DIF=3 00:33:34.712 18:20:23 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:33:34.712 18:20:23 -- target/dif.sh@127 -- # numjobs=3 00:33:34.712 18:20:23 -- target/dif.sh@127 -- # iodepth=3 00:33:34.712 18:20:23 -- target/dif.sh@127 -- # runtime=10 00:33:34.712 18:20:23 -- target/dif.sh@128 -- # hdgst=true 00:33:34.712 18:20:23 -- target/dif.sh@128 -- # ddgst=true 00:33:34.712 18:20:23 -- target/dif.sh@130 -- # create_subsystems 0 00:33:34.712 18:20:23 -- target/dif.sh@28 -- # local sub 00:33:34.712 18:20:23 -- target/dif.sh@30 -- # for sub in "$@" 00:33:34.712 18:20:23 -- target/dif.sh@31 -- # create_subsystem 0 00:33:34.712 18:20:23 -- target/dif.sh@18 -- # local sub_id=0 00:33:34.712 18:20:23 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:33:34.712 18:20:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:34.712 18:20:23 -- common/autotest_common.sh@10 -- # set +x 00:33:34.712 bdev_null0 00:33:34.712 18:20:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:34.712 18:20:23 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:34.712 18:20:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:34.712 18:20:23 -- common/autotest_common.sh@10 -- # set +x 00:33:34.712 18:20:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:34.712 18:20:23 -- 
target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:34.712 18:20:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:34.712 18:20:23 -- common/autotest_common.sh@10 -- # set +x 00:33:34.712 18:20:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:34.712 18:20:23 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:34.712 18:20:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:34.712 18:20:23 -- common/autotest_common.sh@10 -- # set +x 00:33:34.712 [2024-04-15 18:20:23.157655] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:34.712 18:20:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:34.712 18:20:23 -- target/dif.sh@131 -- # fio /dev/fd/62 00:33:34.712 18:20:23 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:33:34.712 18:20:23 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:34.712 18:20:23 -- nvmf/common.sh@521 -- # config=() 00:33:34.712 18:20:23 -- nvmf/common.sh@521 -- # local subsystem config 00:33:34.712 18:20:23 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:33:34.712 18:20:23 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:33:34.712 { 00:33:34.712 "params": { 00:33:34.712 "name": "Nvme$subsystem", 00:33:34.712 "trtype": "$TEST_TRANSPORT", 00:33:34.712 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:34.712 "adrfam": "ipv4", 00:33:34.712 "trsvcid": "$NVMF_PORT", 00:33:34.712 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:34.712 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:34.712 "hdgst": ${hdgst:-false}, 00:33:34.712 "ddgst": ${ddgst:-false} 00:33:34.712 }, 00:33:34.712 "method": "bdev_nvme_attach_controller" 00:33:34.712 } 00:33:34.712 EOF 00:33:34.712 )") 00:33:34.712 18:20:23 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:34.712 18:20:23 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:34.712 18:20:23 -- target/dif.sh@82 -- # gen_fio_conf 00:33:34.712 18:20:23 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:33:34.712 18:20:23 -- target/dif.sh@54 -- # local file 00:33:34.712 18:20:23 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:34.712 18:20:23 -- common/autotest_common.sh@1325 -- # local sanitizers 00:33:34.712 18:20:23 -- target/dif.sh@56 -- # cat 00:33:34.712 18:20:23 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:34.712 18:20:23 -- common/autotest_common.sh@1327 -- # shift 00:33:34.712 18:20:23 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:33:34.712 18:20:23 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:33:34.712 18:20:23 -- nvmf/common.sh@543 -- # cat 00:33:34.712 18:20:23 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:34.712 18:20:23 -- target/dif.sh@72 -- # (( file = 1 )) 00:33:34.712 18:20:23 -- target/dif.sh@72 -- # (( file <= files )) 00:33:34.712 18:20:23 -- common/autotest_common.sh@1331 -- # grep libasan 00:33:34.712 18:20:23 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:33:34.712 18:20:23 -- nvmf/common.sh@545 -- # jq . 
00:33:34.712 18:20:23 -- nvmf/common.sh@546 -- # IFS=, 00:33:34.712 18:20:23 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:33:34.712 "params": { 00:33:34.712 "name": "Nvme0", 00:33:34.712 "trtype": "tcp", 00:33:34.712 "traddr": "10.0.0.2", 00:33:34.712 "adrfam": "ipv4", 00:33:34.712 "trsvcid": "4420", 00:33:34.712 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:34.712 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:34.712 "hdgst": true, 00:33:34.712 "ddgst": true 00:33:34.712 }, 00:33:34.712 "method": "bdev_nvme_attach_controller" 00:33:34.712 }' 00:33:34.712 18:20:23 -- common/autotest_common.sh@1331 -- # asan_lib= 00:33:34.712 18:20:23 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:33:34.712 18:20:23 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:33:34.712 18:20:23 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:34.712 18:20:23 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:33:34.712 18:20:23 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:33:34.712 18:20:23 -- common/autotest_common.sh@1331 -- # asan_lib= 00:33:34.712 18:20:23 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:33:34.712 18:20:23 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:34.712 18:20:23 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:34.712 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:33:34.712 ... 00:33:34.712 fio-3.35 00:33:34.712 Starting 3 threads 00:33:34.712 EAL: No free 2048 kB hugepages reported on node 1 00:33:34.970 [2024-04-15 18:20:23.891434] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
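The digest run differs from the random-params run only in its bdev and transport settings: the null bdev is created with --dif-type 3, and header/data digests are negotiated at the NVMe/TCP level through the "hdgst": true / "ddgst": true fields of the JSON printed above, not through fio options. A hedged sketch of a job file matching the parameters dif.sh set for this run (bs=128k,128k,128k, numjobs=3, iodepth=3, runtime=10); the real file travels over /dev/fd/61 and is never echoed into the log, and filename=Nvme0n1 is an assumed bdev name derived from the "Nvme0" controller in the JSON:

# Write a hypothetical equivalent of the gen_fio_conf output to disk.
cat > job.fio <<EOF
[global]
ioengine=spdk_bdev
thread=1
time_based=1
runtime=10

[filename0]
rw=randread
bs=128k
iodepth=3
numjobs=3
filename=Nvme0n1
EOF

This lines up with the fio banner below: rw=randread, 128KiB block size across (R)/(W)/(T), iodepth=3, and "Starting 3 threads".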
00:33:34.970 [2024-04-15 18:20:23.891519] rpc.c: 167:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:33:47.163 00:33:47.163 filename0: (groupid=0, jobs=1): err= 0: pid=3478130: Mon Apr 15 18:20:34 2024 00:33:47.163 read: IOPS=192, BW=24.1MiB/s (25.2MB/s)(242MiB/10046msec) 00:33:47.163 slat (nsec): min=4781, max=39091, avg=20407.73, stdev=3680.36 00:33:47.163 clat (usec): min=9537, max=55937, avg=15536.14, stdev=2785.67 00:33:47.163 lat (usec): min=9558, max=55957, avg=15556.55, stdev=2785.67 00:33:47.163 clat percentiles (usec): 00:33:47.163 | 1.00th=[12518], 5.00th=[13566], 10.00th=[13829], 20.00th=[14353], 00:33:47.163 | 30.00th=[14746], 40.00th=[15139], 50.00th=[15401], 60.00th=[15664], 00:33:47.163 | 70.00th=[15926], 80.00th=[16319], 90.00th=[16909], 95.00th=[17433], 00:33:47.163 | 99.00th=[18482], 99.50th=[19792], 99.90th=[55313], 99.95th=[55837], 00:33:47.163 | 99.99th=[55837] 00:33:47.163 bw ( KiB/s): min=22272, max=26624, per=34.23%, avg=24716.80, stdev=1001.78, samples=20 00:33:47.163 iops : min= 174, max= 208, avg=193.10, stdev= 7.83, samples=20 00:33:47.163 lat (msec) : 10=0.10%, 20=99.48%, 100=0.41% 00:33:47.163 cpu : usr=94.85%, sys=4.66%, ctx=29, majf=0, minf=85 00:33:47.163 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:47.163 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:47.163 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:47.163 issued rwts: total=1934,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:47.163 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:47.163 filename0: (groupid=0, jobs=1): err= 0: pid=3478131: Mon Apr 15 18:20:34 2024 00:33:47.163 read: IOPS=187, BW=23.4MiB/s (24.5MB/s)(235MiB/10050msec) 00:33:47.163 slat (usec): min=5, max=175, avg=21.75, stdev= 8.14 00:33:47.163 clat (usec): min=9077, max=57425, avg=15985.97, stdev=2375.81 00:33:47.163 lat (usec): min=9103, max=57462, avg=16007.72, stdev=2375.97 00:33:47.163 clat percentiles (usec): 00:33:47.163 | 1.00th=[11469], 5.00th=[13960], 10.00th=[14484], 20.00th=[15008], 00:33:47.163 | 30.00th=[15401], 40.00th=[15664], 50.00th=[15926], 60.00th=[16188], 00:33:47.163 | 70.00th=[16450], 80.00th=[16909], 90.00th=[17433], 95.00th=[17957], 00:33:47.163 | 99.00th=[19006], 99.50th=[19530], 99.90th=[56886], 99.95th=[57410], 00:33:47.163 | 99.99th=[57410] 00:33:47.163 bw ( KiB/s): min=22016, max=25344, per=33.28%, avg=24028.00, stdev=811.70, samples=20 00:33:47.163 iops : min= 172, max= 198, avg=187.70, stdev= 6.33, samples=20 00:33:47.163 lat (msec) : 10=0.05%, 20=99.63%, 50=0.05%, 100=0.27% 00:33:47.163 cpu : usr=88.42%, sys=8.04%, ctx=686, majf=0, minf=170 00:33:47.163 IO depths : 1=0.7%, 2=99.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:47.163 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:47.163 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:47.163 issued rwts: total=1880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:47.163 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:47.163 filename0: (groupid=0, jobs=1): err= 0: pid=3478132: Mon Apr 15 18:20:34 2024 00:33:47.163 read: IOPS=184, BW=23.1MiB/s (24.2MB/s)(232MiB/10051msec) 00:33:47.163 slat (nsec): min=4873, max=47641, avg=18255.01, stdev=4023.94 00:33:47.163 clat (usec): min=10451, max=51487, avg=16200.74, stdev=1729.28 00:33:47.163 lat (usec): min=10471, max=51504, avg=16218.99, stdev=1729.39 00:33:47.163 clat percentiles (usec): 
00:33:47.163 | 1.00th=[11863], 5.00th=[14222], 10.00th=[14746], 20.00th=[15270], 00:33:47.163 | 30.00th=[15664], 40.00th=[15926], 50.00th=[16188], 60.00th=[16450], 00:33:47.163 | 70.00th=[16909], 80.00th=[17171], 90.00th=[17695], 95.00th=[18220], 00:33:47.163 | 99.00th=[19006], 99.50th=[19530], 99.90th=[51643], 99.95th=[51643], 00:33:47.163 | 99.99th=[51643] 00:33:47.163 bw ( KiB/s): min=23040, max=24832, per=32.85%, avg=23718.40, stdev=449.39, samples=20 00:33:47.163 iops : min= 180, max= 194, avg=185.30, stdev= 3.51, samples=20 00:33:47.163 lat (msec) : 20=99.68%, 50=0.22%, 100=0.11% 00:33:47.163 cpu : usr=94.85%, sys=4.66%, ctx=25, majf=0, minf=99 00:33:47.163 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:47.163 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:47.163 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:47.163 issued rwts: total=1856,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:47.163 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:47.163 00:33:47.163 Run status group 0 (all jobs): 00:33:47.163 READ: bw=70.5MiB/s (73.9MB/s), 23.1MiB/s-24.1MiB/s (24.2MB/s-25.2MB/s), io=709MiB (743MB), run=10046-10051msec 00:33:47.163 18:20:34 -- target/dif.sh@132 -- # destroy_subsystems 0 00:33:47.163 18:20:34 -- target/dif.sh@43 -- # local sub 00:33:47.163 18:20:34 -- target/dif.sh@45 -- # for sub in "$@" 00:33:47.163 18:20:34 -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:47.163 18:20:34 -- target/dif.sh@36 -- # local sub_id=0 00:33:47.163 18:20:34 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:47.163 18:20:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:47.163 18:20:34 -- common/autotest_common.sh@10 -- # set +x 00:33:47.163 18:20:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:47.163 18:20:34 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:47.163 18:20:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:47.163 18:20:34 -- common/autotest_common.sh@10 -- # set +x 00:33:47.163 18:20:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:47.163 00:33:47.163 real 0m11.186s 00:33:47.163 user 0m29.018s 00:33:47.163 sys 0m2.029s 00:33:47.163 18:20:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:33:47.163 18:20:34 -- common/autotest_common.sh@10 -- # set +x 00:33:47.163 ************************************ 00:33:47.163 END TEST fio_dif_digest 00:33:47.163 ************************************ 00:33:47.163 18:20:34 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:33:47.163 18:20:34 -- target/dif.sh@147 -- # nvmftestfini 00:33:47.163 18:20:34 -- nvmf/common.sh@477 -- # nvmfcleanup 00:33:47.163 18:20:34 -- nvmf/common.sh@117 -- # sync 00:33:47.163 18:20:34 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:47.163 18:20:34 -- nvmf/common.sh@120 -- # set +e 00:33:47.163 18:20:34 -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:47.163 18:20:34 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:47.163 rmmod nvme_tcp 00:33:47.163 rmmod nvme_fabrics 00:33:47.163 rmmod nvme_keyring 00:33:47.163 18:20:34 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:47.163 18:20:34 -- nvmf/common.sh@124 -- # set -e 00:33:47.163 18:20:34 -- nvmf/common.sh@125 -- # return 0 00:33:47.163 18:20:34 -- nvmf/common.sh@478 -- # '[' -n 3472005 ']' 00:33:47.163 18:20:34 -- nvmf/common.sh@479 -- # killprocess 3472005 00:33:47.163 18:20:34 -- common/autotest_common.sh@936 -- # '[' -z 
3472005 ']' 00:33:47.163 18:20:34 -- common/autotest_common.sh@940 -- # kill -0 3472005 00:33:47.163 18:20:34 -- common/autotest_common.sh@941 -- # uname 00:33:47.163 18:20:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:33:47.163 18:20:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3472005 00:33:47.163 18:20:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:33:47.163 18:20:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:33:47.163 18:20:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3472005' 00:33:47.163 killing process with pid 3472005 00:33:47.163 18:20:34 -- common/autotest_common.sh@955 -- # kill 3472005 00:33:47.163 18:20:34 -- common/autotest_common.sh@960 -- # wait 3472005 00:33:47.163 18:20:34 -- nvmf/common.sh@481 -- # '[' iso == iso ']' 00:33:47.163 18:20:34 -- nvmf/common.sh@482 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:47.163 Waiting for block devices as requested 00:33:47.163 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:33:47.163 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:33:47.163 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:33:47.163 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:33:47.163 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:33:47.163 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:33:47.421 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:33:47.421 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:33:47.421 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:33:47.421 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:33:47.679 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:33:47.679 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:33:47.679 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:33:47.679 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:33:47.937 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:33:47.937 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:33:47.937 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:33:48.195 18:20:36 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:33:48.195 18:20:36 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:33:48.195 18:20:36 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:48.195 18:20:36 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:48.195 18:20:36 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:48.195 18:20:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:48.195 18:20:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:50.095 18:20:38 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:50.095 00:33:50.095 real 1m7.618s 00:33:50.095 user 6m30.478s 00:33:50.095 sys 0m19.508s 00:33:50.095 18:20:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:33:50.095 18:20:38 -- common/autotest_common.sh@10 -- # set +x 00:33:50.095 ************************************ 00:33:50.095 END TEST nvmf_dif 00:33:50.095 ************************************ 00:33:50.095 18:20:38 -- spdk/autotest.sh@291 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:33:50.095 18:20:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:33:50.095 18:20:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:33:50.095 18:20:38 -- common/autotest_common.sh@10 -- # set +x 00:33:50.354 ************************************ 00:33:50.354 START TEST nvmf_abort_qd_sizes 00:33:50.354 ************************************ 
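The nvmfcleanup teardown traced above (nvmf/common.sh@120-125) wraps the module unload in a retry loop, since nvme-tcp can stay pinned for a moment while connections drain; the bare "rmmod" lines are modprobe -v reporting what it removed. A sketch of the pattern, with the loop body and delay filled in as assumptions (the trace only shows the loop header and the two modprobe calls):

  set +e                                 # unload failures are retried, not fatal
  for i in {1..20}; do
      modprobe -v -r nvme-tcp && break   # -r also drops now-unused deps (nvme_fabrics, nvme_keyring)
      sleep 1                            # assumed back-off between attempts
  done
  modprobe -v -r nvme-fabrics
  set -e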
00:33:50.354 18:20:39 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:33:50.354 * Looking for test storage... 00:33:50.354 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:50.354 18:20:39 -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:50.354 18:20:39 -- nvmf/common.sh@7 -- # uname -s 00:33:50.354 18:20:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:50.354 18:20:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:50.354 18:20:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:50.354 18:20:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:50.354 18:20:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:50.354 18:20:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:50.354 18:20:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:50.354 18:20:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:50.354 18:20:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:50.354 18:20:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:50.354 18:20:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:33:50.354 18:20:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:33:50.354 18:20:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:50.354 18:20:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:50.354 18:20:39 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:50.354 18:20:39 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:50.354 18:20:39 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:50.354 18:20:39 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:50.354 18:20:39 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:50.354 18:20:39 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:50.354 18:20:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:50.354 18:20:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:50.354 18:20:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:50.354 18:20:39 -- paths/export.sh@5 -- # export PATH 00:33:50.354 18:20:39 
-- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:50.354 18:20:39 -- nvmf/common.sh@47 -- # : 0 00:33:50.354 18:20:39 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:50.354 18:20:39 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:50.354 18:20:39 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:50.354 18:20:39 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:50.354 18:20:39 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:50.354 18:20:39 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:50.354 18:20:39 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:50.354 18:20:39 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:50.354 18:20:39 -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:33:50.354 18:20:39 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:33:50.354 18:20:39 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:50.354 18:20:39 -- nvmf/common.sh@437 -- # prepare_net_devs 00:33:50.354 18:20:39 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:33:50.354 18:20:39 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:33:50.354 18:20:39 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:50.354 18:20:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:50.354 18:20:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:50.354 18:20:39 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:33:50.354 18:20:39 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:33:50.354 18:20:39 -- nvmf/common.sh@285 -- # xtrace_disable 00:33:50.354 18:20:39 -- common/autotest_common.sh@10 -- # set +x 00:33:52.253 18:20:41 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:33:52.253 18:20:41 -- nvmf/common.sh@291 -- # pci_devs=() 00:33:52.253 18:20:41 -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:52.253 18:20:41 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:52.253 18:20:41 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:52.253 18:20:41 -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:52.253 18:20:41 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:52.253 18:20:41 -- nvmf/common.sh@295 -- # net_devs=() 00:33:52.253 18:20:41 -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:52.253 18:20:41 -- nvmf/common.sh@296 -- # e810=() 00:33:52.253 18:20:41 -- nvmf/common.sh@296 -- # local -ga e810 00:33:52.253 18:20:41 -- nvmf/common.sh@297 -- # x722=() 00:33:52.254 18:20:41 -- nvmf/common.sh@297 -- # local -ga x722 00:33:52.254 18:20:41 -- nvmf/common.sh@298 -- # mlx=() 00:33:52.254 18:20:41 -- nvmf/common.sh@298 -- # local -ga mlx 00:33:52.254 18:20:41 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:52.254 18:20:41 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:52.254 18:20:41 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:52.254 18:20:41 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:52.254 18:20:41 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:52.254 18:20:41 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:52.254 
18:20:41 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:52.254 18:20:41 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:52.254 18:20:41 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:52.254 18:20:41 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:52.254 18:20:41 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:52.254 18:20:41 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:52.254 18:20:41 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:52.254 18:20:41 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:52.254 18:20:41 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:52.254 18:20:41 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:52.254 18:20:41 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:52.254 18:20:41 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:52.254 18:20:41 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:33:52.254 Found 0000:84:00.0 (0x8086 - 0x159b) 00:33:52.254 18:20:41 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:52.254 18:20:41 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:52.254 18:20:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:52.254 18:20:41 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:52.254 18:20:41 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:52.254 18:20:41 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:52.254 18:20:41 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:33:52.254 Found 0000:84:00.1 (0x8086 - 0x159b) 00:33:52.254 18:20:41 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:52.254 18:20:41 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:52.254 18:20:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:52.254 18:20:41 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:52.254 18:20:41 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:52.254 18:20:41 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:52.254 18:20:41 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:52.254 18:20:41 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:52.254 18:20:41 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:52.254 18:20:41 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:52.254 18:20:41 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:33:52.254 18:20:41 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:52.254 18:20:41 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:33:52.254 Found net devices under 0000:84:00.0: cvl_0_0 00:33:52.254 18:20:41 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:33:52.254 18:20:41 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:52.254 18:20:41 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:52.254 18:20:41 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:33:52.254 18:20:41 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:52.254 18:20:41 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:33:52.254 Found net devices under 0000:84:00.1: cvl_0_1 00:33:52.254 18:20:41 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:33:52.254 18:20:41 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:33:52.254 18:20:41 -- nvmf/common.sh@403 -- # is_hw=yes 00:33:52.254 18:20:41 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 
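The NIC discovery above is pure sysfs: a PCI network function advertises its Linux interface name under /sys/bus/pci/devices/<bdf>/net/, which is exactly how the script maps 0000:84:00.0 and 0000:84:00.1 to cvl_0_0 and cvl_0_1. A minimal standalone sketch of the same lookup:

  for pci in 0000:84:00.0 0000:84:00.1; do        # example BDFs from this run
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
      [[ -e ${pci_net_devs[0]} ]] || continue     # no netdev bound to this function
      echo "Found net devices under $pci: ${pci_net_devs[*]##*/}"
  done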
00:33:52.254 18:20:41 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]]
00:33:52.254 18:20:41 -- nvmf/common.sh@407 -- # nvmf_tcp_init
00:33:52.254 18:20:41 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:33:52.254 18:20:41 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:33:52.254 18:20:41 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:33:52.254 18:20:41 -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:33:52.254 18:20:41 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:33:52.254 18:20:41 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:33:52.254 18:20:41 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:33:52.254 18:20:41 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:33:52.254 18:20:41 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:33:52.254 18:20:41 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:33:52.254 18:20:41 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:33:52.254 18:20:41 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:33:52.254 18:20:41 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:33:52.512 18:20:41 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:33:52.512 18:20:41 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:33:52.512 18:20:41 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:33:52.512 18:20:41 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:33:52.512 18:20:41 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:33:52.512 18:20:41 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:33:52.512 18:20:41 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:33:52.512 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:33:52.512 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.157 ms
00:33:52.512
00:33:52.512 --- 10.0.0.2 ping statistics ---
00:33:52.512 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:33:52.512 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms
00:33:52.512 18:20:41 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:33:52.512 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:33:52.512 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms
00:33:52.512
00:33:52.512 --- 10.0.0.1 ping statistics ---
00:33:52.512 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:33:52.512 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms
00:33:52.512 18:20:41 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:33:52.512 18:20:41 -- nvmf/common.sh@411 -- # return 0
00:33:52.512 18:20:41 -- nvmf/common.sh@439 -- # '[' iso == iso ']'
00:33:52.512 18:20:41 -- nvmf/common.sh@440 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:33:53.886 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci
00:33:53.886 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci
00:33:53.886 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci
00:33:53.886 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci
00:33:53.886 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci
00:33:53.886 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci
00:33:53.886 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci
00:33:53.886 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci
00:33:53.886 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci
00:33:53.886 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci
00:33:53.886 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci
00:33:53.886 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci
00:33:53.886 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci
00:33:53.886 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci
00:33:53.886 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci
00:33:53.886 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci
00:33:54.822 0000:82:00.0 (8086 0a54): nvme -> vfio-pci
00:33:54.822 18:20:43 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:33:54.822 18:20:43 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]]
00:33:54.822 18:20:43 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]]
00:33:54.822 18:20:43 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:33:54.822 18:20:43 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']'
00:33:54.822 18:20:43 -- nvmf/common.sh@463 -- # modprobe nvme-tcp
00:33:54.822 18:20:43 -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf
00:33:54.822 18:20:43 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:33:54.822 18:20:43 -- common/autotest_common.sh@710 -- # xtrace_disable
00:33:54.822 18:20:43 -- common/autotest_common.sh@10 -- # set +x
00:33:54.822 18:20:43 -- nvmf/common.sh@470 -- # nvmfpid=3482944
00:33:54.822 18:20:43 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf
00:33:54.822 18:20:43 -- nvmf/common.sh@471 -- # waitforlisten 3482944
00:33:54.822 18:20:43 -- common/autotest_common.sh@817 -- # '[' -z 3482944 ']'
00:33:54.822 18:20:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:33:54.822 18:20:43 -- common/autotest_common.sh@822 -- # local max_retries=100
00:33:54.822 18:20:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:33:54.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:33:54.822 18:20:43 -- common/autotest_common.sh@826 -- # xtrace_disable
00:33:54.822 18:20:43 -- common/autotest_common.sh@10 -- # set +x
00:33:54.822 [2024-04-15 18:20:43.639808] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization...
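The block above is the whole point-to-point fixture: nvmf_tcp_init pushes the target port (cvl_0_0, 10.0.0.2) into the cvl_0_0_ns_spdk namespace, keeps the initiator port (cvl_0_1, 10.0.0.1) in the root namespace, opens TCP/4420, and sanity-checks both directions with ping before the target starts. Condensed from the trace, the sequence is:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                            # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1              # target -> initiator

Launching the target as "ip netns exec cvl_0_0_ns_spdk nvmf_tgt ..." (nvmf/common.sh@469 above) is what forces the test I/O onto the physical wire instead of loopback.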
00:33:54.822 [2024-04-15 18:20:43.639892] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:54.822 EAL: No free 2048 kB hugepages reported on node 1 00:33:54.822 [2024-04-15 18:20:43.717025] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:55.088 [2024-04-15 18:20:43.812696] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:55.088 [2024-04-15 18:20:43.812760] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:55.088 [2024-04-15 18:20:43.812779] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:55.088 [2024-04-15 18:20:43.812794] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:55.088 [2024-04-15 18:20:43.812808] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:55.088 [2024-04-15 18:20:43.812891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:55.088 [2024-04-15 18:20:43.812944] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:33:55.088 [2024-04-15 18:20:43.812975] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:33:55.088 [2024-04-15 18:20:43.812978] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:55.088 18:20:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:33:55.088 18:20:43 -- common/autotest_common.sh@850 -- # return 0 00:33:55.088 18:20:43 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:33:55.088 18:20:43 -- common/autotest_common.sh@716 -- # xtrace_disable 00:33:55.088 18:20:43 -- common/autotest_common.sh@10 -- # set +x 00:33:55.088 18:20:43 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:55.088 18:20:43 -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:33:55.088 18:20:43 -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:33:55.088 18:20:43 -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:33:55.088 18:20:43 -- scripts/common.sh@309 -- # local bdf bdfs 00:33:55.088 18:20:43 -- scripts/common.sh@310 -- # local nvmes 00:33:55.088 18:20:43 -- scripts/common.sh@312 -- # [[ -n 0000:82:00.0 ]] 00:33:55.088 18:20:43 -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:33:55.088 18:20:43 -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:33:55.088 18:20:43 -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:82:00.0 ]] 00:33:55.088 18:20:43 -- scripts/common.sh@320 -- # uname -s 00:33:55.088 18:20:43 -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:33:55.088 18:20:43 -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:33:55.088 18:20:43 -- scripts/common.sh@325 -- # (( 1 )) 00:33:55.088 18:20:43 -- scripts/common.sh@326 -- # printf '%s\n' 0000:82:00.0 00:33:55.088 18:20:43 -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:33:55.088 18:20:43 -- target/abort_qd_sizes.sh@78 -- # nvme=0000:82:00.0 00:33:55.088 18:20:43 -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:33:55.088 18:20:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:33:55.088 18:20:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:33:55.088 18:20:43 -- 
common/autotest_common.sh@10 -- # set +x 00:33:55.388 ************************************ 00:33:55.388 START TEST spdk_target_abort 00:33:55.388 ************************************ 00:33:55.388 18:20:44 -- common/autotest_common.sh@1111 -- # spdk_target 00:33:55.388 18:20:44 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:33:55.388 18:20:44 -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:82:00.0 -b spdk_target 00:33:55.388 18:20:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:55.388 18:20:44 -- common/autotest_common.sh@10 -- # set +x 00:33:58.685 spdk_targetn1 00:33:58.685 18:20:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:58.685 18:20:46 -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:58.685 18:20:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:58.685 18:20:46 -- common/autotest_common.sh@10 -- # set +x 00:33:58.685 [2024-04-15 18:20:46.940912] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:58.685 18:20:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:58.685 18:20:46 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:33:58.685 18:20:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:58.685 18:20:46 -- common/autotest_common.sh@10 -- # set +x 00:33:58.685 18:20:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:58.685 18:20:46 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:33:58.685 18:20:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:58.685 18:20:46 -- common/autotest_common.sh@10 -- # set +x 00:33:58.685 18:20:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:58.685 18:20:46 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:33:58.685 18:20:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:58.685 18:20:46 -- common/autotest_common.sh@10 -- # set +x 00:33:58.685 [2024-04-15 18:20:46.973188] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:58.685 18:20:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:58.685 18:20:46 -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:33:58.685 18:20:46 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:33:58.685 18:20:46 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:33:58.685 18:20:46 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:33:58.685 18:20:46 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:33:58.685 18:20:46 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:33:58.685 18:20:46 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:33:58.685 18:20:46 -- target/abort_qd_sizes.sh@24 -- # local target r 00:33:58.685 18:20:46 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:33:58.685 18:20:46 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:58.685 18:20:46 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:33:58.685 18:20:46 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:58.685 18:20:46 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:33:58.685 18:20:46 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 
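Stripped of the xtrace noise, the spdk_target_abort bring-up is five RPCs (claim the local NVMe drive as a bdev, create the TCP transport, create the subsystem, add the namespace, add the listener), after which rabort assembles the connection string traced around this point and sweeps the abort example over each queue depth. A condensed replay, assuming the stock rpc.py location and the target's default RPC socket, run where nvmf_tgt runs (inside the namespace here):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc bdev_nvme_attach_controller -t pcie -a 0000:82:00.0 -b spdk_target
  $rpc nvmf_create_transport -t tcp -o -u 8192    # -o disables the C2H success optimization
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420

  # rabort then loops the abort example over the three queue depths under test
  trid='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
  for qd in 4 24 64; do
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort \
          -q "$qd" -w rw -M 50 -o 4096 -r "$trid"
  done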
00:33:58.685 18:20:46 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:33:58.685 18:20:46 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:58.685 18:20:46 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:58.685 18:20:46 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:58.685 18:20:46 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:58.685 18:20:46 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:58.685 18:20:46 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:58.685 EAL: No free 2048 kB hugepages reported on node 1 00:34:01.210 Initializing NVMe Controllers 00:34:01.210 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:34:01.210 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:01.210 Initialization complete. Launching workers. 00:34:01.210 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 10527, failed: 0 00:34:01.210 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1303, failed to submit 9224 00:34:01.210 success 791, unsuccess 512, failed 0 00:34:01.210 18:20:50 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:01.210 18:20:50 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:01.467 EAL: No free 2048 kB hugepages reported on node 1 00:34:04.743 Initializing NVMe Controllers 00:34:04.743 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:34:04.743 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:04.743 Initialization complete. Launching workers. 00:34:04.743 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8576, failed: 0 00:34:04.743 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1240, failed to submit 7336 00:34:04.743 success 328, unsuccess 912, failed 0 00:34:04.743 18:20:53 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:04.743 18:20:53 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:04.743 EAL: No free 2048 kB hugepages reported on node 1 00:34:08.021 Initializing NVMe Controllers 00:34:08.021 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:34:08.021 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:08.021 Initialization complete. Launching workers. 
00:34:08.021 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31553, failed: 0 00:34:08.021 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2635, failed to submit 28918 00:34:08.021 success 527, unsuccess 2108, failed 0 00:34:08.021 18:20:56 -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:34:08.021 18:20:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:08.021 18:20:56 -- common/autotest_common.sh@10 -- # set +x 00:34:08.021 18:20:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:08.021 18:20:56 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:34:08.021 18:20:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:08.021 18:20:56 -- common/autotest_common.sh@10 -- # set +x 00:34:09.394 18:20:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:09.394 18:20:58 -- target/abort_qd_sizes.sh@61 -- # killprocess 3482944 00:34:09.394 18:20:58 -- common/autotest_common.sh@936 -- # '[' -z 3482944 ']' 00:34:09.394 18:20:58 -- common/autotest_common.sh@940 -- # kill -0 3482944 00:34:09.394 18:20:58 -- common/autotest_common.sh@941 -- # uname 00:34:09.394 18:20:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:34:09.394 18:20:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3482944 00:34:09.394 18:20:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:34:09.394 18:20:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:34:09.394 18:20:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3482944' 00:34:09.394 killing process with pid 3482944 00:34:09.394 18:20:58 -- common/autotest_common.sh@955 -- # kill 3482944 00:34:09.394 18:20:58 -- common/autotest_common.sh@960 -- # wait 3482944 00:34:09.652 00:34:09.652 real 0m14.292s 00:34:09.652 user 0m54.270s 00:34:09.652 sys 0m2.962s 00:34:09.652 18:20:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:34:09.652 18:20:58 -- common/autotest_common.sh@10 -- # set +x 00:34:09.652 ************************************ 00:34:09.652 END TEST spdk_target_abort 00:34:09.652 ************************************ 00:34:09.652 18:20:58 -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:34:09.652 18:20:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:34:09.652 18:20:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:34:09.652 18:20:58 -- common/autotest_common.sh@10 -- # set +x 00:34:09.652 ************************************ 00:34:09.652 START TEST kernel_target_abort 00:34:09.652 ************************************ 00:34:09.652 18:20:58 -- common/autotest_common.sh@1111 -- # kernel_target 00:34:09.652 18:20:58 -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:34:09.652 18:20:58 -- nvmf/common.sh@717 -- # local ip 00:34:09.652 18:20:58 -- nvmf/common.sh@718 -- # ip_candidates=() 00:34:09.652 18:20:58 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:34:09.652 18:20:58 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:09.652 18:20:58 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:09.652 18:20:58 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:34:09.652 18:20:58 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:09.652 18:20:58 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:34:09.652 18:20:58 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:34:09.652 18:20:58 -- nvmf/common.sh@731 -- # 
echo 10.0.0.1 00:34:09.652 18:20:58 -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:34:09.652 18:20:58 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:34:09.652 18:20:58 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:34:09.652 18:20:58 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:09.652 18:20:58 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:09.652 18:20:58 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:34:09.652 18:20:58 -- nvmf/common.sh@628 -- # local block nvme 00:34:09.652 18:20:58 -- nvmf/common.sh@630 -- # [[ ! -e /sys/module/nvmet ]] 00:34:09.652 18:20:58 -- nvmf/common.sh@631 -- # modprobe nvmet 00:34:09.652 18:20:58 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:34:09.652 18:20:58 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:11.025 Waiting for block devices as requested 00:34:11.025 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:34:11.025 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:34:11.025 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:34:11.025 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:34:11.025 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:34:11.284 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:34:11.284 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:34:11.284 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:34:11.284 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:34:11.542 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:34:11.542 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:34:11.542 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:34:11.800 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:34:11.800 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:34:11.800 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:34:11.800 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:34:12.059 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:34:12.059 18:21:00 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:34:12.059 18:21:00 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:34:12.059 18:21:00 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:34:12.059 18:21:00 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:34:12.059 18:21:00 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:34:12.059 18:21:00 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:34:12.059 18:21:00 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:34:12.059 18:21:00 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:34:12.059 18:21:00 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:34:12.059 No valid GPT data, bailing 00:34:12.059 18:21:00 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:34:12.059 18:21:00 -- scripts/common.sh@391 -- # pt= 00:34:12.059 18:21:00 -- scripts/common.sh@392 -- # return 1 00:34:12.059 18:21:00 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:34:12.059 18:21:00 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme0n1 ]] 00:34:12.059 18:21:00 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:12.059 18:21:00 -- nvmf/common.sh@648 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:12.059 18:21:00 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:34:12.059 18:21:00 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:34:12.059 18:21:00 -- nvmf/common.sh@656 -- # echo 1 00:34:12.059 18:21:00 -- nvmf/common.sh@657 -- # echo /dev/nvme0n1 00:34:12.059 18:21:00 -- nvmf/common.sh@658 -- # echo 1 00:34:12.059 18:21:00 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:34:12.059 18:21:00 -- nvmf/common.sh@661 -- # echo tcp 00:34:12.059 18:21:00 -- nvmf/common.sh@662 -- # echo 4420 00:34:12.059 18:21:00 -- nvmf/common.sh@663 -- # echo ipv4 00:34:12.059 18:21:00 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:34:12.059 18:21:01 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.1 -t tcp -s 4420 00:34:12.059 00:34:12.059 Discovery Log Number of Records 2, Generation counter 2 00:34:12.059 =====Discovery Log Entry 0====== 00:34:12.059 trtype: tcp 00:34:12.059 adrfam: ipv4 00:34:12.059 subtype: current discovery subsystem 00:34:12.059 treq: not specified, sq flow control disable supported 00:34:12.059 portid: 1 00:34:12.059 trsvcid: 4420 00:34:12.059 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:34:12.059 traddr: 10.0.0.1 00:34:12.059 eflags: none 00:34:12.059 sectype: none 00:34:12.059 =====Discovery Log Entry 1====== 00:34:12.059 trtype: tcp 00:34:12.059 adrfam: ipv4 00:34:12.059 subtype: nvme subsystem 00:34:12.059 treq: not specified, sq flow control disable supported 00:34:12.059 portid: 1 00:34:12.059 trsvcid: 4420 00:34:12.059 subnqn: nqn.2016-06.io.spdk:testnqn 00:34:12.059 traddr: 10.0.0.1 00:34:12.059 eflags: none 00:34:12.059 sectype: none 00:34:12.059 18:21:01 -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:34:12.059 18:21:01 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:34:12.059 18:21:01 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:34:12.059 18:21:01 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:34:12.059 18:21:01 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:34:12.059 18:21:01 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:34:12.059 18:21:01 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:34:12.059 18:21:01 -- target/abort_qd_sizes.sh@24 -- # local target r 00:34:12.059 18:21:01 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:34:12.059 18:21:01 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:12.059 18:21:01 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:34:12.059 18:21:01 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:12.059 18:21:01 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:34:12.060 18:21:01 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:12.060 18:21:01 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:34:12.060 18:21:01 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:12.060 18:21:01 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:34:12.060 18:21:01 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr 
trsvcid subnqn 00:34:12.318 18:21:01 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:12.318 18:21:01 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:12.318 18:21:01 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:12.318 EAL: No free 2048 kB hugepages reported on node 1 00:34:15.624 Initializing NVMe Controllers 00:34:15.624 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:34:15.624 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:15.624 Initialization complete. Launching workers. 00:34:15.624 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 33024, failed: 0 00:34:15.624 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 33024, failed to submit 0 00:34:15.624 success 0, unsuccess 33024, failed 0 00:34:15.624 18:21:04 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:15.624 18:21:04 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:15.624 EAL: No free 2048 kB hugepages reported on node 1 00:34:18.919 Initializing NVMe Controllers 00:34:18.919 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:34:18.919 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:18.919 Initialization complete. Launching workers. 00:34:18.919 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 65361, failed: 0 00:34:18.920 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 16490, failed to submit 48871 00:34:18.920 success 0, unsuccess 16490, failed 0 00:34:18.920 18:21:07 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:18.920 18:21:07 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:18.920 EAL: No free 2048 kB hugepages reported on node 1 00:34:21.447 Initializing NVMe Controllers 00:34:21.447 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:34:21.447 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:21.447 Initialization complete. Launching workers. 
00:34:21.447 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 63962, failed: 0 00:34:21.447 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 15958, failed to submit 48004 00:34:21.447 success 0, unsuccess 15958, failed 0 00:34:21.447 18:21:10 -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:34:21.447 18:21:10 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:34:21.447 18:21:10 -- nvmf/common.sh@675 -- # echo 0 00:34:21.447 18:21:10 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:21.447 18:21:10 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:21.447 18:21:10 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:34:21.447 18:21:10 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:21.447 18:21:10 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:34:21.447 18:21:10 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:34:21.447 18:21:10 -- nvmf/common.sh@687 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:22.823 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:34:22.823 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:34:22.823 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:34:22.823 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:34:22.823 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:34:22.823 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:34:22.823 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:34:22.823 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:34:22.823 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:34:22.823 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:34:22.823 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:34:22.823 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:34:22.823 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:34:22.823 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:34:22.823 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:34:22.823 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:34:23.757 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:34:23.757 00:34:23.757 real 0m14.090s 00:34:23.758 user 0m5.150s 00:34:23.758 sys 0m3.447s 00:34:23.758 18:21:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:34:23.758 18:21:12 -- common/autotest_common.sh@10 -- # set +x 00:34:23.758 ************************************ 00:34:23.758 END TEST kernel_target_abort 00:34:23.758 ************************************ 00:34:23.758 18:21:12 -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:34:23.758 18:21:12 -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:34:23.758 18:21:12 -- nvmf/common.sh@477 -- # nvmfcleanup 00:34:23.758 18:21:12 -- nvmf/common.sh@117 -- # sync 00:34:23.758 18:21:12 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:23.758 18:21:12 -- nvmf/common.sh@120 -- # set +e 00:34:23.758 18:21:12 -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:23.758 18:21:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:23.758 rmmod nvme_tcp 00:34:23.758 rmmod nvme_fabrics 00:34:23.758 rmmod nvme_keyring 00:34:23.758 18:21:12 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:23.758 18:21:12 -- nvmf/common.sh@124 -- # set -e 00:34:23.758 18:21:12 -- nvmf/common.sh@125 -- # return 0 00:34:23.758 18:21:12 -- nvmf/common.sh@478 -- # '[' -n 3482944 ']' 
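The kernel_target_abort leg above needs no SPDK target at all: configure_kernel_target and clean_kernel_target (nvmf/common.sh@621-@687) drive the in-kernel nvmet target through configfs, exporting the raw /dev/nvme0n1 on 10.0.0.1:4420. The bare "echo" lines in the trace hide their destination files, so the standard nvmet attribute names are filled in below as an assumption:

  nqn=nqn.2016-06.io.spdk:testnqn
  sub=/sys/kernel/config/nvmet/subsystems/$nqn
  port=/sys/kernel/config/nvmet/ports/1

  modprobe -a nvmet nvmet_tcp                     # the trace loads nvmet explicitly; both are removed at teardown
  mkdir "$sub" "$sub/namespaces/1" "$port"
  echo 1            > "$sub/attr_allow_any_host"
  echo /dev/nvme0n1 > "$sub/namespaces/1/device_path"
  echo 1            > "$sub/namespaces/1/enable"
  echo 10.0.0.1     > "$port/addr_traddr"
  echo tcp          > "$port/addr_trtype"
  echo 4420         > "$port/addr_trsvcid"
  echo ipv4         > "$port/addr_adrfam"
  ln -s "$sub" "$port/subsystems/"                # expose the subsystem on the port

  # teardown, mirroring clean_kernel_target above
  echo 0 > "$sub/namespaces/1/enable"
  rm -f "$port/subsystems/$nqn"
  rmdir "$sub/namespaces/1" "$port" "$sub"
  modprobe -r nvmet_tcp nvmet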
00:34:23.758 18:21:12 -- nvmf/common.sh@479 -- # killprocess 3482944 00:34:23.758 18:21:12 -- common/autotest_common.sh@936 -- # '[' -z 3482944 ']' 00:34:23.758 18:21:12 -- common/autotest_common.sh@940 -- # kill -0 3482944 00:34:23.758 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (3482944) - No such process 00:34:23.758 18:21:12 -- common/autotest_common.sh@963 -- # echo 'Process with pid 3482944 is not found' 00:34:23.758 Process with pid 3482944 is not found 00:34:23.758 18:21:12 -- nvmf/common.sh@481 -- # '[' iso == iso ']' 00:34:23.758 18:21:12 -- nvmf/common.sh@482 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:25.132 Waiting for block devices as requested 00:34:25.132 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:34:25.132 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:34:25.390 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:34:25.390 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:34:25.390 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:34:25.390 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:34:25.390 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:34:25.650 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:34:25.650 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:34:25.650 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:34:25.650 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:34:25.908 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:34:25.908 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:34:25.908 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:34:26.167 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:34:26.167 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:34:26.167 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:34:26.425 18:21:15 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:34:26.425 18:21:15 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:34:26.425 18:21:15 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:26.425 18:21:15 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:26.425 18:21:15 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:26.425 18:21:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:26.425 18:21:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:28.324 18:21:17 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:28.324 00:34:28.324 real 0m38.084s 00:34:28.324 user 1m1.579s 00:34:28.324 sys 0m10.026s 00:34:28.324 18:21:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:34:28.324 18:21:17 -- common/autotest_common.sh@10 -- # set +x 00:34:28.324 ************************************ 00:34:28.324 END TEST nvmf_abort_qd_sizes 00:34:28.324 ************************************ 00:34:28.324 18:21:17 -- spdk/autotest.sh@293 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:34:28.324 18:21:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:34:28.324 18:21:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:34:28.324 18:21:17 -- common/autotest_common.sh@10 -- # set +x 00:34:28.583 ************************************ 00:34:28.583 START TEST keyring_file 00:34:28.583 ************************************ 00:34:28.583 18:21:17 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:34:28.583 * Looking for test storage... 
00:34:28.583 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:34:28.583 18:21:17 -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:34:28.583 18:21:17 -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:28.583 18:21:17 -- nvmf/common.sh@7 -- # uname -s 00:34:28.583 18:21:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:28.583 18:21:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:28.583 18:21:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:28.583 18:21:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:28.583 18:21:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:28.583 18:21:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:28.583 18:21:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:28.583 18:21:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:28.583 18:21:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:28.583 18:21:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:28.583 18:21:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:34:28.583 18:21:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:34:28.583 18:21:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:28.583 18:21:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:28.583 18:21:17 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:28.583 18:21:17 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:28.583 18:21:17 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:28.583 18:21:17 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:28.583 18:21:17 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:28.583 18:21:17 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:28.583 18:21:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:28.583 18:21:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:28.583 18:21:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:28.583 18:21:17 -- paths/export.sh@5 -- # export PATH 00:34:28.583 18:21:17 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:28.583 18:21:17 -- nvmf/common.sh@47 -- # : 0 00:34:28.583 18:21:17 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:28.583 18:21:17 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:28.583 18:21:17 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:28.583 18:21:17 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:28.583 18:21:17 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:28.583 18:21:17 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:28.583 18:21:17 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:28.583 18:21:17 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:28.583 18:21:17 -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:34:28.583 18:21:17 -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:34:28.583 18:21:17 -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:34:28.583 18:21:17 -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:34:28.583 18:21:17 -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:34:28.583 18:21:17 -- keyring/file.sh@24 -- # trap cleanup EXIT 00:34:28.583 18:21:17 -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:34:28.583 18:21:17 -- keyring/common.sh@15 -- # local name key digest path 00:34:28.583 18:21:17 -- keyring/common.sh@17 -- # name=key0 00:34:28.583 18:21:17 -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:34:28.583 18:21:17 -- keyring/common.sh@17 -- # digest=0 00:34:28.583 18:21:17 -- keyring/common.sh@18 -- # mktemp 00:34:28.583 18:21:17 -- keyring/common.sh@18 -- # path=/tmp/tmp.56K3i68sEo 00:34:28.583 18:21:17 -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:34:28.583 18:21:17 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:34:28.583 18:21:17 -- nvmf/common.sh@691 -- # local prefix key digest 00:34:28.583 18:21:17 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:34:28.583 18:21:17 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:34:28.583 18:21:17 -- nvmf/common.sh@693 -- # digest=0 00:34:28.584 18:21:17 -- nvmf/common.sh@694 -- # python - 00:34:28.584 18:21:17 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.56K3i68sEo 00:34:28.584 18:21:17 -- keyring/common.sh@23 -- # echo /tmp/tmp.56K3i68sEo 00:34:28.584 18:21:17 -- keyring/file.sh@26 -- # key0path=/tmp/tmp.56K3i68sEo 00:34:28.584 18:21:17 -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:34:28.584 18:21:17 -- keyring/common.sh@15 -- # local name key digest path 00:34:28.584 18:21:17 -- keyring/common.sh@17 -- # name=key1 00:34:28.584 18:21:17 -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:34:28.584 18:21:17 -- keyring/common.sh@17 -- # digest=0 00:34:28.584 18:21:17 -- keyring/common.sh@18 -- # mktemp 00:34:28.584 18:21:17 -- keyring/common.sh@18 -- # path=/tmp/tmp.DYureLPpg0 00:34:28.584 18:21:17 -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:34:28.584 18:21:17 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 
112233445566778899aabbccddeeff00 0 00:34:28.584 18:21:17 -- nvmf/common.sh@691 -- # local prefix key digest 00:34:28.584 18:21:17 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:34:28.584 18:21:17 -- nvmf/common.sh@693 -- # key=112233445566778899aabbccddeeff00 00:34:28.584 18:21:17 -- nvmf/common.sh@693 -- # digest=0 00:34:28.584 18:21:17 -- nvmf/common.sh@694 -- # python - 00:34:28.584 18:21:17 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.DYureLPpg0 00:34:28.842 18:21:17 -- keyring/common.sh@23 -- # echo /tmp/tmp.DYureLPpg0 00:34:28.842 18:21:17 -- keyring/file.sh@27 -- # key1path=/tmp/tmp.DYureLPpg0 00:34:28.842 18:21:17 -- keyring/file.sh@30 -- # tgtpid=3489345 00:34:28.842 18:21:17 -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:34:28.842 18:21:17 -- keyring/file.sh@32 -- # waitforlisten 3489345 00:34:28.842 18:21:17 -- common/autotest_common.sh@817 -- # '[' -z 3489345 ']' 00:34:28.842 18:21:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:28.842 18:21:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:34:28.843 18:21:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:28.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:28.843 18:21:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:34:28.843 18:21:17 -- common/autotest_common.sh@10 -- # set +x 00:34:28.843 [2024-04-15 18:21:17.593984] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:34:28.843 [2024-04-15 18:21:17.594108] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3489345 ] 00:34:28.843 EAL: No free 2048 kB hugepages reported on node 1 00:34:28.843 [2024-04-15 18:21:17.664128] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:28.843 [2024-04-15 18:21:17.755693] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:29.101 18:21:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:34:29.101 18:21:18 -- common/autotest_common.sh@850 -- # return 0 00:34:29.101 18:21:18 -- keyring/file.sh@33 -- # rpc_cmd 00:34:29.101 18:21:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:29.101 18:21:18 -- common/autotest_common.sh@10 -- # set +x 00:34:29.101 [2024-04-15 18:21:18.023988] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:29.101 null0 00:34:29.359 [2024-04-15 18:21:18.056031] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:34:29.359 [2024-04-15 18:21:18.056593] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:34:29.359 [2024-04-15 18:21:18.064049] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:34:29.359 18:21:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:29.359 18:21:18 -- keyring/file.sh@43 -- # bperfpid=3489352 00:34:29.359 18:21:18 -- keyring/file.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:34:29.359 18:21:18 -- keyring/file.sh@45 -- # waitforlisten 3489352 /var/tmp/bperf.sock 00:34:29.359 18:21:18 -- common/autotest_common.sh@817 -- # '[' 
-z 3489352 ']' 00:34:29.359 18:21:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:29.359 18:21:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:34:29.359 18:21:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:29.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:29.359 18:21:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:34:29.359 18:21:18 -- common/autotest_common.sh@10 -- # set +x 00:34:29.359 [2024-04-15 18:21:18.111427] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:34:29.359 [2024-04-15 18:21:18.111501] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3489352 ] 00:34:29.359 EAL: No free 2048 kB hugepages reported on node 1 00:34:29.359 [2024-04-15 18:21:18.178439] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:29.359 [2024-04-15 18:21:18.270808] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:29.616 18:21:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:34:29.616 18:21:18 -- common/autotest_common.sh@850 -- # return 0 00:34:29.616 18:21:18 -- keyring/file.sh@46 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.56K3i68sEo 00:34:29.616 18:21:18 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.56K3i68sEo 00:34:29.873 18:21:18 -- keyring/file.sh@47 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.DYureLPpg0 00:34:29.873 18:21:18 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.DYureLPpg0 00:34:30.131 18:21:19 -- keyring/file.sh@48 -- # get_key key0 00:34:30.131 18:21:19 -- keyring/file.sh@48 -- # jq -r .path 00:34:30.131 18:21:19 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:30.131 18:21:19 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:30.131 18:21:19 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:30.697 18:21:19 -- keyring/file.sh@48 -- # [[ /tmp/tmp.56K3i68sEo == \/\t\m\p\/\t\m\p\.\5\6\K\3\i\6\8\s\E\o ]] 00:34:30.697 18:21:19 -- keyring/file.sh@49 -- # get_key key1 00:34:30.697 18:21:19 -- keyring/file.sh@49 -- # jq -r .path 00:34:30.697 18:21:19 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:30.697 18:21:19 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:34:30.697 18:21:19 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:30.955 18:21:19 -- keyring/file.sh@49 -- # [[ /tmp/tmp.DYureLPpg0 == \/\t\m\p\/\t\m\p\.\D\Y\u\r\e\L\P\p\g\0 ]] 00:34:30.955 18:21:19 -- keyring/file.sh@50 -- # get_refcnt key0 00:34:30.955 18:21:19 -- keyring/common.sh@12 -- # get_key key0 00:34:30.955 18:21:19 -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:30.955 18:21:19 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:30.955 18:21:19 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:30.955 18:21:19 -- 
keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:31.213 18:21:20 -- keyring/file.sh@50 -- # (( 1 == 1 )) 00:34:31.213 18:21:20 -- keyring/file.sh@51 -- # get_refcnt key1 00:34:31.213 18:21:20 -- keyring/common.sh@12 -- # get_key key1 00:34:31.213 18:21:20 -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:31.213 18:21:20 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:31.213 18:21:20 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:31.213 18:21:20 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:34:31.779 18:21:20 -- keyring/file.sh@51 -- # (( 1 == 1 )) 00:34:31.779 18:21:20 -- keyring/file.sh@54 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:31.779 18:21:20 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:31.779 [2024-04-15 18:21:20.712084] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:34:32.037 nvme0n1 00:34:32.037 18:21:20 -- keyring/file.sh@56 -- # get_refcnt key0 00:34:32.037 18:21:20 -- keyring/common.sh@12 -- # get_key key0 00:34:32.037 18:21:20 -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:32.037 18:21:20 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:32.037 18:21:20 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:32.037 18:21:20 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:32.696 18:21:21 -- keyring/file.sh@56 -- # (( 2 == 2 )) 00:34:32.696 18:21:21 -- keyring/file.sh@57 -- # get_refcnt key1 00:34:32.696 18:21:21 -- keyring/common.sh@12 -- # get_key key1 00:34:32.696 18:21:21 -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:32.696 18:21:21 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:32.696 18:21:21 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:32.696 18:21:21 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:34:32.953 18:21:21 -- keyring/file.sh@57 -- # (( 1 == 1 )) 00:34:32.953 18:21:21 -- keyring/file.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:32.953 Running I/O for 1 seconds... 
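Everything the bdevperf run above measures depends on the key plumbing traced before it: two PSKs were written to 0600 temp files, registered with the keyring_file module over /var/tmp/bperf.sock, and the TLS controller was attached with --psk key0, which is why that key's refcount moved from 1 to 2. A condensed replay of the flow, assuming an SPDK checkout with bdevperf already listening on the socket; the interchange layout built in the heredoc (base64 of the key bytes plus a little-endian CRC-32 behind a NVMeTLSkey-1:<digest>: prefix, with digest byte 00) is an assumption about what format_interchange_psk emits for digest 0, not spec text:

rpc() { scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }

# Build a PSK file in the NVMe TLS interchange format; the key string is
# treated as raw ASCII bytes, as the trace's format_key call appears to do.
key0path=$(mktemp)
python3 - > "$key0path" <<'PYEOF'
import base64, zlib
psk = b"00112233445566778899aabbccddeeff"
crc = zlib.crc32(psk).to_bytes(4, "little")
print("NVMeTLSkey-1:00:" + base64.b64encode(psk + crc).decode() + ":", end="")
PYEOF
chmod 0600 "$key0path"   # anything looser is rejected, as the 0660 check later shows

rpc keyring_file_add_key key0 "$key0path"
rpc keyring_get_keys | jq '.[] | select(.name == "key0").refcnt'   # 1

# Attaching over TLS takes a second reference on key0 for the session.
rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 \
    --psk key0
rpc keyring_get_keys | jq '.[] | select(.name == "key0").refcnt'   # 2

# Kick off the preconfigured workload (-q 128 -o 4k -w randrw).
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

The NOT bperf_cmd sequences that follow invert the same calls: attaching with key1 (not the PSK the listener was created with) and re-adding a group-readable key file are both expected to fail, and the wrapper treats a non-zero exit status as the pass condition.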
00:34:34.325
00:34:34.325 Latency(us)
00:34:34.325 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:34.325 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096)
00:34:34.325 nvme0n1 : 1.02 4745.57 18.54 0.00 0.00 26647.28 9903.22 34952.53
00:34:34.325 ===================================================================================================================
00:34:34.325 Total : 4745.57 18.54 0.00 0.00 26647.28 9903.22 34952.53
00:34:34.325 0
00:34:34.325 18:21:22 -- keyring/file.sh@61 -- # bperf_cmd bdev_nvme_detach_controller nvme0
00:34:34.325 18:21:22 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0
00:34:34.325 18:21:23 -- keyring/file.sh@62 -- # get_refcnt key0
00:34:34.325 18:21:23 -- keyring/common.sh@12 -- # get_key key0
00:34:34.325 18:21:23 -- keyring/common.sh@12 -- # jq -r .refcnt
00:34:34.325 18:21:23 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:34:34.325 18:21:23 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:34:34.325 18:21:23 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:34:34.583 18:21:23 -- keyring/file.sh@62 -- # (( 1 == 1 ))
00:34:34.583 18:21:23 -- keyring/file.sh@63 -- # get_refcnt key1
00:34:34.583 18:21:23 -- keyring/common.sh@12 -- # get_key key1
00:34:34.583 18:21:23 -- keyring/common.sh@12 -- # jq -r .refcnt
00:34:34.583 18:21:23 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:34:34.583 18:21:23 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:34:34.583 18:21:23 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")'
00:34:35.149 18:21:24 -- keyring/file.sh@63 -- # (( 1 == 1 ))
00:34:35.149 18:21:24 -- keyring/file.sh@66 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1
00:34:35.149 18:21:24 -- common/autotest_common.sh@638 -- # local es=0
00:34:35.149 18:21:24 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1
00:34:35.149 18:21:24 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd
00:34:35.149 18:21:24 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:34:35.149 18:21:24 -- common/autotest_common.sh@630 -- # type -t bperf_cmd
00:34:35.149 18:21:24 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:34:35.149 18:21:24 -- common/autotest_common.sh@641 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1
00:34:35.149 18:21:24 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1
00:34:35.714 [2024-04-15 18:21:24.366569] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected
00:34:35.714 [2024-04-15 18:21:24.367271]
nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1222a00 (107): Transport endpoint is not connected 00:34:35.714 [2024-04-15 18:21:24.368262] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1222a00 (9): Bad file descriptor 00:34:35.714 [2024-04-15 18:21:24.369260] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:35.714 [2024-04-15 18:21:24.369285] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:34:35.714 [2024-04-15 18:21:24.369303] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:35.714 request: 00:34:35.714 { 00:34:35.714 "name": "nvme0", 00:34:35.714 "trtype": "tcp", 00:34:35.714 "traddr": "127.0.0.1", 00:34:35.714 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:35.714 "adrfam": "ipv4", 00:34:35.714 "trsvcid": "4420", 00:34:35.714 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:35.714 "psk": "key1", 00:34:35.714 "method": "bdev_nvme_attach_controller", 00:34:35.714 "req_id": 1 00:34:35.714 } 00:34:35.714 Got JSON-RPC error response 00:34:35.714 response: 00:34:35.714 { 00:34:35.714 "code": -32602, 00:34:35.714 "message": "Invalid parameters" 00:34:35.714 } 00:34:35.714 18:21:24 -- common/autotest_common.sh@641 -- # es=1 00:34:35.714 18:21:24 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:34:35.714 18:21:24 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:34:35.714 18:21:24 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:34:35.714 18:21:24 -- keyring/file.sh@68 -- # get_refcnt key0 00:34:35.714 18:21:24 -- keyring/common.sh@12 -- # get_key key0 00:34:35.714 18:21:24 -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:35.714 18:21:24 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:35.714 18:21:24 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:35.714 18:21:24 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:35.972 18:21:24 -- keyring/file.sh@68 -- # (( 1 == 1 )) 00:34:35.972 18:21:24 -- keyring/file.sh@69 -- # get_refcnt key1 00:34:35.972 18:21:24 -- keyring/common.sh@12 -- # get_key key1 00:34:35.972 18:21:24 -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:35.972 18:21:24 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:35.972 18:21:24 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:35.972 18:21:24 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:34:36.229 18:21:25 -- keyring/file.sh@69 -- # (( 1 == 1 )) 00:34:36.229 18:21:25 -- keyring/file.sh@72 -- # bperf_cmd keyring_file_remove_key key0 00:34:36.229 18:21:25 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:34:36.487 18:21:25 -- keyring/file.sh@73 -- # bperf_cmd keyring_file_remove_key key1 00:34:36.487 18:21:25 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:34:36.744 18:21:25 -- keyring/file.sh@74 -- # bperf_cmd keyring_get_keys 00:34:36.744 18:21:25 -- keyring/file.sh@74 -- # jq length 00:34:36.744 18:21:25 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:37.002 18:21:25 
-- keyring/file.sh@74 -- # (( 0 == 0 )) 00:34:37.002 18:21:25 -- keyring/file.sh@77 -- # chmod 0660 /tmp/tmp.56K3i68sEo 00:34:37.002 18:21:25 -- keyring/file.sh@78 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.56K3i68sEo 00:34:37.002 18:21:25 -- common/autotest_common.sh@638 -- # local es=0 00:34:37.002 18:21:25 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.56K3i68sEo 00:34:37.002 18:21:25 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:34:37.002 18:21:25 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:34:37.002 18:21:25 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:34:37.002 18:21:25 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:34:37.002 18:21:25 -- common/autotest_common.sh@641 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.56K3i68sEo 00:34:37.002 18:21:25 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.56K3i68sEo 00:34:37.567 [2024-04-15 18:21:26.434555] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.56K3i68sEo': 0100660 00:34:37.567 [2024-04-15 18:21:26.434599] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:34:37.567 request: 00:34:37.567 { 00:34:37.568 "name": "key0", 00:34:37.568 "path": "/tmp/tmp.56K3i68sEo", 00:34:37.568 "method": "keyring_file_add_key", 00:34:37.568 "req_id": 1 00:34:37.568 } 00:34:37.568 Got JSON-RPC error response 00:34:37.568 response: 00:34:37.568 { 00:34:37.568 "code": -1, 00:34:37.568 "message": "Operation not permitted" 00:34:37.568 } 00:34:37.568 18:21:26 -- common/autotest_common.sh@641 -- # es=1 00:34:37.568 18:21:26 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:34:37.568 18:21:26 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:34:37.568 18:21:26 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:34:37.568 18:21:26 -- keyring/file.sh@81 -- # chmod 0600 /tmp/tmp.56K3i68sEo 00:34:37.568 18:21:26 -- keyring/file.sh@82 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.56K3i68sEo 00:34:37.568 18:21:26 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.56K3i68sEo 00:34:38.132 18:21:26 -- keyring/file.sh@83 -- # rm -f /tmp/tmp.56K3i68sEo 00:34:38.132 18:21:26 -- keyring/file.sh@85 -- # get_refcnt key0 00:34:38.132 18:21:26 -- keyring/common.sh@12 -- # get_key key0 00:34:38.132 18:21:26 -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:38.132 18:21:26 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:38.132 18:21:26 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:38.132 18:21:26 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:38.389 18:21:27 -- keyring/file.sh@85 -- # (( 1 == 1 )) 00:34:38.389 18:21:27 -- keyring/file.sh@87 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:38.389 18:21:27 -- common/autotest_common.sh@638 -- # local es=0 00:34:38.389 18:21:27 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:38.389 18:21:27 -- 
common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:34:38.389 18:21:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:34:38.389 18:21:27 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:34:38.389 18:21:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:34:38.389 18:21:27 -- common/autotest_common.sh@641 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:38.389 18:21:27 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:38.647 [2024-04-15 18:21:27.425170] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.56K3i68sEo': No such file or directory 00:34:38.647 [2024-04-15 18:21:27.425208] nvme_tcp.c:2570:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:34:38.647 [2024-04-15 18:21:27.425242] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:34:38.647 [2024-04-15 18:21:27.425256] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:34:38.647 [2024-04-15 18:21:27.425271] bdev_nvme.c:6183:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:34:38.647 request: 00:34:38.647 { 00:34:38.647 "name": "nvme0", 00:34:38.647 "trtype": "tcp", 00:34:38.647 "traddr": "127.0.0.1", 00:34:38.647 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:38.647 "adrfam": "ipv4", 00:34:38.647 "trsvcid": "4420", 00:34:38.647 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:38.647 "psk": "key0", 00:34:38.647 "method": "bdev_nvme_attach_controller", 00:34:38.647 "req_id": 1 00:34:38.647 } 00:34:38.647 Got JSON-RPC error response 00:34:38.647 response: 00:34:38.647 { 00:34:38.647 "code": -19, 00:34:38.647 "message": "No such device" 00:34:38.647 } 00:34:38.647 18:21:27 -- common/autotest_common.sh@641 -- # es=1 00:34:38.647 18:21:27 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:34:38.647 18:21:27 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:34:38.647 18:21:27 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:34:38.647 18:21:27 -- keyring/file.sh@89 -- # bperf_cmd keyring_file_remove_key key0 00:34:38.647 18:21:27 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:34:38.904 18:21:27 -- keyring/file.sh@92 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:34:38.904 18:21:27 -- keyring/common.sh@15 -- # local name key digest path 00:34:38.904 18:21:27 -- keyring/common.sh@17 -- # name=key0 00:34:38.904 18:21:27 -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:34:38.904 18:21:27 -- keyring/common.sh@17 -- # digest=0 00:34:38.904 18:21:27 -- keyring/common.sh@18 -- # mktemp 00:34:38.904 18:21:27 -- keyring/common.sh@18 -- # path=/tmp/tmp.cMdFsXpaDi 00:34:38.904 18:21:27 -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:34:38.904 18:21:27 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:34:38.904 18:21:27 -- nvmf/common.sh@691 -- # local prefix key digest 00:34:38.904 18:21:27 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:34:38.904 18:21:27 -- 
nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:34:38.904 18:21:27 -- nvmf/common.sh@693 -- # digest=0 00:34:38.904 18:21:27 -- nvmf/common.sh@694 -- # python - 00:34:38.904 18:21:27 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.cMdFsXpaDi 00:34:38.904 18:21:27 -- keyring/common.sh@23 -- # echo /tmp/tmp.cMdFsXpaDi 00:34:38.904 18:21:27 -- keyring/file.sh@92 -- # key0path=/tmp/tmp.cMdFsXpaDi 00:34:38.904 18:21:27 -- keyring/file.sh@93 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.cMdFsXpaDi 00:34:38.904 18:21:27 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.cMdFsXpaDi 00:34:39.160 18:21:28 -- keyring/file.sh@94 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:39.160 18:21:28 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:39.725 nvme0n1 00:34:39.725 18:21:28 -- keyring/file.sh@96 -- # get_refcnt key0 00:34:39.725 18:21:28 -- keyring/common.sh@12 -- # get_key key0 00:34:39.725 18:21:28 -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:39.725 18:21:28 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:39.725 18:21:28 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:39.725 18:21:28 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:39.982 18:21:28 -- keyring/file.sh@96 -- # (( 2 == 2 )) 00:34:39.982 18:21:28 -- keyring/file.sh@97 -- # bperf_cmd keyring_file_remove_key key0 00:34:39.982 18:21:28 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:34:40.547 18:21:29 -- keyring/file.sh@98 -- # get_key key0 00:34:40.547 18:21:29 -- keyring/file.sh@98 -- # jq -r .removed 00:34:40.547 18:21:29 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:40.547 18:21:29 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:40.547 18:21:29 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:41.113 18:21:29 -- keyring/file.sh@98 -- # [[ true == \t\r\u\e ]] 00:34:41.113 18:21:29 -- keyring/file.sh@99 -- # get_refcnt key0 00:34:41.113 18:21:29 -- keyring/common.sh@12 -- # get_key key0 00:34:41.113 18:21:29 -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:41.113 18:21:29 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:41.113 18:21:29 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:41.113 18:21:29 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:41.371 18:21:30 -- keyring/file.sh@99 -- # (( 1 == 1 )) 00:34:41.371 18:21:30 -- keyring/file.sh@100 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:34:41.371 18:21:30 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:34:41.935 18:21:30 -- keyring/file.sh@101 -- # bperf_cmd keyring_get_keys 00:34:41.935 18:21:30 -- keyring/file.sh@101 -- # jq length 00:34:41.936 18:21:30 
-- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:42.500 18:21:31 -- keyring/file.sh@101 -- # (( 0 == 0 )) 00:34:42.500 18:21:31 -- keyring/file.sh@104 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.cMdFsXpaDi 00:34:42.500 18:21:31 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.cMdFsXpaDi 00:34:42.758 18:21:31 -- keyring/file.sh@105 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.DYureLPpg0 00:34:42.758 18:21:31 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.DYureLPpg0 00:34:43.323 18:21:32 -- keyring/file.sh@106 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:43.323 18:21:32 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:43.581 nvme0n1 00:34:43.581 18:21:32 -- keyring/file.sh@109 -- # bperf_cmd save_config 00:34:43.581 18:21:32 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:34:44.147 18:21:32 -- keyring/file.sh@109 -- # config='{ 00:34:44.147 "subsystems": [ 00:34:44.147 { 00:34:44.147 "subsystem": "keyring", 00:34:44.147 "config": [ 00:34:44.147 { 00:34:44.147 "method": "keyring_file_add_key", 00:34:44.147 "params": { 00:34:44.147 "name": "key0", 00:34:44.147 "path": "/tmp/tmp.cMdFsXpaDi" 00:34:44.147 } 00:34:44.147 }, 00:34:44.147 { 00:34:44.147 "method": "keyring_file_add_key", 00:34:44.147 "params": { 00:34:44.147 "name": "key1", 00:34:44.147 "path": "/tmp/tmp.DYureLPpg0" 00:34:44.147 } 00:34:44.147 } 00:34:44.147 ] 00:34:44.147 }, 00:34:44.147 { 00:34:44.147 "subsystem": "iobuf", 00:34:44.147 "config": [ 00:34:44.147 { 00:34:44.147 "method": "iobuf_set_options", 00:34:44.147 "params": { 00:34:44.147 "small_pool_count": 8192, 00:34:44.147 "large_pool_count": 1024, 00:34:44.147 "small_bufsize": 8192, 00:34:44.147 "large_bufsize": 135168 00:34:44.147 } 00:34:44.147 } 00:34:44.147 ] 00:34:44.147 }, 00:34:44.147 { 00:34:44.147 "subsystem": "sock", 00:34:44.147 "config": [ 00:34:44.147 { 00:34:44.147 "method": "sock_impl_set_options", 00:34:44.147 "params": { 00:34:44.147 "impl_name": "posix", 00:34:44.147 "recv_buf_size": 2097152, 00:34:44.147 "send_buf_size": 2097152, 00:34:44.147 "enable_recv_pipe": true, 00:34:44.147 "enable_quickack": false, 00:34:44.147 "enable_placement_id": 0, 00:34:44.147 "enable_zerocopy_send_server": true, 00:34:44.147 "enable_zerocopy_send_client": false, 00:34:44.147 "zerocopy_threshold": 0, 00:34:44.147 "tls_version": 0, 00:34:44.147 "enable_ktls": false 00:34:44.147 } 00:34:44.147 }, 00:34:44.147 { 00:34:44.147 "method": "sock_impl_set_options", 00:34:44.147 "params": { 00:34:44.147 "impl_name": "ssl", 00:34:44.147 "recv_buf_size": 4096, 00:34:44.147 "send_buf_size": 4096, 00:34:44.147 "enable_recv_pipe": true, 00:34:44.147 "enable_quickack": false, 00:34:44.147 "enable_placement_id": 0, 00:34:44.147 "enable_zerocopy_send_server": true, 00:34:44.147 "enable_zerocopy_send_client": false, 00:34:44.147 "zerocopy_threshold": 0, 00:34:44.147 "tls_version": 
0, 00:34:44.147 "enable_ktls": false 00:34:44.147 } 00:34:44.147 } 00:34:44.147 ] 00:34:44.147 }, 00:34:44.147 { 00:34:44.147 "subsystem": "vmd", 00:34:44.147 "config": [] 00:34:44.147 }, 00:34:44.147 { 00:34:44.147 "subsystem": "accel", 00:34:44.147 "config": [ 00:34:44.147 { 00:34:44.147 "method": "accel_set_options", 00:34:44.147 "params": { 00:34:44.147 "small_cache_size": 128, 00:34:44.147 "large_cache_size": 16, 00:34:44.147 "task_count": 2048, 00:34:44.147 "sequence_count": 2048, 00:34:44.147 "buf_count": 2048 00:34:44.147 } 00:34:44.147 } 00:34:44.147 ] 00:34:44.147 }, 00:34:44.147 { 00:34:44.147 "subsystem": "bdev", 00:34:44.147 "config": [ 00:34:44.147 { 00:34:44.147 "method": "bdev_set_options", 00:34:44.147 "params": { 00:34:44.147 "bdev_io_pool_size": 65535, 00:34:44.147 "bdev_io_cache_size": 256, 00:34:44.147 "bdev_auto_examine": true, 00:34:44.147 "iobuf_small_cache_size": 128, 00:34:44.147 "iobuf_large_cache_size": 16 00:34:44.147 } 00:34:44.147 }, 00:34:44.148 { 00:34:44.148 "method": "bdev_raid_set_options", 00:34:44.148 "params": { 00:34:44.148 "process_window_size_kb": 1024 00:34:44.148 } 00:34:44.148 }, 00:34:44.148 { 00:34:44.148 "method": "bdev_iscsi_set_options", 00:34:44.148 "params": { 00:34:44.148 "timeout_sec": 30 00:34:44.148 } 00:34:44.148 }, 00:34:44.148 { 00:34:44.148 "method": "bdev_nvme_set_options", 00:34:44.148 "params": { 00:34:44.148 "action_on_timeout": "none", 00:34:44.148 "timeout_us": 0, 00:34:44.148 "timeout_admin_us": 0, 00:34:44.148 "keep_alive_timeout_ms": 10000, 00:34:44.148 "arbitration_burst": 0, 00:34:44.148 "low_priority_weight": 0, 00:34:44.148 "medium_priority_weight": 0, 00:34:44.148 "high_priority_weight": 0, 00:34:44.148 "nvme_adminq_poll_period_us": 10000, 00:34:44.148 "nvme_ioq_poll_period_us": 0, 00:34:44.148 "io_queue_requests": 512, 00:34:44.148 "delay_cmd_submit": true, 00:34:44.148 "transport_retry_count": 4, 00:34:44.148 "bdev_retry_count": 3, 00:34:44.148 "transport_ack_timeout": 0, 00:34:44.148 "ctrlr_loss_timeout_sec": 0, 00:34:44.148 "reconnect_delay_sec": 0, 00:34:44.148 "fast_io_fail_timeout_sec": 0, 00:34:44.148 "disable_auto_failback": false, 00:34:44.148 "generate_uuids": false, 00:34:44.148 "transport_tos": 0, 00:34:44.148 "nvme_error_stat": false, 00:34:44.148 "rdma_srq_size": 0, 00:34:44.148 "io_path_stat": false, 00:34:44.148 "allow_accel_sequence": false, 00:34:44.148 "rdma_max_cq_size": 0, 00:34:44.148 "rdma_cm_event_timeout_ms": 0, 00:34:44.148 "dhchap_digests": [ 00:34:44.148 "sha256", 00:34:44.148 "sha384", 00:34:44.148 "sha512" 00:34:44.148 ], 00:34:44.148 "dhchap_dhgroups": [ 00:34:44.148 "null", 00:34:44.148 "ffdhe2048", 00:34:44.148 "ffdhe3072", 00:34:44.148 "ffdhe4096", 00:34:44.148 "ffdhe6144", 00:34:44.148 "ffdhe8192" 00:34:44.148 ] 00:34:44.148 } 00:34:44.148 }, 00:34:44.148 { 00:34:44.148 "method": "bdev_nvme_attach_controller", 00:34:44.148 "params": { 00:34:44.148 "name": "nvme0", 00:34:44.148 "trtype": "TCP", 00:34:44.148 "adrfam": "IPv4", 00:34:44.148 "traddr": "127.0.0.1", 00:34:44.148 "trsvcid": "4420", 00:34:44.148 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:44.148 "prchk_reftag": false, 00:34:44.148 "prchk_guard": false, 00:34:44.148 "ctrlr_loss_timeout_sec": 0, 00:34:44.148 "reconnect_delay_sec": 0, 00:34:44.148 "fast_io_fail_timeout_sec": 0, 00:34:44.148 "psk": "key0", 00:34:44.148 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:44.148 "hdgst": false, 00:34:44.148 "ddgst": false 00:34:44.148 } 00:34:44.148 }, 00:34:44.148 { 00:34:44.148 "method": "bdev_nvme_set_hotplug", 00:34:44.148 
"params": { 00:34:44.148 "period_us": 100000, 00:34:44.148 "enable": false 00:34:44.148 } 00:34:44.148 }, 00:34:44.148 { 00:34:44.148 "method": "bdev_wait_for_examine" 00:34:44.148 } 00:34:44.148 ] 00:34:44.148 }, 00:34:44.148 { 00:34:44.148 "subsystem": "nbd", 00:34:44.148 "config": [] 00:34:44.148 } 00:34:44.148 ] 00:34:44.148 }' 00:34:44.148 18:21:32 -- keyring/file.sh@111 -- # killprocess 3489352 00:34:44.148 18:21:32 -- common/autotest_common.sh@936 -- # '[' -z 3489352 ']' 00:34:44.148 18:21:32 -- common/autotest_common.sh@940 -- # kill -0 3489352 00:34:44.148 18:21:32 -- common/autotest_common.sh@941 -- # uname 00:34:44.148 18:21:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:34:44.148 18:21:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3489352 00:34:44.148 18:21:32 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:34:44.148 18:21:32 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:34:44.148 18:21:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3489352' 00:34:44.148 killing process with pid 3489352 00:34:44.148 18:21:32 -- common/autotest_common.sh@955 -- # kill 3489352 00:34:44.148 Received shutdown signal, test time was about 1.000000 seconds 00:34:44.148 00:34:44.148 Latency(us) 00:34:44.148 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:44.148 =================================================================================================================== 00:34:44.148 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:44.148 18:21:32 -- common/autotest_common.sh@960 -- # wait 3489352 00:34:44.407 18:21:33 -- keyring/file.sh@114 -- # bperfpid=3491209 00:34:44.407 18:21:33 -- keyring/file.sh@116 -- # waitforlisten 3491209 /var/tmp/bperf.sock 00:34:44.407 18:21:33 -- common/autotest_common.sh@817 -- # '[' -z 3491209 ']' 00:34:44.407 18:21:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:44.407 18:21:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:34:44.407 18:21:33 -- keyring/file.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:34:44.407 18:21:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:44.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:34:44.407 18:21:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:34:44.407 18:21:33 -- common/autotest_common.sh@10 -- # set +x 00:34:44.407 18:21:33 -- keyring/file.sh@112 -- # echo '{ 00:34:44.407 "subsystems": [ 00:34:44.407 { 00:34:44.407 "subsystem": "keyring", 00:34:44.407 "config": [ 00:34:44.407 { 00:34:44.407 "method": "keyring_file_add_key", 00:34:44.407 "params": { 00:34:44.407 "name": "key0", 00:34:44.407 "path": "/tmp/tmp.cMdFsXpaDi" 00:34:44.407 } 00:34:44.407 }, 00:34:44.407 { 00:34:44.407 "method": "keyring_file_add_key", 00:34:44.407 "params": { 00:34:44.407 "name": "key1", 00:34:44.407 "path": "/tmp/tmp.DYureLPpg0" 00:34:44.407 } 00:34:44.407 } 00:34:44.407 ] 00:34:44.407 }, 00:34:44.407 { 00:34:44.407 "subsystem": "iobuf", 00:34:44.407 "config": [ 00:34:44.407 { 00:34:44.407 "method": "iobuf_set_options", 00:34:44.407 "params": { 00:34:44.407 "small_pool_count": 8192, 00:34:44.407 "large_pool_count": 1024, 00:34:44.407 "small_bufsize": 8192, 00:34:44.407 "large_bufsize": 135168 00:34:44.407 } 00:34:44.407 } 00:34:44.407 ] 00:34:44.407 }, 00:34:44.407 { 00:34:44.407 "subsystem": "sock", 00:34:44.407 "config": [ 00:34:44.407 { 00:34:44.407 "method": "sock_impl_set_options", 00:34:44.407 "params": { 00:34:44.407 "impl_name": "posix", 00:34:44.407 "recv_buf_size": 2097152, 00:34:44.407 "send_buf_size": 2097152, 00:34:44.407 "enable_recv_pipe": true, 00:34:44.407 "enable_quickack": false, 00:34:44.407 "enable_placement_id": 0, 00:34:44.407 "enable_zerocopy_send_server": true, 00:34:44.407 "enable_zerocopy_send_client": false, 00:34:44.407 "zerocopy_threshold": 0, 00:34:44.407 "tls_version": 0, 00:34:44.407 "enable_ktls": false 00:34:44.407 } 00:34:44.407 }, 00:34:44.407 { 00:34:44.407 "method": "sock_impl_set_options", 00:34:44.407 "params": { 00:34:44.407 "impl_name": "ssl", 00:34:44.407 "recv_buf_size": 4096, 00:34:44.407 "send_buf_size": 4096, 00:34:44.407 "enable_recv_pipe": true, 00:34:44.407 "enable_quickack": false, 00:34:44.407 "enable_placement_id": 0, 00:34:44.407 "enable_zerocopy_send_server": true, 00:34:44.407 "enable_zerocopy_send_client": false, 00:34:44.407 "zerocopy_threshold": 0, 00:34:44.407 "tls_version": 0, 00:34:44.407 "enable_ktls": false 00:34:44.407 } 00:34:44.407 } 00:34:44.407 ] 00:34:44.407 }, 00:34:44.407 { 00:34:44.407 "subsystem": "vmd", 00:34:44.407 "config": [] 00:34:44.407 }, 00:34:44.407 { 00:34:44.407 "subsystem": "accel", 00:34:44.407 "config": [ 00:34:44.407 { 00:34:44.407 "method": "accel_set_options", 00:34:44.407 "params": { 00:34:44.407 "small_cache_size": 128, 00:34:44.407 "large_cache_size": 16, 00:34:44.407 "task_count": 2048, 00:34:44.407 "sequence_count": 2048, 00:34:44.407 "buf_count": 2048 00:34:44.407 } 00:34:44.407 } 00:34:44.407 ] 00:34:44.407 }, 00:34:44.407 { 00:34:44.407 "subsystem": "bdev", 00:34:44.407 "config": [ 00:34:44.407 { 00:34:44.407 "method": "bdev_set_options", 00:34:44.407 "params": { 00:34:44.407 "bdev_io_pool_size": 65535, 00:34:44.407 "bdev_io_cache_size": 256, 00:34:44.407 "bdev_auto_examine": true, 00:34:44.407 "iobuf_small_cache_size": 128, 00:34:44.407 "iobuf_large_cache_size": 16 00:34:44.407 } 00:34:44.407 }, 00:34:44.407 { 00:34:44.407 "method": "bdev_raid_set_options", 00:34:44.407 "params": { 00:34:44.407 "process_window_size_kb": 1024 00:34:44.407 } 00:34:44.407 }, 00:34:44.407 { 00:34:44.407 "method": "bdev_iscsi_set_options", 00:34:44.407 "params": { 00:34:44.407 "timeout_sec": 30 00:34:44.407 } 00:34:44.407 }, 00:34:44.407 { 00:34:44.407 "method": "bdev_nvme_set_options", 
00:34:44.407 "params": { 00:34:44.407 "action_on_timeout": "none", 00:34:44.407 "timeout_us": 0, 00:34:44.407 "timeout_admin_us": 0, 00:34:44.407 "keep_alive_timeout_ms": 10000, 00:34:44.407 "arbitration_burst": 0, 00:34:44.407 "low_priority_weight": 0, 00:34:44.407 "medium_priority_weight": 0, 00:34:44.407 "high_priority_weight": 0, 00:34:44.407 "nvme_adminq_poll_period_us": 10000, 00:34:44.407 "nvme_ioq_poll_period_us": 0, 00:34:44.407 "io_queue_requests": 512, 00:34:44.407 "delay_cmd_submit": true, 00:34:44.407 "transport_retry_count": 4, 00:34:44.407 "bdev_retry_count": 3, 00:34:44.407 "transport_ack_timeout": 0, 00:34:44.407 "ctrlr_loss_timeout_sec": 0, 00:34:44.407 "reconnect_delay_sec": 0, 00:34:44.407 "fast_io_fail_timeout_sec": 0, 00:34:44.407 "disable_auto_failback": false, 00:34:44.407 "generate_uuids": false, 00:34:44.407 "transport_tos": 0, 00:34:44.407 "nvme_error_stat": false, 00:34:44.407 "rdma_srq_size": 0, 00:34:44.407 "io_path_stat": false, 00:34:44.407 "allow_accel_sequence": false, 00:34:44.407 "rdma_max_cq_size": 0, 00:34:44.407 "rdma_cm_event_timeout_ms": 0, 00:34:44.407 "dhchap_digests": [ 00:34:44.407 "sha256", 00:34:44.407 "sha384", 00:34:44.407 "sha512" 00:34:44.407 ], 00:34:44.407 "dhchap_dhgroups": [ 00:34:44.407 "null", 00:34:44.407 "ffdhe2048", 00:34:44.407 "ffdhe3072", 00:34:44.407 "ffdhe4096", 00:34:44.407 "ffdhe6144", 00:34:44.407 "ffdhe8192" 00:34:44.407 ] 00:34:44.407 } 00:34:44.407 }, 00:34:44.407 { 00:34:44.407 "method": "bdev_nvme_attach_controller", 00:34:44.407 "params": { 00:34:44.407 "name": "nvme0", 00:34:44.407 "trtype": "TCP", 00:34:44.407 "adrfam": "IPv4", 00:34:44.407 "traddr": "127.0.0.1", 00:34:44.407 "trsvcid": "4420", 00:34:44.407 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:44.407 "prchk_reftag": false, 00:34:44.407 "prchk_guard": false, 00:34:44.407 "ctrlr_loss_timeout_sec": 0, 00:34:44.407 "reconnect_delay_sec": 0, 00:34:44.407 "fast_io_fail_timeout_sec": 0, 00:34:44.407 "psk": "key0", 00:34:44.407 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:44.407 "hdgst": false, 00:34:44.407 "ddgst": false 00:34:44.407 } 00:34:44.407 }, 00:34:44.407 { 00:34:44.407 "method": "bdev_nvme_set_hotplug", 00:34:44.407 "params": { 00:34:44.407 "period_us": 100000, 00:34:44.407 "enable": false 00:34:44.407 } 00:34:44.407 }, 00:34:44.407 { 00:34:44.407 "method": "bdev_wait_for_examine" 00:34:44.407 } 00:34:44.407 ] 00:34:44.407 }, 00:34:44.407 { 00:34:44.408 "subsystem": "nbd", 00:34:44.408 "config": [] 00:34:44.408 } 00:34:44.408 ] 00:34:44.408 }' 00:34:44.408 [2024-04-15 18:21:33.206798] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
00:34:44.408 [2024-04-15 18:21:33.206899] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3491209 ] 00:34:44.408 EAL: No free 2048 kB hugepages reported on node 1 00:34:44.408 [2024-04-15 18:21:33.277137] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:44.666 [2024-04-15 18:21:33.374406] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:44.666 [2024-04-15 18:21:33.556010] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:34:45.599 18:21:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:34:45.599 18:21:34 -- common/autotest_common.sh@850 -- # return 0 00:34:45.599 18:21:34 -- keyring/file.sh@117 -- # bperf_cmd keyring_get_keys 00:34:45.599 18:21:34 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:45.599 18:21:34 -- keyring/file.sh@117 -- # jq length 00:34:45.861 18:21:34 -- keyring/file.sh@117 -- # (( 2 == 2 )) 00:34:45.861 18:21:34 -- keyring/file.sh@118 -- # get_refcnt key0 00:34:45.861 18:21:34 -- keyring/common.sh@12 -- # get_key key0 00:34:45.861 18:21:34 -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:45.861 18:21:34 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:45.861 18:21:34 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:45.861 18:21:34 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:46.153 18:21:34 -- keyring/file.sh@118 -- # (( 2 == 2 )) 00:34:46.153 18:21:34 -- keyring/file.sh@119 -- # get_refcnt key1 00:34:46.153 18:21:34 -- keyring/common.sh@12 -- # get_key key1 00:34:46.153 18:21:34 -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:46.153 18:21:34 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:46.153 18:21:34 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:46.153 18:21:34 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:34:46.719 18:21:35 -- keyring/file.sh@119 -- # (( 1 == 1 )) 00:34:46.719 18:21:35 -- keyring/file.sh@120 -- # bperf_cmd bdev_nvme_get_controllers 00:34:46.719 18:21:35 -- keyring/file.sh@120 -- # jq -r '.[].name' 00:34:46.719 18:21:35 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:34:46.977 18:21:35 -- keyring/file.sh@120 -- # [[ nvme0 == nvme0 ]] 00:34:46.977 18:21:35 -- keyring/file.sh@1 -- # cleanup 00:34:46.977 18:21:35 -- keyring/file.sh@19 -- # rm -f /tmp/tmp.cMdFsXpaDi /tmp/tmp.DYureLPpg0 00:34:46.977 18:21:35 -- keyring/file.sh@20 -- # killprocess 3491209 00:34:46.977 18:21:35 -- common/autotest_common.sh@936 -- # '[' -z 3491209 ']' 00:34:46.977 18:21:35 -- common/autotest_common.sh@940 -- # kill -0 3491209 00:34:46.977 18:21:35 -- common/autotest_common.sh@941 -- # uname 00:34:46.977 18:21:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:34:46.977 18:21:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3491209 00:34:47.235 18:21:35 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:34:47.235 18:21:35 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:34:47.235 18:21:35 -- 
common/autotest_common.sh@954 -- # echo 'killing process with pid 3491209'
killing process with pid 3491209
00:34:47.235 18:21:35 -- common/autotest_common.sh@955 -- # kill 3491209
00:34:47.235 Received shutdown signal, test time was about 1.000000 seconds
00:34:47.235
00:34:47.235 Latency(us)
00:34:47.236 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:47.236 ===================================================================================================================
00:34:47.236 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:34:47.236 18:21:35 -- common/autotest_common.sh@960 -- # wait 3491209
00:34:47.236 18:21:36 -- keyring/file.sh@21 -- # killprocess 3489345
00:34:47.236 18:21:36 -- common/autotest_common.sh@936 -- # '[' -z 3489345 ']'
00:34:47.236 18:21:36 -- common/autotest_common.sh@940 -- # kill -0 3489345
00:34:47.236 18:21:36 -- common/autotest_common.sh@941 -- # uname
00:34:47.236 18:21:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:34:47.493 18:21:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3489345
00:34:47.493 18:21:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:34:47.493 18:21:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:34:47.493 18:21:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3489345'
killing process with pid 3489345
00:34:47.493 18:21:36 -- common/autotest_common.sh@955 -- # kill 3489345
00:34:47.493 [2024-04-15 18:21:36.230481] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times
00:34:47.493 18:21:36 -- common/autotest_common.sh@960 -- # wait 3489345
00:34:47.752
00:34:47.752 real 0m19.358s
00:34:47.752 user 0m49.939s
00:34:47.752 sys 0m4.063s
00:34:47.752 18:21:36 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:34:47.752 18:21:36 -- common/autotest_common.sh@10 -- # set +x
00:34:47.752 ************************************
00:34:47.752 END TEST keyring_file
00:34:47.752 ************************************
00:34:47.752 18:21:36 -- spdk/autotest.sh@294 -- # [[ n == y ]]
00:34:47.752 18:21:36 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']'
00:34:47.752 18:21:36 -- spdk/autotest.sh@310 -- # '[' 0 -eq 1 ']'
00:34:47.752 18:21:36 -- spdk/autotest.sh@314 -- # '[' 0 -eq 1 ']'
00:34:47.752 18:21:36 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']'
00:34:47.752 18:21:36 -- spdk/autotest.sh@328 -- # '[' 0 -eq 1 ']'
00:34:47.752 18:21:36 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']'
00:34:47.752 18:21:36 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']'
00:34:47.752 18:21:36 -- spdk/autotest.sh@341 -- # '[' 0 -eq 1 ']'
00:34:47.752 18:21:36 -- spdk/autotest.sh@345 -- # '[' 0 -eq 1 ']'
00:34:47.752 18:21:36 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:34:47.752 18:21:36 -- spdk/autotest.sh@354 -- # '[' 0 -eq 1 ']'
00:34:47.752 18:21:36 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]]
00:34:47.752 18:21:36 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]]
00:34:47.752 18:21:36 -- spdk/autotest.sh@369 -- # [[ 0 -eq 1 ]]
00:34:47.752 18:21:36 -- spdk/autotest.sh@373 -- # [[ 0 -eq 1 ]]
00:34:47.752 18:21:36 -- spdk/autotest.sh@378 -- # trap - SIGINT SIGTERM EXIT
00:34:47.752 18:21:36 -- spdk/autotest.sh@380 -- # timing_enter post_cleanup
00:34:47.752 18:21:36 -- common/autotest_common.sh@710 -- # xtrace_disable
00:34:47.752 18:21:36 -- common/autotest_common.sh@10 -- # set +x
00:34:47.752 18:21:36 -- spdk/autotest.sh@381 -- # 
autotest_cleanup 00:34:47.752 18:21:36 -- common/autotest_common.sh@1378 -- # local autotest_es=0 00:34:47.752 18:21:36 -- common/autotest_common.sh@1379 -- # xtrace_disable 00:34:47.752 18:21:36 -- common/autotest_common.sh@10 -- # set +x 00:34:50.283 INFO: APP EXITING 00:34:50.283 INFO: killing all VMs 00:34:50.283 INFO: killing vhost app 00:34:50.283 INFO: EXIT DONE 00:34:51.656 0000:82:00.0 (8086 0a54): Already using the nvme driver 00:34:51.656 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:34:51.656 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:34:51.656 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:34:51.656 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:34:51.656 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:34:51.656 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:34:51.656 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:34:51.656 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:34:51.656 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:34:51.656 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:34:51.656 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:34:51.656 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:34:51.656 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:34:51.656 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:34:51.656 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:34:51.913 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:34:53.288 Cleaning 00:34:53.288 Removing: /var/run/dpdk/spdk0/config 00:34:53.288 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:34:53.288 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:34:53.288 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:34:53.288 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:34:53.288 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:34:53.288 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:34:53.288 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:34:53.288 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:34:53.288 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:34:53.288 Removing: /var/run/dpdk/spdk0/hugepage_info 00:34:53.288 Removing: /var/run/dpdk/spdk1/config 00:34:53.288 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:34:53.288 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:34:53.288 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:34:53.288 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:34:53.288 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:34:53.288 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:34:53.288 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:34:53.288 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:34:53.288 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:34:53.288 Removing: /var/run/dpdk/spdk1/hugepage_info 00:34:53.288 Removing: /var/run/dpdk/spdk1/mp_socket 00:34:53.288 Removing: /var/run/dpdk/spdk2/config 00:34:53.288 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:34:53.288 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:34:53.288 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:34:53.288 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:34:53.288 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:34:53.288 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 
00:34:53.288 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2
00:34:53.288 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3
00:34:53.288 Removing: /var/run/dpdk/spdk2/fbarray_memzone
00:34:53.288 Removing: /var/run/dpdk/spdk2/hugepage_info
00:34:53.288 Removing: /var/run/dpdk/spdk3/config
00:34:53.288 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:34:53.288 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:34:53.288 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:34:53.288 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:34:53.546 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0
00:34:53.546 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1
00:34:53.546 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2
00:34:53.546 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3
00:34:53.546 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:34:53.546 Removing: /var/run/dpdk/spdk3/hugepage_info
00:34:53.546 Removing: /var/run/dpdk/spdk4/config
00:34:53.546 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:34:53.546 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:34:53.546 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:34:53.546 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:34:53.546 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0
00:34:53.546 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1
00:34:53.546 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2
00:34:53.546 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3
00:34:53.546 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:34:53.546 Removing: /var/run/dpdk/spdk4/hugepage_info
00:34:53.546 Removing: /dev/shm/bdev_svc_trace.1
00:34:53.546 Removing: /dev/shm/nvmf_trace.0
00:34:53.546 Removing: /dev/shm/spdk_tgt_trace.pid3195355
00:34:53.546 Removing: /var/run/dpdk/spdk0
00:34:53.546 Removing: /var/run/dpdk/spdk1
00:34:53.546 Removing: /var/run/dpdk/spdk2
00:34:53.546 Removing: /var/run/dpdk/spdk3
00:34:53.546 Removing: /var/run/dpdk/spdk4
00:34:53.546 Removing: /var/run/dpdk/spdk_pid3193639
00:34:53.546 Removing: /var/run/dpdk/spdk_pid3194395
00:34:53.546 Removing: /var/run/dpdk/spdk_pid3195355
00:34:53.546 Removing: /var/run/dpdk/spdk_pid3195965
00:34:53.546 Removing: /var/run/dpdk/spdk_pid3196662
00:34:53.546 Removing: /var/run/dpdk/spdk_pid3196802
00:34:53.546 Removing: /var/run/dpdk/spdk_pid3197628
00:34:53.546 Removing: /var/run/dpdk/spdk_pid3199208
00:34:53.546 Removing: /var/run/dpdk/spdk_pid3200132
00:34:53.546 Removing: /var/run/dpdk/spdk_pid3200453
00:34:53.546 Removing: /var/run/dpdk/spdk_pid3200647
00:34:53.546 Removing: /var/run/dpdk/spdk_pid3200981
00:34:53.546 Removing: /var/run/dpdk/spdk_pid3201189
00:34:53.546 Removing: /var/run/dpdk/spdk_pid3201357
00:34:53.546 Removing: /var/run/dpdk/spdk_pid3201518
00:34:53.546 Removing: /var/run/dpdk/spdk_pid3201831
00:34:53.546 Removing: /var/run/dpdk/spdk_pid3202381
00:34:53.546 Removing: /var/run/dpdk/spdk_pid3205192
00:34:53.546 Removing: /var/run/dpdk/spdk_pid3205360
00:34:53.546 Removing: /var/run/dpdk/spdk_pid3205536
00:34:53.546 Removing: /var/run/dpdk/spdk_pid3205586
00:34:53.546 Removing: /var/run/dpdk/spdk_pid3205976
00:34:53.546 Removing: /var/run/dpdk/spdk_pid3206113
00:34:53.546 Removing: /var/run/dpdk/spdk_pid3206549
00:34:53.546 Removing: /var/run/dpdk/spdk_pid3206561
00:34:53.546 Removing: /var/run/dpdk/spdk_pid3206861
00:34:53.546 Removing: /var/run/dpdk/spdk_pid3207017
00:34:53.546 Removing: /var/run/dpdk/spdk_pid3207255
00:34:53.546 Removing: /var/run/dpdk/spdk_pid3207277
00:34:53.546 Removing: /var/run/dpdk/spdk_pid3207799
00:34:53.546 Removing: /var/run/dpdk/spdk_pid3207959
00:34:53.546 Removing: /var/run/dpdk/spdk_pid3208523
00:34:53.546 Removing: /var/run/dpdk/spdk_pid3208962
00:34:53.546 Removing: /var/run/dpdk/spdk_pid3209005
00:34:53.546 Removing: /var/run/dpdk/spdk_pid3209209
00:34:53.546 Removing: /var/run/dpdk/spdk_pid3209391
00:34:53.546 Removing: /var/run/dpdk/spdk_pid3209657
00:34:53.546 Removing: /var/run/dpdk/spdk_pid3209823
00:34:53.546 Removing: /var/run/dpdk/spdk_pid3210023
00:34:53.546 Removing: /var/run/dpdk/spdk_pid3210266
00:34:53.546 Removing: /var/run/dpdk/spdk_pid3210423
00:34:53.546 Removing: /var/run/dpdk/spdk_pid3210661
00:34:53.546 Removing: /var/run/dpdk/spdk_pid3210872
00:34:53.546 Removing: /var/run/dpdk/spdk_pid3211037
00:34:53.546 Removing: /var/run/dpdk/spdk_pid3211316
00:34:53.546 Removing: /var/run/dpdk/spdk_pid3211482
00:34:53.546 Removing: /var/run/dpdk/spdk_pid3211649
00:34:53.546 Removing: /var/run/dpdk/spdk_pid3211930
00:34:53.546 Removing: /var/run/dpdk/spdk_pid3212094
00:34:53.546 Removing: /var/run/dpdk/spdk_pid3212255
00:34:53.546 Removing: /var/run/dpdk/spdk_pid3212538
00:34:53.546 Removing: /var/run/dpdk/spdk_pid3212703
00:34:53.546 Removing: /var/run/dpdk/spdk_pid3212954
00:34:53.546 Removing: /var/run/dpdk/spdk_pid3213150
00:34:53.546 Removing: /var/run/dpdk/spdk_pid3213317
00:34:53.546 Removing: /var/run/dpdk/spdk_pid3213513
00:34:53.546 Removing: /var/run/dpdk/spdk_pid3213869
00:34:53.546 Removing: /var/run/dpdk/spdk_pid3216076
00:34:53.546 Removing: /var/run/dpdk/spdk_pid3269942
00:34:53.546 Removing: /var/run/dpdk/spdk_pid3272708
00:34:53.546 Removing: /var/run/dpdk/spdk_pid3278555
00:34:53.805 Removing: /var/run/dpdk/spdk_pid3281872
00:34:53.805 Removing: /var/run/dpdk/spdk_pid3284363
00:34:53.805 Removing: /var/run/dpdk/spdk_pid3284764
00:34:53.805 Removing: /var/run/dpdk/spdk_pid3292332
00:34:53.805 Removing: /var/run/dpdk/spdk_pid3292339
00:34:53.805 Removing: /var/run/dpdk/spdk_pid3292983
00:34:53.805 Removing: /var/run/dpdk/spdk_pid3293651
00:34:53.805 Removing: /var/run/dpdk/spdk_pid3294184
00:34:53.805 Removing: /var/run/dpdk/spdk_pid3294581
00:34:53.805 Removing: /var/run/dpdk/spdk_pid3294592
00:34:53.805 Removing: /var/run/dpdk/spdk_pid3294844
00:34:53.805 Removing: /var/run/dpdk/spdk_pid3294988
00:34:53.805 Removing: /var/run/dpdk/spdk_pid3294990
00:34:53.805 Removing: /var/run/dpdk/spdk_pid3295642
00:34:53.805 Removing: /var/run/dpdk/spdk_pid3296215
00:34:53.805 Removing: /var/run/dpdk/spdk_pid3296943
00:34:53.805 Removing: /var/run/dpdk/spdk_pid3297332
00:34:53.805 Removing: /var/run/dpdk/spdk_pid3297340
00:34:53.805 Removing: /var/run/dpdk/spdk_pid3297752
00:34:53.805 Removing: /var/run/dpdk/spdk_pid3299125
00:34:53.805 Removing: /var/run/dpdk/spdk_pid3299849
00:34:53.805 Removing: /var/run/dpdk/spdk_pid3305246
00:34:53.805 Removing: /var/run/dpdk/spdk_pid3305424
00:34:53.805 Removing: /var/run/dpdk/spdk_pid3308326
00:34:53.805 Removing: /var/run/dpdk/spdk_pid3312182
00:34:53.805 Removing: /var/run/dpdk/spdk_pid3314359
00:34:53.805 Removing: /var/run/dpdk/spdk_pid3320921
00:34:53.805 Removing: /var/run/dpdk/spdk_pid3326282
00:34:53.805 Removing: /var/run/dpdk/spdk_pid3327472
00:34:53.805 Removing: /var/run/dpdk/spdk_pid3328143
00:34:53.805 Removing: /var/run/dpdk/spdk_pid3339398
00:34:53.805 Removing: /var/run/dpdk/spdk_pid3341640
00:34:53.805 Removing: /var/run/dpdk/spdk_pid3344695
00:34:53.805 Removing: /var/run/dpdk/spdk_pid3345751
00:34:53.805 Removing: /var/run/dpdk/spdk_pid3347076
00:34:53.805 Removing: /var/run/dpdk/spdk_pid3347216
00:34:53.805 Removing: /var/run/dpdk/spdk_pid3347361
00:34:53.805 Removing: /var/run/dpdk/spdk_pid3347497
00:34:53.805 Removing: /var/run/dpdk/spdk_pid3348068
00:34:53.805 Removing: /var/run/dpdk/spdk_pid3349384
00:34:53.805 Removing: /var/run/dpdk/spdk_pid3350394
00:34:53.805 Removing: /var/run/dpdk/spdk_pid3350825
00:34:53.805 Removing: /var/run/dpdk/spdk_pid3352566
00:34:53.805 Removing: /var/run/dpdk/spdk_pid3352995
00:34:53.805 Removing: /var/run/dpdk/spdk_pid3353504
00:34:53.805 Removing: /var/run/dpdk/spdk_pid3355962
00:34:53.805 Removing: /var/run/dpdk/spdk_pid3359367
00:34:53.805 Removing: /var/run/dpdk/spdk_pid3363519
00:34:53.805 Removing: /var/run/dpdk/spdk_pid3386474
00:34:53.805 Removing: /var/run/dpdk/spdk_pid3389206
00:34:53.805 Removing: /var/run/dpdk/spdk_pid3393627
00:34:53.805 Removing: /var/run/dpdk/spdk_pid3394711
00:34:53.805 Removing: /var/run/dpdk/spdk_pid3395795
00:34:53.805 Removing: /var/run/dpdk/spdk_pid3398366
00:34:53.805 Removing: /var/run/dpdk/spdk_pid3400750
00:34:53.805 Removing: /var/run/dpdk/spdk_pid3405137
00:34:53.805 Removing: /var/run/dpdk/spdk_pid3405140
00:34:53.805 Removing: /var/run/dpdk/spdk_pid3408071
00:34:53.805 Removing: /var/run/dpdk/spdk_pid3408316
00:34:53.805 Removing: /var/run/dpdk/spdk_pid3408448
00:34:53.805 Removing: /var/run/dpdk/spdk_pid3408715
00:34:53.805 Removing: /var/run/dpdk/spdk_pid3408730
00:34:53.805 Removing: /var/run/dpdk/spdk_pid3409922
00:34:53.805 Removing: /var/run/dpdk/spdk_pid3411094
00:34:53.805 Removing: /var/run/dpdk/spdk_pid3412269
00:34:53.805 Removing: /var/run/dpdk/spdk_pid3413440
00:34:53.805 Removing: /var/run/dpdk/spdk_pid3414604
00:34:53.805 Removing: /var/run/dpdk/spdk_pid3415777
00:34:53.805 Removing: /var/run/dpdk/spdk_pid3419472
00:34:53.805 Removing: /var/run/dpdk/spdk_pid3419807
00:34:53.805 Removing: /var/run/dpdk/spdk_pid3421046
00:34:53.805 Removing: /var/run/dpdk/spdk_pid3422028
00:34:53.805 Removing: /var/run/dpdk/spdk_pid3425619
00:34:53.805 Removing: /var/run/dpdk/spdk_pid3427601
00:34:53.805 Removing: /var/run/dpdk/spdk_pid3431161
00:34:53.805 Removing: /var/run/dpdk/spdk_pid3434358
00:34:53.805 Removing: /var/run/dpdk/spdk_pid3438836
00:34:53.805 Removing: /var/run/dpdk/spdk_pid3438838
00:34:53.805 Removing: /var/run/dpdk/spdk_pid3451469
00:34:53.805 Removing: /var/run/dpdk/spdk_pid3452007
00:34:53.805 Removing: /var/run/dpdk/spdk_pid3452539
00:34:53.805 Removing: /var/run/dpdk/spdk_pid3452951
00:34:53.805 Removing: /var/run/dpdk/spdk_pid3453660
00:34:53.805 Removing: /var/run/dpdk/spdk_pid3454210
00:34:53.805 Removing: /var/run/dpdk/spdk_pid3454720
00:34:53.805 Removing: /var/run/dpdk/spdk_pid3455735
00:34:53.805 Removing: /var/run/dpdk/spdk_pid3458277
00:34:53.805 Removing: /var/run/dpdk/spdk_pid3458464
00:34:53.805 Removing: /var/run/dpdk/spdk_pid3462231
00:34:53.805 Removing: /var/run/dpdk/spdk_pid3462407
00:34:53.805 Removing: /var/run/dpdk/spdk_pid3464045
00:34:53.805 Removing: /var/run/dpdk/spdk_pid3469112
00:34:53.805 Removing: /var/run/dpdk/spdk_pid3469183
00:34:53.805 Removing: /var/run/dpdk/spdk_pid3472109
00:34:53.805 Removing: /var/run/dpdk/spdk_pid3473507
00:34:54.064 Removing: /var/run/dpdk/spdk_pid3474920
00:34:54.064 Removing: /var/run/dpdk/spdk_pid3475657
00:34:54.064 Removing: /var/run/dpdk/spdk_pid3477068
00:34:54.064 Removing: /var/run/dpdk/spdk_pid3477948
00:34:54.064 Removing: /var/run/dpdk/spdk_pid3483280
00:34:54.064 Removing: /var/run/dpdk/spdk_pid3483645
00:34:54.064 Removing: /var/run/dpdk/spdk_pid3484038
00:34:54.064 Removing: /var/run/dpdk/spdk_pid3485652
00:34:54.064 Removing: /var/run/dpdk/spdk_pid3485990
00:34:54.064 Removing: /var/run/dpdk/spdk_pid3486384
00:34:54.064 Removing: /var/run/dpdk/spdk_pid3489345
00:34:54.064 Removing: /var/run/dpdk/spdk_pid3489352
00:34:54.064 Removing: /var/run/dpdk/spdk_pid3491209
00:34:54.064 Clean
00:34:54.064 18:21:42 -- common/autotest_common.sh@1437 -- # return 0
00:34:54.064 18:21:42 -- spdk/autotest.sh@382 -- # timing_exit post_cleanup
00:34:54.064 18:21:42 -- common/autotest_common.sh@716 -- # xtrace_disable
00:34:54.064 18:21:42 -- common/autotest_common.sh@10 -- # set +x
00:34:54.064 18:21:43 -- spdk/autotest.sh@384 -- # timing_exit autotest
00:34:54.064 18:21:43 -- common/autotest_common.sh@716 -- # xtrace_disable
00:34:54.064 18:21:43 -- common/autotest_common.sh@10 -- # set +x
00:34:54.322 18:21:43 -- spdk/autotest.sh@385 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:34:54.322 18:21:43 -- spdk/autotest.sh@387 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:34:54.322 18:21:43 -- spdk/autotest.sh@387 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:34:54.322 18:21:43 -- spdk/autotest.sh@389 -- # hash lcov
00:34:54.322 18:21:43 -- spdk/autotest.sh@389 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:34:54.322 18:21:43 -- spdk/autotest.sh@391 -- # hostname
00:34:54.322 18:21:43 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-08 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:34:54.580 geninfo: WARNING: invalid characters removed from testname!
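The hostname-tagged capture above writes cov_test.info; the lcov passes that follow merge it onto the pre-test baseline and then prune vendored and helper-tool sources. A condensed bash sketch of that capture/merge/filter sequence; OUT and LCOV_OPTS are shorthand introduced here (not names from autotest.sh), and the flag set is abbreviated from the full --rc ... --no-external list in the log:

  # Condensed coverage post-processing mirroring the autotest.sh@391-@398 steps.
  OUT=output    # stands in for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
  LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q"

  # capture counters for this test run, tagged with the hostname (spdk-gp-08 above)
  lcov $LCOV_OPTS -c -d spdk -t "$(hostname)" -o "$OUT/cov_test.info"
  # add the pre-test baseline and the test-run capture into one tracefile
  lcov $LCOV_OPTS -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"
  # remove third-party and helper-tool code from the combined report
  for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
      lcov $LCOV_OPTS -r "$OUT/cov_total.info" "$pattern" -o "$OUT/cov_total.info"
  done
  rm -f "$OUT/cov_base.info" "$OUT/cov_test.info"   # intermediate files, as in @398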
00:35:33.280 18:22:18 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:35:35.212 18:22:23 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:35:38.494 18:22:27 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:35:43.756 18:22:31 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:35:47.036 18:22:35 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:35:51.219 18:22:39 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:35:54.497 18:22:43 -- spdk/autotest.sh@398 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:35:54.755 18:22:43 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:35:54.755 18:22:43 -- scripts/common.sh@502 -- $ [[ -e /bin/wpdk_common.sh ]]
00:35:54.755 18:22:43 -- scripts/common.sh@510 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:35:54.755 18:22:43 -- scripts/common.sh@511 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:35:54.755 18:22:43 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:35:54.755 18:22:43 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:35:54.755 18:22:43 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:35:54.756 18:22:43 -- paths/export.sh@5 -- $ export PATH
00:35:54.756 18:22:43 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:35:54.756 18:22:43 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:35:54.756 18:22:43 -- common/autobuild_common.sh@435 -- $ date +%s
00:35:54.756 18:22:43 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1713198163.XXXXXX
00:35:54.756 18:22:43 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1713198163.tiEqzY
00:35:54.756 18:22:43 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]]
00:35:54.756 18:22:43 -- common/autobuild_common.sh@441 -- $ '[' -n v22.11.4 ']'
00:35:54.756 18:22:43 -- common/autobuild_common.sh@442 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:35:54.756 18:22:43 -- common/autobuild_common.sh@442 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk'
00:35:54.756 18:22:43 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:35:54.756 18:22:43 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:35:54.756 18:22:43 -- common/autobuild_common.sh@451 -- $ get_config_params
00:35:54.756 18:22:43 -- common/autotest_common.sh@385 -- $ xtrace_disable
00:35:54.756 18:22:43 -- common/autotest_common.sh@10 -- $ set +x
00:35:54.756 18:22:43 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build'
00:35:54.756 18:22:43 -- common/autobuild_common.sh@453 -- $ start_monitor_resources
00:35:54.756 18:22:43 -- pm/common@17 -- $ local monitor
00:35:54.756 18:22:43 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:35:54.756 18:22:43 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=3501928
00:35:54.756 18:22:43 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:35:54.756 18:22:43 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=3501930
00:35:54.756 18:22:43 -- pm/common@21 -- $ date +%s
00:35:54.756 18:22:43 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:35:54.756 18:22:43 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=3501932
00:35:54.756 18:22:43 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:35:54.756 18:22:43 -- pm/common@21 -- $ date +%s
00:35:54.756 18:22:43 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=3501935
00:35:54.756 18:22:43 -- pm/common@26 -- $ sleep 1
00:35:54.756 18:22:43 -- pm/common@21 -- $ date +%s
00:35:54.756 18:22:43 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1713198163
00:35:54.756 18:22:43 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1713198163
00:35:54.756 18:22:43 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1713198163
00:35:54.756 18:22:43 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1713198163
00:35:54.756 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1713198163_collect-bmc-pm.bmc.pm.log
00:35:54.756 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1713198163_collect-cpu-temp.pm.log
00:35:54.756 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1713198163_collect-vmstat.pm.log
00:35:54.756 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1713198163_collect-cpu-load.pm.log
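The pm/common trace above records one pid per collector and backgrounds the four monitors; the teardown trace below walks a .pid file per monitor under output/power and TERMs whatever it names. A hedged bash sketch of that start/stop pattern: the collector script names, pidfile paths, and kill -TERM flow come from the log, while the pidfile-writing detail inside pm/common is an assumption:

  # Start/stop pattern for the power/perf monitors, reconstructed from the trace.
  PM_OUT=output/power
  MONITOR_RESOURCES=(collect-cpu-load collect-vmstat collect-cpu-temp collect-bmc-pm)

  start_monitor_resources() {
      local monitor suffix
      suffix=monitor.autopackage.sh.$(date +%s)          # e.g. ...sh.1713198163 above
      for monitor in "${MONITOR_RESOURCES[@]}"; do
          sudo -E "scripts/perf/pm/$monitor" -d "$PM_OUT" -l -p "$suffix" &
          echo $! > "$PM_OUT/$monitor.pid"               # assumed: pm/common records the pid
      done
  }

  stop_monitor_resources() {                             # invoked via 'trap ... EXIT' below
      local monitor pid
      for monitor in "${MONITOR_RESOURCES[@]}"; do
          [[ -e $PM_OUT/$monitor.pid ]] || continue      # pm/common@44
          pid=$(<"$PM_OUT/$monitor.pid")                 # pm/common@45
          sudo kill -TERM "$pid" || true                 # pm/common@52
      done
  }
  trap stop_monitor_resources EXIT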
00:35:55.688 18:22:44 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT
00:35:55.688 18:22:44 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48
00:35:55.688 18:22:44 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:35:55.688 18:22:44 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:35:55.688 18:22:44 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]]
00:35:55.688 18:22:44 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:35:55.688 18:22:44 -- spdk/autopackage.sh@19 -- $ timing_finish
00:35:55.688 18:22:44 -- common/autotest_common.sh@722 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:35:55.688 18:22:44 -- common/autotest_common.sh@723 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:35:55.688 18:22:44 -- common/autotest_common.sh@725 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:35:55.688 18:22:44 -- spdk/autopackage.sh@20 -- $ exit 0
00:35:55.688 18:22:44 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:35:55.688 18:22:44 -- pm/common@30 -- $ signal_monitor_resources TERM
00:35:55.688 18:22:44 -- pm/common@41 -- $ local monitor pid pids signal=TERM
00:35:55.688 18:22:44 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:35:55.688 18:22:44 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:35:55.688 18:22:44 -- pm/common@45 -- $ pid=3501958
00:35:55.688 18:22:44 -- pm/common@52 -- $ sudo kill -TERM 3501958
00:35:55.688 18:22:44 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:35:55.688 18:22:44 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:35:55.688 18:22:44 -- pm/common@45 -- $ pid=3501960
00:35:55.688 18:22:44 -- pm/common@52 -- $ sudo kill -TERM 3501960
00:35:55.946 18:22:44 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:35:55.946 18:22:44 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:35:55.946 18:22:44 -- pm/common@45 -- $ pid=3501955
00:35:55.946 18:22:44 -- pm/common@52 -- $ sudo kill -TERM 3501955
00:35:55.946 18:22:44 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:35:55.946 18:22:44 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:35:55.946 18:22:44 -- pm/common@45 -- $ pid=3501956
00:35:55.946 18:22:44 -- pm/common@52 -- $ sudo kill -TERM 3501956
00:35:55.946 + [[ -n 3083615 ]]
00:35:55.946 + sudo kill 3083615
00:35:55.961 [Pipeline] }
00:35:55.979 [Pipeline] // stage
00:35:55.984 [Pipeline] }
00:35:56.004 [Pipeline] // timeout
00:35:56.010 [Pipeline] }
00:35:56.023 [Pipeline] // catchError
00:35:56.028 [Pipeline] }
00:35:56.044 [Pipeline] // wrap
00:35:56.049 [Pipeline] }
00:35:56.065 [Pipeline] // catchError
00:35:56.074 [Pipeline] stage
00:35:56.076 [Pipeline] { (Epilogue)
00:35:56.090 [Pipeline] catchError
00:35:56.092 [Pipeline] {
00:35:56.107 [Pipeline] echo
00:35:56.108 Cleanup processes
00:35:56.113 [Pipeline] sh
00:35:56.444 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:35:56.444 3502083 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache
00:35:56.444 3502218 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:35:56.456 [Pipeline] sh
00:35:56.733 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:35:56.733 ++ grep -v 'sudo pgrep'
00:35:56.733 ++ awk '{print $1}'
00:35:56.733 + sudo kill -9 3502083
00:35:56.746 [Pipeline] sh
00:35:57.025 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:36:09.229 [Pipeline] sh
00:36:09.507 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:36:09.765 Artifacts sizes are good
00:36:09.781 [Pipeline] archiveArtifacts
00:36:09.788 Archiving artifacts
00:36:10.207 [Pipeline] sh
00:36:10.494 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:36:11.077 [Pipeline] cleanWs
00:36:11.086 [WS-CLEANUP] Deleting project workspace...
00:36:11.086 [WS-CLEANUP] Deferred wipeout is used...
00:36:11.093 [WS-CLEANUP] done
00:36:11.095 [Pipeline] }
00:36:11.114 [Pipeline] // catchError
00:36:11.126 [Pipeline] sh
00:36:11.408 + logger -p user.info -t JENKINS-CI
00:36:11.418 [Pipeline] }
00:36:11.433 [Pipeline] // stage
00:36:11.439 [Pipeline] }
00:36:11.456 [Pipeline] // node
00:36:11.462 [Pipeline] End of Pipeline
00:36:11.493 Finished: SUCCESS